Recent runs || View in Spyglass

Result   | FAILURE
Tests    | 0 failed / 0 succeeded
Started  |
Elapsed  | 53m33s
Revision | master
... skipping 209 lines ...
+ CHANNELS=/tmp/channels.oOjSqetrs
+ kubetest2 kops -v=2 --cloud-provider=aws --cluster-name=e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kops-root=/home/prow/go/src/k8s.io/kops --admin-access= --env=KOPS_FEATURE_FLAGS=SpecOverrideFlag --down --kops-binary-path=/tmp/kops.VI0XMpFMl
I0623 20:08:03.141688 6145 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0623 20:08:03.142471 6145 app.go:61] RunDir for this run: "/logs/artifacts/f01f2595-f32f-11ec-9e31-9224b4edca5e"
I0623 20:08:03.267489 6145 app.go:120] ID for this run: "f01f2595-f32f-11ec-9e31-9224b4edca5e"
I0623 20:08:03.303284 6145 dumplogs.go:45] /tmp/kops.VI0XMpFMl toolbox dump --name e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-143745cea3-c83fe.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W0623 20:08:03.807309 6145 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0623 20:08:03.807362 6145 down.go:48] /tmp/kops.VI0XMpFMl delete cluster --name e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --yes
I0623 20:08:03.828787 6167 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
I0623 20:08:03.828888 6167 featureflag.go:162] FeatureFlag "AlphaAllowGCE"=true
I0623 20:08:03.828893 6167 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-143745cea3-c83fe.test-cncf-aws.k8s.io" not found
Error: exit status 1
+ echo 'kubetest2 down failed'
kubetest2 down failed
+ [[ v == \v ]]
+ KOPS_BASE_URL=
++ kops-download-release v1.23.2
++ local kops
+++ mktemp -t kops.XXXXXXXXX
++ kops=/tmp/kops.tDFdq07XQ
... skipping 7 lines ...
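In the teardown above, `kops delete cluster` exits non-zero because the cluster was never created, and the harness logs "kubetest2 down failed" but continues to the `--up` phase anyway. A minimal sketch of that idempotent-teardown pattern (the `delete_cluster` function and its messages are hypothetical stand-ins, not kubetest2's actual code):

```shell
# Idempotent teardown sketch: a delete that fails only because the
# resource is already gone should not abort the run.
# delete_cluster is a hypothetical stand-in for `kops delete cluster --yes`.
delete_cluster() {
  # Simulate kops exiting non-zero when the cluster does not exist.
  echo 'Error: error reading cluster configuration: cluster not found' >&2
  return 1
}

if delete_cluster 2>/dev/null; then
  result="deleted"
else
  # "not found" means there is nothing to clean up; proceed to --up.
  result="absent-ok"
fi
echo "down step result: $result"
```

Treating "already absent" as success keeps the cleanup step safe to run unconditionally before every cluster bring-up.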
+ kubetest2 kops -v=2 --cloud-provider=aws --cluster-name=e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kops-root=/home/prow/go/src/k8s.io/kops --admin-access= --env=KOPS_FEATURE_FLAGS=SpecOverrideFlag --up --kops-binary-path=/tmp/kops.tDFdq07XQ --kubernetes-version=v1.23.1 --control-plane-size=1 --template-path=tests/e2e/templates/many-addons.yaml.tmpl '--create-args=--networking calico'
I0623 20:08:05.830239 6202 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0623 20:08:05.831738 6202 app.go:61] RunDir for this run: "/logs/artifacts/f01f2595-f32f-11ec-9e31-9224b4edca5e"
I0623 20:08:05.855656 6202 app.go:120] ID for this run: "f01f2595-f32f-11ec-9e31-9224b4edca5e"
I0623 20:08:05.855924 6202 up.go:44] Cleaning up any leaked resources from previous cluster
I0623 20:08:05.855992 6202 dumplogs.go:45] /tmp/kops.tDFdq07XQ toolbox dump --name e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-143745cea3-c83fe.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W0623 20:08:06.350295 6202 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0623 20:08:06.350347 6202 down.go:48] /tmp/kops.tDFdq07XQ delete cluster --name e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --yes
I0623 20:08:06.372728 6219 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
I0623 20:08:06.372850 6219 featureflag.go:162] FeatureFlag "AlphaAllowGCE"=true
I0623 20:08:06.372857 6219 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-143745cea3-c83fe.test-cncf-aws.k8s.io" not found
I0623 20:08:06.845721 6202 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2022/06/23 20:08:06 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0623 20:08:06.857580 6202 http.go:37] curl https://ip.jsb.workers.dev
I0623 20:08:06.977665 6202 template.go:58] /tmp/kops.tDFdq07XQ toolbox template --template tests/e2e/templates/many-addons.yaml.tmpl --output /tmp/kops-template3986730826/manifest.yaml --values /tmp/kops-template3986730826/values.yaml --name e2e-143745cea3-c83fe.test-cncf-aws.k8s.io
I0623 20:08:06.999267 6231 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
I0623 20:08:06.999445 6231 featureflag.go:162] FeatureFlag "AlphaAllowGCE"=true
I0623 20:08:06.999450 6231 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
I0623 20:08:07.145936 6202 create.go:33] /tmp/kops.tDFdq07XQ create --filename /tmp/kops-template3986730826/manifest.yaml --name e2e-143745cea3-c83fe.test-cncf-aws.k8s.io
... skipping 66 lines ...

NODE STATUS
NAME   ROLE   READY

VALIDATION ERRORS
KIND   NAME        MESSAGE
dns    apiserver   Validation Failed   The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0623 20:08:47.648793 6270 validate_cluster.go:232] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME               ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-eu-west-1a  Master  c5.large     1    1    eu-west-1a
nodes-eu-west-1a   Node    t3.medium    4    4    eu-west-1a

NODE STATUS
NAME   ROLE   READY

VALIDATION ERRORS
KIND   NAME        MESSAGE
dns    apiserver   Validation Failed   The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0623 20:08:57.684962 6270 validate_cluster.go:232] (will retry): cluster not yet healthy
Validation Failed
W0623 20:09:07.725375 6270 validate_cluster.go:232] (will retry): cluster not yet healthy
Validation Failed
W0623 20:09:17.763448 6270 validate_cluster.go:232] (will retry): cluster not yet healthy
Validation Failed
W0623 20:09:27.797115 6270 validate_cluster.go:232] (will retry): cluster not yet healthy
Validation Failed
W0623 20:09:37.845319 6270 validate_cluster.go:232] (will retry): cluster not yet healthy
Validation Failed
W0623 20:09:47.897607 6270 validate_cluster.go:232] (will retry): cluster not yet healthy
Validation Failed
W0623 20:09:57.939645 6270 validate_cluster.go:232] (will retry): cluster not yet healthy
Validation Failed
W0623 20:10:07.979852 6270 validate_cluster.go:232] (will retry): cluster not yet healthy
Validation Failed
W0623 20:10:18.031920 6270 validate_cluster.go:232] (will retry): cluster not yet healthy
Validation Failed
W0623 20:10:28.079800 6270 validate_cluster.go:232] (will retry): cluster not yet healthy
Validation Failed
W0623 20:10:38.114805 6270 validate_cluster.go:232] (will retry): cluster not yet healthy
Validation Failed
W0623 20:10:48.151608 6270 validate_cluster.go:232] (will retry): cluster not yet healthy
Validation Failed
W0623 20:10:58.204632 6270 validate_cluster.go:232] (will retry): cluster not yet healthy
Validation Failed
W0623 20:11:08.252379 6270 validate_cluster.go:232] (will retry): cluster not yet healthy
Validation Failed
W0623 20:11:18.288963 6270 validate_cluster.go:232] (will retry): cluster not yet healthy
Validation Failed
W0623 20:11:28.347245 6270 validate_cluster.go:232] (will retry): cluster not yet healthy
Validation Failed
W0623 20:11:38.389893 6270 validate_cluster.go:232] (will retry): cluster not yet healthy
Validation Failed
W0623 20:11:48.449360 6270 validate_cluster.go:232] (will retry): cluster not yet healthy
Validation Failed
W0623 20:11:58.500074 6270 validate_cluster.go:232] (will retry): cluster not yet healthy
Validation Failed
W0623 20:12:08.540118 6270 validate_cluster.go:232] (will retry): cluster not yet healthy
Validation Failed
W0623 20:12:18.587631 6270 validate_cluster.go:232] (will retry): cluster not yet healthy
Validation Failed
W0623 20:12:28.640630 6270 validate_cluster.go:232] (will retry): cluster not yet healthy
Validation Failed
W0623 20:12:38.697154 6270 validate_cluster.go:232] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME               ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-eu-west-1a  Master  c5.large     1    1    eu-west-1a
nodes-eu-west-1a   Node    t3.medium    4    4    eu-west-1a
... skipping 33 lines ...
Pod   kube-system/ebs-csi-node-tnhj4                 system-node-critical pod "ebs-csi-node-tnhj4" is pending
Pod   kube-system/metrics-server-655dc594b4-cvzgz    system-cluster-critical pod "metrics-server-655dc594b4-cvzgz" is pending
Pod   kube-system/metrics-server-655dc594b4-vzt22    system-cluster-critical pod "metrics-server-655dc594b4-vzt22" is pending
Pod   kube-system/node-local-dns-cn6l4               system-node-critical pod "node-local-dns-cn6l4" is pending
Pod   kube-system/node-local-dns-t24hs               system-node-critical pod "node-local-dns-t24hs" is pending

Validation Failed
W0623 20:12:51.752927 6270 validate_cluster.go:232] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME               ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-eu-west-1a  Master  c5.large     1    1    eu-west-1a
nodes-eu-west-1a   Node    t3.medium    4    4    eu-west-1a
... skipping 29 lines ...
Pod   kube-system/ebs-csi-node-rmt5f                 system-node-critical pod "ebs-csi-node-rmt5f" is pending
Pod   kube-system/ebs-csi-node-tcqzv                 system-node-critical pod "ebs-csi-node-tcqzv" is pending
Pod   kube-system/ebs-csi-node-tnhj4                 system-node-critical pod "ebs-csi-node-tnhj4" is pending
Pod   kube-system/metrics-server-655dc594b4-cvzgz    system-cluster-critical pod "metrics-server-655dc594b4-cvzgz" is pending
Pod   kube-system/metrics-server-655dc594b4-vzt22    system-cluster-critical pod "metrics-server-655dc594b4-vzt22" is pending

Validation Failed
W0623 20:13:03.709015 6270 validate_cluster.go:232] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME               ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-eu-west-1a  Master  c5.large     1    1    eu-west-1a
nodes-eu-west-1a   Node    t3.medium    4    4    eu-west-1a
... skipping 19 lines ...
Pod   kube-system/ebs-csi-node-rmt5f                 system-node-critical pod "ebs-csi-node-rmt5f" is pending
Pod   kube-system/ebs-csi-node-tcqzv                 system-node-critical pod "ebs-csi-node-tcqzv" is pending
Pod   kube-system/ebs-csi-node-tnhj4                 system-node-critical pod "ebs-csi-node-tnhj4" is pending
Pod   kube-system/metrics-server-655dc594b4-cvzgz    system-cluster-critical pod "metrics-server-655dc594b4-cvzgz" is pending
Pod   kube-system/metrics-server-655dc594b4-vzt22    system-cluster-critical pod "metrics-server-655dc594b4-vzt22" is pending

Validation Failed
W0623 20:13:15.628437 6270 validate_cluster.go:232] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME               ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-eu-west-1a  Master  c5.large     1    1    eu-west-1a
nodes-eu-west-1a   Node    t3.medium    4    4    eu-west-1a
... skipping 12 lines ...
Pod   kube-system/cert-manager-webhook-6d4d986bbd-dfcf4   system-cluster-critical pod "cert-manager-webhook-6d4d986bbd-dfcf4" is not ready (cert-manager)
Pod   kube-system/ebs-csi-controller-774fbb7f45-j56lc     system-cluster-critical pod "ebs-csi-controller-774fbb7f45-j56lc" is pending
Pod   kube-system/ebs-csi-node-tnhj4                      system-node-critical pod "ebs-csi-node-tnhj4" is pending
Pod   kube-system/metrics-server-655dc594b4-cvzgz         system-cluster-critical pod "metrics-server-655dc594b4-cvzgz" is not ready (metrics-server)
Pod   kube-system/metrics-server-655dc594b4-vzt22         system-cluster-critical pod "metrics-server-655dc594b4-vzt22" is not ready (metrics-server)

Validation Failed
W0623 20:13:27.578202 6270 validate_cluster.go:232] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME               ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-eu-west-1a  Master  c5.large     1    1    eu-west-1a
nodes-eu-west-1a   Node    t3.medium    4    4    eu-west-1a
... skipping 7 lines ...

VALIDATION ERRORS
KIND  NAME                                                        MESSAGE
Pod   kube-system/aws-load-balancer-controller-7fddbc8655-zgdch   system-cluster-critical pod "aws-load-balancer-controller-7fddbc8655-zgdch" is pending
Pod   kube-system/metrics-server-655dc594b4-vzt22                 system-cluster-critical pod "metrics-server-655dc594b4-vzt22" is not ready (metrics-server)

Validation Failed
W0623 20:13:39.434292 6270 validate_cluster.go:232] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME               ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-eu-west-1a  Master  c5.large     1    1    eu-west-1a
nodes-eu-west-1a   Node    t3.medium    4    4    eu-west-1a
... skipping 7 lines ...

VALIDATION ERRORS
KIND  NAME                                                                MESSAGE
Pod   kube-system/aws-load-balancer-controller-7fddbc8655-zgdch           system-cluster-critical pod "aws-load-balancer-controller-7fddbc8655-zgdch" is pending
Pod   kube-system/kube-proxy-ip-172-20-0-90.eu-west-1.compute.internal    system-node-critical pod "kube-proxy-ip-172-20-0-90.eu-west-1.compute.internal" is pending

Validation Failed
W0623 20:13:51.390580 6270 validate_cluster.go:232] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME               ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-eu-west-1a  Master  c5.large     1    1    eu-west-1a
nodes-eu-west-1a   Node    t3.medium    4    4    eu-west-1a
... skipping 6 lines ...
ip-172-20-0-90.eu-west-1.compute.internal   node   True

VALIDATION ERRORS
KIND  NAME                                                        MESSAGE
Pod   kube-system/aws-load-balancer-controller-7fddbc8655-zgdch   system-cluster-critical pod "aws-load-balancer-controller-7fddbc8655-zgdch" is pending

Validation Failed
W0623 20:14:03.381481 6270 validate_cluster.go:232] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME               ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-eu-west-1a  Master  c5.large     1    1    eu-west-1a
nodes-eu-west-1a   Node    t3.medium    4    4    eu-west-1a
... skipping 6 lines ...
ip-172-20-0-90.eu-west-1.compute.internal   node   True

VALIDATION ERRORS
KIND  NAME                                                        MESSAGE
Pod   kube-system/aws-load-balancer-controller-7fddbc8655-zgdch   system-cluster-critical pod "aws-load-balancer-controller-7fddbc8655-zgdch" is pending

Validation Failed
W0623 20:14:15.420494 6270 validate_cluster.go:232] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME               ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-eu-west-1a  Master  c5.large     1    1    eu-west-1a
nodes-eu-west-1a   Node    t3.medium    4    4    eu-west-1a
... skipping 6 lines ...
ip-172-20-0-90.eu-west-1.compute.internal   node   True

VALIDATION ERRORS
KIND  NAME                                                        MESSAGE
Pod   kube-system/aws-load-balancer-controller-7fddbc8655-zgdch   system-cluster-critical pod "aws-load-balancer-controller-7fddbc8655-zgdch" is pending

Validation Failed
W0623 20:14:27.362291 6270 validate_cluster.go:232] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME               ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-eu-west-1a  Master  c5.large     1    1    eu-west-1a
nodes-eu-west-1a   Node    t3.medium    4    4    eu-west-1a
... skipping 553 lines ...
evicting pod kube-system/hubble-relay-55846f56fb-dftds
I0623 20:18:22.772767 6385 request.go:665] Waited for 1.002900472s due to client-side throttling, not priority and fairness, request: GET:https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io/api/v1/namespaces/kube-system/pods/cilium-operator-7fb7bf5c7-6mfc2
I0623 20:18:50.705476 6385 instancegroups.go:653] Waiting for 5s for pods to stabilize after draining.
I0623 20:18:55.706072 6385 instancegroups.go:588] Stopping instance "i-0ed49beb133740f2b", node "ip-172-20-0-125.eu-west-1.compute.internal", in group "master-eu-west-1a.masters.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io" (this may take a while).
I0623 20:18:55.982541 6385 instancegroups.go:434] waiting for 15s after terminating instance
I0623 20:19:10.982747 6385 instancegroups.go:467] Validating the cluster.
I0623 20:19:11.147527 6385 instancegroups.go:513] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.203.65.234:443: connect: connection refused. I0623 20:20:11.185933 6385 instancegroups.go:513] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.203.65.234:443: i/o timeout. I0623 20:21:11.234995 6385 instancegroups.go:513] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.203.65.234:443: i/o timeout. I0623 20:22:11.274922 6385 instancegroups.go:513] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.203.65.234:443: i/o timeout. I0623 20:23:11.331212 6385 instancegroups.go:513] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.203.65.234:443: i/o timeout. I0623 20:24:11.365278 6385 instancegroups.go:513] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.203.65.234:443: i/o timeout. I0623 20:24:41.405304 6385 instancegroups.go:513] Cluster did not validate, will retry in "30s": unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host. 
I0623 20:25:14.289736 6385 instancegroups.go:523] Cluster did not pass validation, will retry in "30s": node "ip-172-20-0-146.eu-west-1.compute.internal" of role "node" is not ready, node "ip-172-20-0-90.eu-west-1.compute.internal" of role "node" is not ready, system-cluster-critical pod "aws-load-balancer-controller-7fddbc8655-ss56s" is pending, system-cluster-critical pod "aws-node-termination-handler-64f9f6d576-59nqw" is pending, system-cluster-critical pod "cert-manager-699d66b4b-g4lcs" is pending, system-cluster-critical pod "cert-manager-cainjector-6465ccdb69-vzx22" is pending, system-cluster-critical pod "cert-manager-webhook-6d4d986bbd-qvnqg" is pending, system-node-critical pod "cilium-jthbg" is not ready (cilium-agent), system-cluster-critical pod "cluster-autoscaler-58f8cb44b9-4tqfv" is pending, system-cluster-critical pod "ebs-csi-controller-774fbb7f45-jt7x5" is pending, system-node-critical pod "ebs-csi-node-m8sbx" is pending, system-cluster-critical pod "metrics-server-655dc594b4-cvzgz" is not ready (metrics-server), system-cluster-critical pod "metrics-server-655dc594b4-vzt22" is not ready (metrics-server).
I0623 20:25:46.218094 6385 instancegroups.go:523] Cluster did not pass validation, will retry in "30s": system-cluster-critical pod "aws-load-balancer-controller-7fddbc8655-ss56s" is pending, system-cluster-critical pod "cert-manager-699d66b4b-g4lcs" is pending, system-cluster-critical pod "cert-manager-cainjector-6465ccdb69-vzx22" is pending, system-cluster-critical pod "cert-manager-webhook-6d4d986bbd-qvnqg" is pending, system-cluster-critical pod "ebs-csi-controller-774fbb7f45-jt7x5" is pending, system-node-critical pod "ebs-csi-node-m8sbx" is pending.
I0623 20:26:18.144962 6385 instancegroups.go:503] Cluster validated; revalidating in 10s to make sure it does not flap.
I0623 20:26:30.128742 6385 instancegroups.go:500] Cluster validated.
I0623 20:26:30.128801 6385 instancegroups.go:467] Validating the cluster.
... skipping 35 lines ...
evicting pod kube-system/coredns-7884856795-jv7fn
evicting pod kube-system/hubble-relay-55846f56fb-l868d
WARNING: ignoring DaemonSet-managed Pods: kube-system/cilium-fvkjd, kube-system/ebs-csi-node-tcqzv, kube-system/node-local-dns-bh25t
evicting pod kube-system/metrics-server-655dc594b4-vzt22
evicting pod kube-system/coredns-7884856795-pcgt7
evicting pod kube-system/coredns-autoscaler-57dd87df6c-r6vzl
error when evicting pods/"metrics-server-655dc594b4-vzt22" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
error when evicting pods/"coredns-7884856795-pcgt7" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I0623 20:33:58.963442 6385 instancegroups.go:653] Waiting for 5s for pods to stabilize after draining.
I0623 20:34:00.710272 6385 instancegroups.go:653] Waiting for 5s for pods to stabilize after draining.
evicting pod kube-system/metrics-server-655dc594b4-vzt22
evicting pod kube-system/coredns-7884856795-pcgt7
error when evicting pods/"metrics-server-655dc594b4-vzt22" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I0623 20:34:03.964179 6385 instancegroups.go:588] Stopping instance "i-0c6a44d922f81e5f9", node "ip-172-20-0-119.eu-west-1.compute.internal", in group "nodes-eu-west-1a.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io" (this may take a while).
I0623 20:34:04.260250 6385 instancegroups.go:434] waiting for 15s after terminating instance
I0623 20:34:04.876294 6385 instancegroups.go:653] Waiting for 5s for pods to stabilize after draining.
I0623 20:34:05.710496 6385 instancegroups.go:588] Stopping instance "i-0115e7ad6238650c2", node "ip-172-20-0-180.eu-west-1.compute.internal", in group "nodes-eu-west-1a.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io" (this may take a while).
I0623 20:34:05.983102 6385 instancegroups.go:434] waiting for 15s after terminating instance
evicting pod kube-system/metrics-server-655dc594b4-vzt22
error when evicting pods/"metrics-server-655dc594b4-vzt22" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I0623 20:34:09.877054 6385 instancegroups.go:588] Stopping instance "i-0eb85780b936ae389", node "ip-172-20-0-90.eu-west-1.compute.internal", in group "nodes-eu-west-1a.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io" (this may take a while).
I0623 20:34:10.151134 6385 instancegroups.go:434] waiting for 15s after terminating instance
evicting pod kube-system/metrics-server-655dc594b4-vzt22
error when evicting pods/"metrics-server-655dc594b4-vzt22" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod kube-system/metrics-server-655dc594b4-vzt22
error when evicting pods/"metrics-server-655dc594b4-vzt22" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I0623 20:34:19.261423 6385 instancegroups.go:467] Validating the cluster.
I0623 20:34:21.274898 6385 instancegroups.go:523] Cluster did not pass validation, will retry in "30s": system-node-critical pod "cilium-6wgds" is pending, system-node-critical pod "cilium-z5z8r" is pending, system-node-critical pod "ebs-csi-node-j4gtw" is pending, system-node-critical pod "ebs-csi-node-sclbw" is pending, system-node-critical pod "kube-proxy-ip-172-20-0-119.eu-west-1.compute.internal" is not ready (kube-proxy), system-node-critical pod "kube-proxy-ip-172-20-0-180.eu-west-1.compute.internal" is not ready (kube-proxy), system-node-critical pod "kube-proxy-ip-172-20-0-90.eu-west-1.compute.internal" is not ready (kube-proxy), system-cluster-critical pod "metrics-server-655dc594b4-bvxc7" is not ready (metrics-server), system-node-critical pod "node-local-dns-76kh7" is pending, system-node-critical pod "node-local-dns-842vk" is pending.
evicting pod kube-system/metrics-server-655dc594b4-vzt22
error when evicting pods/"metrics-server-655dc594b4-vzt22" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod kube-system/metrics-server-655dc594b4-vzt22
error when evicting pods/"metrics-server-655dc594b4-vzt22" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod kube-system/metrics-server-655dc594b4-vzt22
error when evicting pods/"metrics-server-655dc594b4-vzt22" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod kube-system/metrics-server-655dc594b4-vzt22
I0623 20:34:41.833382 6385 instancegroups.go:653] Waiting for 5s for pods to stabilize after draining.
I0623 20:34:46.834817 6385 instancegroups.go:588] Stopping instance "i-0ba0c703f11b41978", node "ip-172-20-0-146.eu-west-1.compute.internal", in group "nodes-eu-west-1a.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io" (this may take a while).
I0623 20:34:47.110809 6385 instancegroups.go:434] waiting for 15s after terminating instance
I0623 20:34:53.310245 6385 instancegroups.go:523] Cluster did not pass validation, will retry in "30s": system-node-critical pod "cilium-6wgds" is pending, system-node-critical pod "cilium-cdfbh" is pending, system-node-critical pod "cilium-fs8p9" is pending, system-node-critical pod "cilium-z5z8r" is pending, system-node-critical pod "ebs-csi-node-658ql" is pending, system-node-critical pod "ebs-csi-node-j4gtw" is pending, system-node-critical pod "ebs-csi-node-sclbw" is pending, system-node-critical pod "ebs-csi-node-wncv9" is pending, system-node-critical pod "kube-proxy-ip-172-20-0-146.eu-west-1.compute.internal" is not ready (kube-proxy), system-node-critical pod "kube-proxy-ip-172-20-0-180.eu-west-1.compute.internal" is not ready (kube-proxy), system-node-critical pod "kube-proxy-ip-172-20-0-90.eu-west-1.compute.internal" is not ready (kube-proxy), system-cluster-critical pod "metrics-server-655dc594b4-nz4sk" is not ready (metrics-server), system-node-critical pod "node-local-dns-76kh7" is pending, system-node-critical pod "node-local-dns-842vk" is pending, system-node-critical pod "node-local-dns-g2dhz" is pending, system-node-critical pod "node-local-dns-ww495" is pending.
I0623 20:35:25.380552 6385 instancegroups.go:523] Cluster did not pass validation, will retry in "30s": system-node-critical pod "cilium-66t8p" is pending, system-node-critical pod "cilium-6wgds" is pending, system-node-critical pod "cilium-cdfbh" is pending, system-node-critical pod "cilium-z5z8r" is pending, system-node-critical pod "ebs-csi-node-658ql" is pending, system-node-critical pod "ebs-csi-node-j4gtw" is pending, system-node-critical pod "ebs-csi-node-plrb4" is pending, system-node-critical pod "ebs-csi-node-sclbw" is pending, system-node-critical pod "kube-proxy-ip-172-20-0-180.eu-west-1.compute.internal" is not ready (kube-proxy), system-node-critical pod "node-local-dns-76kh7" is pending, system-node-critical pod "node-local-dns-842vk" is pending, system-node-critical pod "node-local-dns-g2dhz" is pending, system-node-critical pod "node-local-dns-sjqrd" is pending.
... skipping 241 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 752 lines ...
STEP: Building a namespace api object, basename node-problem-detector
W0623 20:37:27.654695 7078 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 23 20:37:27.654: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] NodeProblemDetector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:52
Jun 23 20:37:27.868: INFO: No SSH Key for provider aws: 'error reading SSH key /root/.ssh/kube_aws_rsa: 'open /root/.ssh/kube_aws_rsa: no such file or directory''
[AfterEach] [sig-node] NodeProblemDetector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 23 20:37:27.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-problem-detector-947" for this suite.

S [SKIPPING] in Spec Setup (BeforeEach) [1.436 seconds]
[sig-node] NodeProblemDetector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should run without error [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:60

  No SSH Key for provider aws: 'error reading SSH key /root/.ssh/kube_aws_rsa: 'open /root/.ssh/kube_aws_rsa: no such file or directory''

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:53
------------------------------
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 275 lines ...
STEP: Destroying namespace "services-8401" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753

•
------------------------------
{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:37:30.163: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 84 lines ...
• [SLOW TEST:9.983 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with cross namespace pod affinity scope using scope-selectors.
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:1423
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with cross namespace pod affinity scope using scope-selectors.","total":-1,"completed":1,"skipped":35,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:37:39.743: INFO: Only supported for providers [vsphere] (not aws)
... skipping 138 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 23 20:37:42.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-7124" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":2,"skipped":48,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:37:42.363: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 48 lines ...
W0623 20:37:28.573873 7100 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 23 20:37:28.573: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:110
STEP: Creating a pod to test downward api env vars
Jun 23 20:37:28.890: INFO: Waiting up to 5m0s for pod "downward-api-4df16876-c815-4146-a63a-326470637b91" in namespace "downward-api-6512" to be "Succeeded or Failed"
Jun 23 20:37:28.995: INFO: Pod "downward-api-4df16876-c815-4146-a63a-326470637b91": Phase="Pending", Reason="", readiness=false. Elapsed: 104.75779ms
Jun 23 20:37:31.102: INFO: Pod "downward-api-4df16876-c815-4146-a63a-326470637b91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211335963s
Jun 23 20:37:33.208: INFO: Pod "downward-api-4df16876-c815-4146-a63a-326470637b91": Phase="Pending", Reason="", readiness=false. Elapsed: 4.317530973s
Jun 23 20:37:35.315: INFO: Pod "downward-api-4df16876-c815-4146-a63a-326470637b91": Phase="Pending", Reason="", readiness=false. Elapsed: 6.425039437s
Jun 23 20:37:37.422: INFO: Pod "downward-api-4df16876-c815-4146-a63a-326470637b91": Phase="Pending", Reason="", readiness=false. Elapsed: 8.531395091s
Jun 23 20:37:39.528: INFO: Pod "downward-api-4df16876-c815-4146-a63a-326470637b91": Phase="Pending", Reason="", readiness=false. Elapsed: 10.637381452s
Jun 23 20:37:41.634: INFO: Pod "downward-api-4df16876-c815-4146-a63a-326470637b91": Phase="Pending", Reason="", readiness=false. Elapsed: 12.743527293s
Jun 23 20:37:43.739: INFO: Pod "downward-api-4df16876-c815-4146-a63a-326470637b91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.848706641s
STEP: Saw pod success
Jun 23 20:37:43.739: INFO: Pod "downward-api-4df16876-c815-4146-a63a-326470637b91" satisfied condition "Succeeded or Failed"
Jun 23 20:37:43.844: INFO: Trying to get logs from node ip-172-20-0-87.eu-west-1.compute.internal pod downward-api-4df16876-c815-4146-a63a-326470637b91 container dapi-container: <nil>
STEP: delete the pod
Jun 23 20:37:44.307: INFO: Waiting for pod downward-api-4df16876-c815-4146-a63a-326470637b91 to disappear
Jun 23 20:37:44.412: INFO: Pod downward-api-4df16876-c815-4146-a63a-326470637b91 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:17.928 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:110
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":1,"skipped":16,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 42 lines ...
Jun 23 20:37:40.797: INFO: Running '/logs/artifacts/f01f2595-f32f-11ec-9e31-9224b4edca5e/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8039 explain e2e-test-crd-publish-openapi-127-crds.spec'
Jun 23 20:37:41.326: INFO: stderr: ""
Jun 23 20:37:41.326: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-127-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n   Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Jun 23 20:37:41.326: INFO: Running '/logs/artifacts/f01f2595-f32f-11ec-9e31-9224b4edca5e/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8039 explain e2e-test-crd-publish-openapi-127-crds.spec.bars'
Jun 23 20:37:41.872: INFO: stderr: ""
Jun 23 20:37:41.872: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-127-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   feeling\t<string>\n     Whether Bar is feeling great.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Jun 23 20:37:41.872: INFO: Running '/logs/artifacts/f01f2595-f32f-11ec-9e31-9224b4edca5e/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8039 explain e2e-test-crd-publish-openapi-127-crds.spec.bars2'
Jun 23 20:37:42.416: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 23 20:37:48.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8039" for this suite.
... skipping 2 lines ...
• [SLOW TEST:22.175 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:37:48.616: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 94 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jun 23 20:37:27.512: INFO: Waiting up to 5m0s for pod "downwardapi-volume-231268bc-6731-423b-951f-8f4faf7fd627" in namespace "projected-5866" to be "Succeeded or Failed"
Jun 23 20:37:27.621: INFO: Pod "downwardapi-volume-231268bc-6731-423b-951f-8f4faf7fd627": Phase="Pending", Reason="", readiness=false. Elapsed: 108.566288ms
Jun 23 20:37:29.729: INFO: Pod "downwardapi-volume-231268bc-6731-423b-951f-8f4faf7fd627": Phase="Pending", Reason="", readiness=false. Elapsed: 2.2168693s
Jun 23 20:37:31.836: INFO: Pod "downwardapi-volume-231268bc-6731-423b-951f-8f4faf7fd627": Phase="Pending", Reason="", readiness=false. Elapsed: 4.323801541s
Jun 23 20:37:33.943: INFO: Pod "downwardapi-volume-231268bc-6731-423b-951f-8f4faf7fd627": Phase="Pending", Reason="", readiness=false. Elapsed: 6.430450316s
Jun 23 20:37:36.053: INFO: Pod "downwardapi-volume-231268bc-6731-423b-951f-8f4faf7fd627": Phase="Pending", Reason="", readiness=false. Elapsed: 8.540908612s
Jun 23 20:37:38.161: INFO: Pod "downwardapi-volume-231268bc-6731-423b-951f-8f4faf7fd627": Phase="Pending", Reason="", readiness=false. Elapsed: 10.648479799s
Jun 23 20:37:40.268: INFO: Pod "downwardapi-volume-231268bc-6731-423b-951f-8f4faf7fd627": Phase="Pending", Reason="", readiness=false. Elapsed: 12.756124761s
Jun 23 20:37:42.375: INFO: Pod "downwardapi-volume-231268bc-6731-423b-951f-8f4faf7fd627": Phase="Pending", Reason="", readiness=false. Elapsed: 14.862887706s
Jun 23 20:37:44.482: INFO: Pod "downwardapi-volume-231268bc-6731-423b-951f-8f4faf7fd627": Phase="Pending", Reason="", readiness=false. Elapsed: 16.970333703s
Jun 23 20:37:46.590: INFO: Pod "downwardapi-volume-231268bc-6731-423b-951f-8f4faf7fd627": Phase="Pending", Reason="", readiness=false. Elapsed: 19.077999338s
Jun 23 20:37:48.697: INFO: Pod "downwardapi-volume-231268bc-6731-423b-951f-8f4faf7fd627": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.184431965s
STEP: Saw pod success
Jun 23 20:37:48.697: INFO: Pod "downwardapi-volume-231268bc-6731-423b-951f-8f4faf7fd627" satisfied condition "Succeeded or Failed"
Jun 23 20:37:48.803: INFO: Trying to get logs from node ip-172-20-0-42.eu-west-1.compute.internal pod downwardapi-volume-231268bc-6731-423b-951f-8f4faf7fd627 container client-container: <nil>
STEP: delete the pod
Jun 23 20:37:49.448: INFO: Waiting for pod downwardapi-volume-231268bc-6731-423b-951f-8f4faf7fd627 to disappear
Jun 23 20:37:49.554: INFO: Pod downwardapi-volume-231268bc-6731-423b-951f-8f4faf7fd627 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:23.167 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 7 lines ...
[It] should support existing single file [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
Jun 23 20:37:27.382: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 23 20:37:27.607: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-4qg7
STEP: Creating a pod to test subpath
Jun 23 20:37:27.723: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-4qg7" in namespace "provisioning-5979" to be "Succeeded or Failed"
Jun 23 20:37:27.829: INFO: Pod "pod-subpath-test-inlinevolume-4qg7": Phase="Pending", Reason="", readiness=false. Elapsed: 105.35827ms
Jun 23 20:37:29.935: INFO: Pod "pod-subpath-test-inlinevolume-4qg7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211520604s
Jun 23 20:37:32.040: INFO: Pod "pod-subpath-test-inlinevolume-4qg7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.316920693s
Jun 23 20:37:34.146: INFO: Pod "pod-subpath-test-inlinevolume-4qg7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.423065548s
Jun 23 20:37:36.252: INFO: Pod "pod-subpath-test-inlinevolume-4qg7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.528463094s
Jun 23 20:37:38.358: INFO: Pod "pod-subpath-test-inlinevolume-4qg7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.634230958s
Jun 23 20:37:40.463: INFO: Pod "pod-subpath-test-inlinevolume-4qg7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.739890366s
Jun 23 20:37:42.569: INFO: Pod "pod-subpath-test-inlinevolume-4qg7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.846027017s
Jun 23 20:37:44.676: INFO: Pod "pod-subpath-test-inlinevolume-4qg7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.952951574s
Jun 23 20:37:46.782: INFO: Pod "pod-subpath-test-inlinevolume-4qg7": Phase="Pending", Reason="", readiness=false. Elapsed: 19.058342885s
Jun 23 20:37:48.887: INFO: Pod "pod-subpath-test-inlinevolume-4qg7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.164181959s
STEP: Saw pod success
Jun 23 20:37:48.888: INFO: Pod "pod-subpath-test-inlinevolume-4qg7" satisfied condition "Succeeded or Failed"
Jun 23 20:37:48.993: INFO: Trying to get logs from node ip-172-20-0-238.eu-west-1.compute.internal pod pod-subpath-test-inlinevolume-4qg7 container test-container-subpath-inlinevolume-4qg7: <nil>
STEP: delete the pod
Jun 23 20:37:49.663: INFO: Waiting for pod pod-subpath-test-inlinevolume-4qg7 to disappear
Jun 23 20:37:49.768: INFO: Pod pod-subpath-test-inlinevolume-4qg7 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-4qg7
Jun 23 20:37:49.768: INFO: Deleting pod "pod-subpath-test-inlinevolume-4qg7" in namespace "provisioning-5979"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":1,"skipped":0,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 3 lines ...
Jun 23 20:37:27.171: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217
Jun 23 20:37:27.532: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-85b9a59c-73b8-49ce-a63e-f0c4fc6a0298" in namespace "security-context-test-2128" to be "Succeeded or Failed"
Jun 23 20:37:27.639: INFO: Pod "busybox-readonly-true-85b9a59c-73b8-49ce-a63e-f0c4fc6a0298": Phase="Pending", Reason="", readiness=false. Elapsed: 106.534477ms
Jun 23 20:37:29.746: INFO: Pod "busybox-readonly-true-85b9a59c-73b8-49ce-a63e-f0c4fc6a0298": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213214996s
Jun 23 20:37:31.853: INFO: Pod "busybox-readonly-true-85b9a59c-73b8-49ce-a63e-f0c4fc6a0298": Phase="Pending", Reason="", readiness=false. Elapsed: 4.321030949s
Jun 23 20:37:33.960: INFO: Pod "busybox-readonly-true-85b9a59c-73b8-49ce-a63e-f0c4fc6a0298": Phase="Pending", Reason="", readiness=false. Elapsed: 6.427883355s
Jun 23 20:37:36.067: INFO: Pod "busybox-readonly-true-85b9a59c-73b8-49ce-a63e-f0c4fc6a0298": Phase="Pending", Reason="", readiness=false. Elapsed: 8.534227011s
Jun 23 20:37:38.174: INFO: Pod "busybox-readonly-true-85b9a59c-73b8-49ce-a63e-f0c4fc6a0298": Phase="Pending", Reason="", readiness=false. Elapsed: 10.641216674s
Jun 23 20:37:40.281: INFO: Pod "busybox-readonly-true-85b9a59c-73b8-49ce-a63e-f0c4fc6a0298": Phase="Pending", Reason="", readiness=false. Elapsed: 12.74869066s
Jun 23 20:37:42.388: INFO: Pod "busybox-readonly-true-85b9a59c-73b8-49ce-a63e-f0c4fc6a0298": Phase="Pending", Reason="", readiness=false. Elapsed: 14.855146539s
Jun 23 20:37:44.497: INFO: Pod "busybox-readonly-true-85b9a59c-73b8-49ce-a63e-f0c4fc6a0298": Phase="Pending", Reason="", readiness=false. Elapsed: 16.964355995s
Jun 23 20:37:46.605: INFO: Pod "busybox-readonly-true-85b9a59c-73b8-49ce-a63e-f0c4fc6a0298": Phase="Pending", Reason="", readiness=false. Elapsed: 19.072235674s
Jun 23 20:37:48.712: INFO: Pod "busybox-readonly-true-85b9a59c-73b8-49ce-a63e-f0c4fc6a0298": Phase="Pending", Reason="", readiness=false. Elapsed: 21.179825058s
Jun 23 20:37:50.850: INFO: Pod "busybox-readonly-true-85b9a59c-73b8-49ce-a63e-f0c4fc6a0298": Phase="Failed", Reason="", readiness=false. Elapsed: 23.317683322s
Jun 23 20:37:50.850: INFO: Pod "busybox-readonly-true-85b9a59c-73b8-49ce-a63e-f0c4fc6a0298" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 23 20:37:50.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2128" for this suite.
... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a pod with readOnlyRootFilesystem
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171
    should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:37:51.173: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 49 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when scheduling a busybox Pod with hostAliases
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:137
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:37:51.732: INFO: Only supported for providers [vsphere] (not aws)
... skipping 162 lines ...
Jun 23 20:37:27.149: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-f8a12291-53d4-4971-a511-14bc889ba5db
STEP: Creating a pod to test consume configMaps
Jun 23 20:37:27.595: INFO: Waiting up to 5m0s for pod "pod-configmaps-ec53301e-fcda-4ed4-b272-fb523a2a138a" in namespace "configmap-5155" to be "Succeeded or Failed"
Jun 23 20:37:27.701: INFO: Pod "pod-configmaps-ec53301e-fcda-4ed4-b272-fb523a2a138a": Phase="Pending", Reason="", readiness=false. Elapsed: 106.191944ms
Jun 23 20:37:29.810: INFO: Pod "pod-configmaps-ec53301e-fcda-4ed4-b272-fb523a2a138a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215268479s
Jun 23 20:37:31.918: INFO: Pod "pod-configmaps-ec53301e-fcda-4ed4-b272-fb523a2a138a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.323136967s
Jun 23 20:37:34.024: INFO: Pod "pod-configmaps-ec53301e-fcda-4ed4-b272-fb523a2a138a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.4293613s
Jun 23 20:37:36.131: INFO: Pod "pod-configmaps-ec53301e-fcda-4ed4-b272-fb523a2a138a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.536372854s
Jun 23 20:37:38.239: INFO: Pod "pod-configmaps-ec53301e-fcda-4ed4-b272-fb523a2a138a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.644591673s
... skipping 2 lines ...
Jun 23 20:37:44.563: INFO: Pod "pod-configmaps-ec53301e-fcda-4ed4-b272-fb523a2a138a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.968431987s
Jun 23 20:37:46.670: INFO: Pod "pod-configmaps-ec53301e-fcda-4ed4-b272-fb523a2a138a": Phase="Pending", Reason="", readiness=false. Elapsed: 19.075131776s
Jun 23 20:37:48.776: INFO: Pod "pod-configmaps-ec53301e-fcda-4ed4-b272-fb523a2a138a": Phase="Pending", Reason="", readiness=false. Elapsed: 21.181283605s
Jun 23 20:37:50.884: INFO: Pod "pod-configmaps-ec53301e-fcda-4ed4-b272-fb523a2a138a": Phase="Pending", Reason="", readiness=false. Elapsed: 23.289075408s
Jun 23 20:37:52.990: INFO: Pod "pod-configmaps-ec53301e-fcda-4ed4-b272-fb523a2a138a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.395662627s
STEP: Saw pod success
Jun 23 20:37:52.991: INFO: Pod "pod-configmaps-ec53301e-fcda-4ed4-b272-fb523a2a138a" satisfied condition "Succeeded or Failed"
Jun 23 20:37:53.097: INFO: Trying to get logs from node ip-172-20-0-98.eu-west-1.compute.internal pod pod-configmaps-ec53301e-fcda-4ed4-b272-fb523a2a138a container agnhost-container: <nil>
STEP: delete the pod
Jun 23 20:37:53.320: INFO: Waiting for pod pod-configmaps-ec53301e-fcda-4ed4-b272-fb523a2a138a to disappear
Jun 23 20:37:53.425: INFO: Pod pod-configmaps-ec53301e-fcda-4ed4-b272-fb523a2a138a no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:27.040 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:37:53.747: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 146 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 23 20:37:53.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-176" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":2,"skipped":6,"failed":0}
SS
------------------------------
[BeforeEach] [sig-storage] PV Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 29 lines ...
Jun 23 20:37:55.823: INFO: AfterEach: Cleaning up test resources.
Jun 23 20:37:55.823: INFO: Deleting PersistentVolumeClaim "pvc-cz9nm"
Jun 23 20:37:55.928: INFO: Deleting PersistentVolume "hostpath-sst4d"
•
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately","total":-1,"completed":3,"skipped":8,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 5 lines ...
[It] should support non-existent path
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
Jun 23 20:37:51.722: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 23 20:37:51.841: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-72ns
STEP: Creating a pod to test subpath
Jun 23 20:37:51.955: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-72ns" in namespace "provisioning-3348" to be "Succeeded or Failed"
Jun 23 20:37:52.062: INFO: Pod "pod-subpath-test-inlinevolume-72ns": Phase="Pending", Reason="", readiness=false. Elapsed: 106.114655ms
Jun 23 20:37:54.168: INFO: Pod "pod-subpath-test-inlinevolume-72ns": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212945644s
Jun 23 20:37:56.294: INFO: Pod "pod-subpath-test-inlinevolume-72ns": Phase="Pending", Reason="", readiness=false. Elapsed: 4.338338859s
Jun 23 20:37:58.401: INFO: Pod "pod-subpath-test-inlinevolume-72ns": Phase="Pending", Reason="", readiness=false. Elapsed: 6.445378431s
Jun 23 20:38:00.509: INFO: Pod "pod-subpath-test-inlinevolume-72ns": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.553399465s
STEP: Saw pod success
Jun 23 20:38:00.509: INFO: Pod "pod-subpath-test-inlinevolume-72ns" satisfied condition "Succeeded or Failed"
Jun 23 20:38:00.616: INFO: Trying to get logs from node ip-172-20-0-238.eu-west-1.compute.internal pod pod-subpath-test-inlinevolume-72ns container test-container-volume-inlinevolume-72ns: <nil>
STEP: delete the pod
Jun 23 20:38:00.845: INFO: Waiting for pod pod-subpath-test-inlinevolume-72ns to disappear
Jun 23 20:38:00.952: INFO: Pod pod-subpath-test-inlinevolume-72ns no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-72ns
Jun 23 20:38:00.952: INFO: Deleting pod "pod-subpath-test-inlinevolume-72ns" in namespace "provisioning-3348"
... skipping 12 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
  [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support non-existent path
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":2,"skipped":23,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:38:01.395: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 36 lines ...
Jun 23 20:38:00.622: INFO: Creating a PV followed by a PVC
Jun 23 20:38:00.836: INFO: Waiting for PV local-pvtq8w7 to bind to PVC pvc-57mqt
Jun 23 20:38:00.836: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-57mqt] to have phase Bound
Jun 23 20:38:00.945: INFO: PersistentVolumeClaim pvc-57mqt found and phase=Bound (108.275149ms)
Jun 23 20:38:00.945: INFO: Waiting up to 3m0s for PersistentVolume local-pvtq8w7 to have phase Bound
Jun 23 20:38:01.051: INFO: PersistentVolume local-pvtq8w7 found and phase=Bound (106.232042ms)
[It] should fail scheduling due to different NodeSelector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379
STEP: local-volume-type: dir
Jun 23 20:38:01.374: INFO: Waiting up to 5m0s for pod "pod-653e8456-08e7-4d4a-b82c-42aba4f87283" in namespace "persistent-local-volumes-test-2018" to be "Unschedulable"
Jun 23 20:38:01.480: INFO: Pod "pod-653e8456-08e7-4d4a-b82c-42aba4f87283": Phase="Pending", Reason="", readiness=false. Elapsed: 106.218433ms
Jun 23 20:38:01.480: INFO: Pod "pod-653e8456-08e7-4d4a-b82c-42aba4f87283" satisfied condition "Unschedulable"
[AfterEach] Pod with node different from PV's NodeAffinity
... skipping 14 lines ...
• [SLOW TEST:10.720 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347
    should fail scheduling due to different NodeSelector
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector","total":-1,"completed":2,"skipped":11,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:38:04.479: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 101 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
  One pod requesting one prebound PVC
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
    should be able to mount volume and read from pod1
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":1,"skipped":1,"failed":0}
SS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 13 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 23 20:38:09.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9990" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json,application/vnd.kubernetes.protobuf\"","total":-1,"completed":1,"skipped":9,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 23 20:37:27.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
W0623 20:37:29.301567    7137 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
... skipping 72 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
  One pod requesting one prebound PVC
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
    should be able to mount volume and write from pod1
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":2,"skipped":9,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:38:09.557: INFO: Only supported for providers [gce gke] (not aws)
... skipping 117 lines ...
• [SLOW TEST:21.112 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":2,"skipped":19,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:38:09.759: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 155 lines ...
• [SLOW TEST:43.942 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}
SSSSS
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":2,"skipped":3,"failed":0}
[BeforeEach] [sig-network] EndpointSliceMirroring
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 23 20:38:09.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename endpointslicemirroring
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 23 20:38:10.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslicemirroring-3428" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":3,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:38:10.791: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 143 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
  One pod requesting one prebound PVC
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
    should be able to mount volume and write from pod1
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":1,"skipped":17,"failed":0}
S
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 7 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 23 20:38:14.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-977" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":-1,"completed":2,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:38:14.369: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 52 lines ...
Jun 23 20:37:42.919: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(aws) supported size:{ 1Gi}
STEP: creating a StorageClass volume-expand-2001hxpmf
STEP: creating a claim
Jun 23 20:37:43.058: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Expanding non-expandable pvc
Jun 23 20:37:43.275: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Jun 23 20:37:43.509: INFO: Error updating pvc awszjjcx: PersistentVolumeClaim "awszjjcx" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-2001hxpmf",
  	...
// 3 identical fields } Jun 23 20:37:45.725: INFO: Error updating pvc awszjjcx: PersistentVolumeClaim "awszjjcx" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-2001hxpmf", ... // 3 identical fields } Jun 23 20:37:47.726: INFO: Error updating pvc awszjjcx: PersistentVolumeClaim "awszjjcx" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-2001hxpmf", ... // 3 identical fields } Jun 23 20:37:49.727: INFO: Error updating pvc awszjjcx: PersistentVolumeClaim "awszjjcx" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-2001hxpmf", ... // 3 identical fields } Jun 23 20:37:51.725: INFO: Error updating pvc awszjjcx: PersistentVolumeClaim "awszjjcx" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-2001hxpmf", ... 
// 3 identical fields } Jun 23 20:37:53.727: INFO: Error updating pvc awszjjcx: PersistentVolumeClaim "awszjjcx" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-2001hxpmf", ... // 3 identical fields } Jun 23 20:37:55.725: INFO: Error updating pvc awszjjcx: PersistentVolumeClaim "awszjjcx" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-2001hxpmf", ... // 3 identical fields } Jun 23 20:37:57.733: INFO: Error updating pvc awszjjcx: PersistentVolumeClaim "awszjjcx" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-2001hxpmf", ... // 3 identical fields } Jun 23 20:37:59.724: INFO: Error updating pvc awszjjcx: PersistentVolumeClaim "awszjjcx" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-2001hxpmf", ... 
// 3 identical fields } Jun 23 20:38:01.726: INFO: Error updating pvc awszjjcx: PersistentVolumeClaim "awszjjcx" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-2001hxpmf", ... // 3 identical fields } Jun 23 20:38:03.725: INFO: Error updating pvc awszjjcx: PersistentVolumeClaim "awszjjcx" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-2001hxpmf", ... // 3 identical fields } Jun 23 20:38:05.733: INFO: Error updating pvc awszjjcx: PersistentVolumeClaim "awszjjcx" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-2001hxpmf", ... // 3 identical fields } Jun 23 20:38:07.724: INFO: Error updating pvc awszjjcx: PersistentVolumeClaim "awszjjcx" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-2001hxpmf", ... 
// 3 identical fields } Jun 23 20:38:09.724: INFO: Error updating pvc awszjjcx: PersistentVolumeClaim "awszjjcx" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-2001hxpmf", ... // 3 identical fields } Jun 23 20:38:11.726: INFO: Error updating pvc awszjjcx: PersistentVolumeClaim "awszjjcx" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-2001hxpmf", ... // 3 identical fields } Jun 23 20:38:13.728: INFO: Error updating pvc awszjjcx: PersistentVolumeClaim "awszjjcx" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-2001hxpmf", ... // 3 identical fields } Jun 23 20:38:13.943: INFO: Error updating pvc awszjjcx: PersistentVolumeClaim "awszjjcx" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 24 lines ... 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
  [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should not allow expansion of pvcs without AllowVolumeExpansion property
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":3,"skipped":62,"failed":0}
SS
------------------------------
[BeforeEach] [sig-api-machinery] Generated clientset
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:46.932 seconds]
[sig-api-machinery] Generated clientset
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/generated_clientset.go:103
------------------------------
{"msg":"PASSED [sig-api-machinery] Generated clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod","total":-1,"completed":1,"skipped":7,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:38:14.693: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 57 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 23 20:38:15.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6993" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should reject quota with invalid scopes","total":-1,"completed":4,"skipped":64,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
... skipping 36 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58 [Testpattern: Dynamic PV (immediate binding)] topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50 should provision a volume and schedule a pod with AllowedTopologies /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:164 ------------------------------ {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies","total":-1,"completed":1,"skipped":22,"failed":0} SSS ------------------------------ [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 23 20:38:17.584: INFO: Driver csi-hostpath doesn't support ext4 -- skipping ... skipping 39 lines ... • [SLOW TEST:8.584 seconds] [sig-api-machinery] ServerSideApply /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should work for CRDs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:569 ------------------------------ {"msg":"PASSED [sig-api-machinery] ServerSideApply should work for CRDs","total":-1,"completed":3,"skipped":24,"failed":0} SSSSSS ------------------------------ [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 23 20:38:18.171: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 150 lines ... 
Jun 23 20:38:00.773: INFO: PersistentVolumeClaim pvc-gth9p found but phase is Pending instead of Bound. Jun 23 20:38:02.883: INFO: PersistentVolumeClaim pvc-gth9p found and phase=Bound (6.43722449s) Jun 23 20:38:02.883: INFO: Waiting up to 3m0s for PersistentVolume local-hvbcs to have phase Bound Jun 23 20:38:02.992: INFO: PersistentVolume local-hvbcs found and phase=Bound (108.269191ms) STEP: Creating pod pod-subpath-test-preprovisionedpv-9b8v STEP: Creating a pod to test subpath Jun 23 20:38:03.321: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-9b8v" in namespace "provisioning-7215" to be "Succeeded or Failed" Jun 23 20:38:03.429: INFO: Pod "pod-subpath-test-preprovisionedpv-9b8v": Phase="Pending", Reason="", readiness=false. Elapsed: 107.853568ms Jun 23 20:38:05.538: INFO: Pod "pod-subpath-test-preprovisionedpv-9b8v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217222972s Jun 23 20:38:07.646: INFO: Pod "pod-subpath-test-preprovisionedpv-9b8v": Phase="Pending", Reason="", readiness=false. Elapsed: 4.325688176s Jun 23 20:38:09.754: INFO: Pod "pod-subpath-test-preprovisionedpv-9b8v": Phase="Pending", Reason="", readiness=false. Elapsed: 6.433715739s Jun 23 20:38:11.864: INFO: Pod "pod-subpath-test-preprovisionedpv-9b8v": Phase="Pending", Reason="", readiness=false. Elapsed: 8.543293322s Jun 23 20:38:13.974: INFO: Pod "pod-subpath-test-preprovisionedpv-9b8v": Phase="Pending", Reason="", readiness=false. Elapsed: 10.653013833s Jun 23 20:38:16.083: INFO: Pod "pod-subpath-test-preprovisionedpv-9b8v": Phase="Running", Reason="", readiness=true. Elapsed: 12.762516397s Jun 23 20:38:18.192: INFO: Pod "pod-subpath-test-preprovisionedpv-9b8v": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.870914728s STEP: Saw pod success Jun 23 20:38:18.192: INFO: Pod "pod-subpath-test-preprovisionedpv-9b8v" satisfied condition "Succeeded or Failed" Jun 23 20:38:18.303: INFO: Trying to get logs from node ip-172-20-0-42.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-9b8v container test-container-subpath-preprovisionedpv-9b8v: <nil> STEP: delete the pod Jun 23 20:38:18.536: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-9b8v to disappear Jun 23 20:38:18.643: INFO: Pod pod-subpath-test-preprovisionedpv-9b8v no longer exists STEP: Deleting pod pod-subpath-test-preprovisionedpv-9b8v Jun 23 20:38:18.643: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-9b8v" in namespace "provisioning-7215" ... skipping 21 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58 [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50 should support readOnly directory specified in the volumeMount /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365 ------------------------------ {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":1,"skipped":8,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 23 20:37:30.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service 
account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-map-1019317b-fccf-4f84-9091-894a24c869ef STEP: Creating a pod to test consume secrets Jun 23 20:37:30.937: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-79d24cf3-fe89-4594-8fd8-5b2f274ac7a8" in namespace "projected-4182" to be "Succeeded or Failed" Jun 23 20:37:31.056: INFO: Pod "pod-projected-secrets-79d24cf3-fe89-4594-8fd8-5b2f274ac7a8": Phase="Pending", Reason="", readiness=false. Elapsed: 118.824543ms Jun 23 20:37:33.163: INFO: Pod "pod-projected-secrets-79d24cf3-fe89-4594-8fd8-5b2f274ac7a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.225670797s Jun 23 20:37:35.270: INFO: Pod "pod-projected-secrets-79d24cf3-fe89-4594-8fd8-5b2f274ac7a8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.332661187s Jun 23 20:37:37.382: INFO: Pod "pod-projected-secrets-79d24cf3-fe89-4594-8fd8-5b2f274ac7a8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.444845032s Jun 23 20:37:39.490: INFO: Pod "pod-projected-secrets-79d24cf3-fe89-4594-8fd8-5b2f274ac7a8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.552679123s Jun 23 20:37:41.597: INFO: Pod "pod-projected-secrets-79d24cf3-fe89-4594-8fd8-5b2f274ac7a8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.65996179s ... skipping 13 lines ... Jun 23 20:38:11.124: INFO: Pod "pod-projected-secrets-79d24cf3-fe89-4594-8fd8-5b2f274ac7a8": Phase="Pending", Reason="", readiness=false. Elapsed: 40.186719392s Jun 23 20:38:13.234: INFO: Pod "pod-projected-secrets-79d24cf3-fe89-4594-8fd8-5b2f274ac7a8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 42.296677007s Jun 23 20:38:15.341: INFO: Pod "pod-projected-secrets-79d24cf3-fe89-4594-8fd8-5b2f274ac7a8": Phase="Pending", Reason="", readiness=false. Elapsed: 44.403766855s Jun 23 20:38:17.531: INFO: Pod "pod-projected-secrets-79d24cf3-fe89-4594-8fd8-5b2f274ac7a8": Phase="Pending", Reason="", readiness=false. Elapsed: 46.594109054s Jun 23 20:38:19.639: INFO: Pod "pod-projected-secrets-79d24cf3-fe89-4594-8fd8-5b2f274ac7a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 48.701603493s STEP: Saw pod success Jun 23 20:38:19.639: INFO: Pod "pod-projected-secrets-79d24cf3-fe89-4594-8fd8-5b2f274ac7a8" satisfied condition "Succeeded or Failed" Jun 23 20:38:19.750: INFO: Trying to get logs from node ip-172-20-0-87.eu-west-1.compute.internal pod pod-projected-secrets-79d24cf3-fe89-4594-8fd8-5b2f274ac7a8 container projected-secret-volume-test: <nil> STEP: delete the pod Jun 23 20:38:19.983: INFO: Waiting for pod pod-projected-secrets-79d24cf3-fe89-4594-8fd8-5b2f274ac7a8 to disappear Jun 23 20:38:20.095: INFO: Pod pod-projected-secrets-79d24cf3-fe89-4594-8fd8-5b2f274ac7a8 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... 
• [SLOW TEST:50.131 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":12,"failed":0} S ------------------------------ [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client ... skipping 86 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 23 20:38:20.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5133" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource ","total":-1,"completed":2,"skipped":30,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 23 20:38:21.112: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 160 lines ... 
• [SLOW TEST:56.171 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":1,"skipped":14,"failed":0} SSS ------------------------------ [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 23 20:38:22.974: INFO: Only supported for providers [azure] (not aws) ... skipping 57 lines ... STEP: Destroying namespace "services-4893" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 • ------------------------------ {"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":2,"skipped":23,"failed":0} SSSSS ------------------------------ [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 23 20:38:23.854: INFO: Driver hostPath doesn't support DynamicPV -- skipping ... skipping 52 lines ... Jun 23 20:38:01.321: INFO: PersistentVolumeClaim pvc-58tj7 found but phase is Pending instead of Bound. 
Jun 23 20:38:03.430: INFO: PersistentVolumeClaim pvc-58tj7 found and phase=Bound (12.805332415s) Jun 23 20:38:03.430: INFO: Waiting up to 3m0s for PersistentVolume local-zg4xt to have phase Bound Jun 23 20:38:03.537: INFO: PersistentVolume local-zg4xt found and phase=Bound (107.808941ms) STEP: Creating pod pod-subpath-test-preprovisionedpv-qtt8 STEP: Creating a pod to test subpath Jun 23 20:38:03.870: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-qtt8" in namespace "provisioning-7732" to be "Succeeded or Failed" Jun 23 20:38:03.981: INFO: Pod "pod-subpath-test-preprovisionedpv-qtt8": Phase="Pending", Reason="", readiness=false. Elapsed: 110.422992ms Jun 23 20:38:06.091: INFO: Pod "pod-subpath-test-preprovisionedpv-qtt8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.22024098s Jun 23 20:38:08.200: INFO: Pod "pod-subpath-test-preprovisionedpv-qtt8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329251385s Jun 23 20:38:10.308: INFO: Pod "pod-subpath-test-preprovisionedpv-qtt8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.437970617s Jun 23 20:38:12.418: INFO: Pod "pod-subpath-test-preprovisionedpv-qtt8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.547278882s Jun 23 20:38:14.587: INFO: Pod "pod-subpath-test-preprovisionedpv-qtt8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.716947485s Jun 23 20:38:16.695: INFO: Pod "pod-subpath-test-preprovisionedpv-qtt8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.82491757s Jun 23 20:38:18.808: INFO: Pod "pod-subpath-test-preprovisionedpv-qtt8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.937185018s Jun 23 20:38:20.916: INFO: Pod "pod-subpath-test-preprovisionedpv-qtt8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 17.045626981s STEP: Saw pod success Jun 23 20:38:20.916: INFO: Pod "pod-subpath-test-preprovisionedpv-qtt8" satisfied condition "Succeeded or Failed" Jun 23 20:38:21.024: INFO: Trying to get logs from node ip-172-20-0-42.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-qtt8 container test-container-volume-preprovisionedpv-qtt8: <nil> STEP: delete the pod Jun 23 20:38:21.265: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-qtt8 to disappear Jun 23 20:38:21.374: INFO: Pod pod-subpath-test-preprovisionedpv-qtt8 no longer exists STEP: Deleting pod pod-subpath-test-preprovisionedpv-qtt8 Jun 23 20:38:21.374: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-qtt8" in namespace "provisioning-7732" ... skipping 30 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58 [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50 should support non-existent path /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194 ------------------------------ {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":1,"skipped":5,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client ... skipping 22 lines ... 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 23 20:38:26.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-315" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":-1,"completed":3,"skipped":30,"failed":0} SSS ------------------------------ [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 23 20:38:26.436: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 45 lines ... STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 23 20:38:05.140: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-5115b7ac-6fac-422d-82b3-db6e6cb34d0f" in namespace "security-context-test-4272" to be "Succeeded or Failed" Jun 23 20:38:05.246: INFO: Pod "busybox-readonly-false-5115b7ac-6fac-422d-82b3-db6e6cb34d0f": Phase="Pending", Reason="", readiness=false. Elapsed: 106.074819ms Jun 23 20:38:07.353: INFO: Pod "busybox-readonly-false-5115b7ac-6fac-422d-82b3-db6e6cb34d0f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.213449262s Jun 23 20:38:09.459: INFO: Pod "busybox-readonly-false-5115b7ac-6fac-422d-82b3-db6e6cb34d0f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.319592647s Jun 23 20:38:11.580: INFO: Pod "busybox-readonly-false-5115b7ac-6fac-422d-82b3-db6e6cb34d0f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.440424452s Jun 23 20:38:13.690: INFO: Pod "busybox-readonly-false-5115b7ac-6fac-422d-82b3-db6e6cb34d0f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.550122043s Jun 23 20:38:15.816: INFO: Pod "busybox-readonly-false-5115b7ac-6fac-422d-82b3-db6e6cb34d0f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.675997263s Jun 23 20:38:17.929: INFO: Pod "busybox-readonly-false-5115b7ac-6fac-422d-82b3-db6e6cb34d0f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.789613762s Jun 23 20:38:20.039: INFO: Pod "busybox-readonly-false-5115b7ac-6fac-422d-82b3-db6e6cb34d0f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.899599389s Jun 23 20:38:22.145: INFO: Pod "busybox-readonly-false-5115b7ac-6fac-422d-82b3-db6e6cb34d0f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.005732019s Jun 23 20:38:24.252: INFO: Pod "busybox-readonly-false-5115b7ac-6fac-422d-82b3-db6e6cb34d0f": Phase="Pending", Reason="", readiness=false. Elapsed: 19.111969873s Jun 23 20:38:26.358: INFO: Pod "busybox-readonly-false-5115b7ac-6fac-422d-82b3-db6e6cb34d0f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.218289261s Jun 23 20:38:26.358: INFO: Pod "busybox-readonly-false-5115b7ac-6fac-422d-82b3-db6e6cb34d0f" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 23 20:38:26.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4272" for this suite. ... skipping 2 lines ... 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a pod with readOnlyRootFilesystem /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":16,"failed":0} SS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client ... skipping 14 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 23 20:38:27.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3265" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":4,"skipped":38,"failed":0} [BeforeEach] [Testpattern: Inline-volume (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 23 20:38:27.936: INFO: Only supported for providers [gce gke] (not aws) [AfterEach] [Testpattern: Inline-volume (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 22 lines ... STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 23 20:38:20.955: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df66b356-8461-4bcc-a29c-6744213b4336" in namespace "projected-772" to be "Succeeded or Failed" Jun 23 20:38:21.061: INFO: Pod "downwardapi-volume-df66b356-8461-4bcc-a29c-6744213b4336": Phase="Pending", Reason="", readiness=false. Elapsed: 106.086594ms Jun 23 20:38:23.168: INFO: Pod "downwardapi-volume-df66b356-8461-4bcc-a29c-6744213b4336": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213045684s Jun 23 20:38:25.276: INFO: Pod "downwardapi-volume-df66b356-8461-4bcc-a29c-6744213b4336": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.321003631s Jun 23 20:38:27.383: INFO: Pod "downwardapi-volume-df66b356-8461-4bcc-a29c-6744213b4336": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.42871452s STEP: Saw pod success Jun 23 20:38:27.383: INFO: Pod "downwardapi-volume-df66b356-8461-4bcc-a29c-6744213b4336" satisfied condition "Succeeded or Failed" Jun 23 20:38:27.490: INFO: Trying to get logs from node ip-172-20-0-238.eu-west-1.compute.internal pod downwardapi-volume-df66b356-8461-4bcc-a29c-6744213b4336 container client-container: <nil> STEP: delete the pod Jun 23 20:38:27.710: INFO: Waiting for pod downwardapi-volume-df66b356-8461-4bcc-a29c-6744213b4336 to disappear Jun 23 20:38:27.816: INFO: Pod downwardapi-volume-df66b356-8461-4bcc-a29c-6744213b4336 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... • [SLOW TEST:7.718 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":13,"failed":0} [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 23 20:38:28.033: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 20 lines ... Jun 23 20:38:21.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Jun 23 20:38:21.754: INFO: Waiting up to 5m0s for pod "security-context-fa973a24-ddfd-477e-a2bb-9e5fb3a8c202" in namespace "security-context-8837" to be "Succeeded or Failed" Jun 23 20:38:21.859: INFO: Pod "security-context-fa973a24-ddfd-477e-a2bb-9e5fb3a8c202": Phase="Pending", Reason="", readiness=false. Elapsed: 105.653449ms Jun 23 20:38:23.969: INFO: Pod "security-context-fa973a24-ddfd-477e-a2bb-9e5fb3a8c202": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215195251s Jun 23 20:38:26.076: INFO: Pod "security-context-fa973a24-ddfd-477e-a2bb-9e5fb3a8c202": Phase="Pending", Reason="", readiness=false. Elapsed: 4.322188792s Jun 23 20:38:28.183: INFO: Pod "security-context-fa973a24-ddfd-477e-a2bb-9e5fb3a8c202": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.429019141s STEP: Saw pod success Jun 23 20:38:28.183: INFO: Pod "security-context-fa973a24-ddfd-477e-a2bb-9e5fb3a8c202" satisfied condition "Succeeded or Failed" Jun 23 20:38:28.289: INFO: Trying to get logs from node ip-172-20-0-238.eu-west-1.compute.internal pod security-context-fa973a24-ddfd-477e-a2bb-9e5fb3a8c202 container test-container: <nil> STEP: delete the pod Jun 23 20:38:28.512: INFO: Waiting for pod security-context-fa973a24-ddfd-477e-a2bb-9e5fb3a8c202 to disappear Jun 23 20:38:28.620: INFO: Pod security-context-fa973a24-ddfd-477e-a2bb-9e5fb3a8c202 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... • [SLOW TEST:7.720 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":31,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client ... skipping 25 lines ... 
• [SLOW TEST:18.394 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":4,"skipped":5,"failed":0} SS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client ... skipping 18 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 23 20:38:29.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3190" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":4,"skipped":19,"failed":0} [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 23 20:38:29.726: INFO: Only supported for providers [vsphere] (not aws) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 58 lines ... Jun 23 20:38:02.363: INFO: PersistentVolumeClaim pvc-xq7hl found but phase is Pending instead of Bound. 
Jun 23 20:38:04.511: INFO: PersistentVolumeClaim pvc-xq7hl found and phase=Bound (2.263712126s) Jun 23 20:38:04.511: INFO: Waiting up to 3m0s for PersistentVolume local-qbq2c to have phase Bound Jun 23 20:38:04.631: INFO: PersistentVolume local-qbq2c found and phase=Bound (119.874518ms) STEP: Creating pod pod-subpath-test-preprovisionedpv-s9sl STEP: Creating a pod to test subpath Jun 23 20:38:04.952: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-s9sl" in namespace "provisioning-8714" to be "Succeeded or Failed" Jun 23 20:38:05.059: INFO: Pod "pod-subpath-test-preprovisionedpv-s9sl": Phase="Pending", Reason="", readiness=false. Elapsed: 106.920744ms Jun 23 20:38:07.167: INFO: Pod "pod-subpath-test-preprovisionedpv-s9sl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214570998s Jun 23 20:38:09.273: INFO: Pod "pod-subpath-test-preprovisionedpv-s9sl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.321202063s Jun 23 20:38:11.381: INFO: Pod "pod-subpath-test-preprovisionedpv-s9sl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.428632709s Jun 23 20:38:13.492: INFO: Pod "pod-subpath-test-preprovisionedpv-s9sl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.539582604s Jun 23 20:38:15.603: INFO: Pod "pod-subpath-test-preprovisionedpv-s9sl": Phase="Pending", Reason="", readiness=false. Elapsed: 10.650653825s ... skipping 2 lines ... Jun 23 20:38:21.935: INFO: Pod "pod-subpath-test-preprovisionedpv-s9sl": Phase="Pending", Reason="", readiness=false. Elapsed: 16.983308791s Jun 23 20:38:24.043: INFO: Pod "pod-subpath-test-preprovisionedpv-s9sl": Phase="Pending", Reason="", readiness=false. Elapsed: 19.090553783s Jun 23 20:38:26.151: INFO: Pod "pod-subpath-test-preprovisionedpv-s9sl": Phase="Pending", Reason="", readiness=false. Elapsed: 21.1991816s Jun 23 20:38:28.259: INFO: Pod "pod-subpath-test-preprovisionedpv-s9sl": Phase="Pending", Reason="", readiness=false. 
Elapsed: 23.306688793s Jun 23 20:38:30.366: INFO: Pod "pod-subpath-test-preprovisionedpv-s9sl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.413933166s [1mSTEP[0m: Saw pod success Jun 23 20:38:30.366: INFO: Pod "pod-subpath-test-preprovisionedpv-s9sl" satisfied condition "Succeeded or Failed" Jun 23 20:38:30.473: INFO: Trying to get logs from node ip-172-20-0-87.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-s9sl container test-container-subpath-preprovisionedpv-s9sl: <nil> [1mSTEP[0m: delete the pod Jun 23 20:38:30.704: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-s9sl to disappear Jun 23 20:38:30.810: INFO: Pod pod-subpath-test-preprovisionedpv-s9sl no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-s9sl Jun 23 20:38:30.810: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-s9sl" in namespace "provisioning-8714" ... skipping 21 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support readOnly directory specified in the volumeMount [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":2,"skipped":1,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 23 20:38:32.449: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) [AfterEach] 
[Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 65 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41[0m on terminated container [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134[0m should report termination message if TerminationMessagePath is set [Excluded:WindowsDocker] [NodeConformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set [Excluded:WindowsDocker] [NodeConformance]","total":-1,"completed":2,"skipped":14,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath ... skipping 57 lines ... 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
  [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should be able to unmount after the subpath directory is deleted [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":1,"skipped":6,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:38:33.561: INFO: Only supported for providers [azure] (not aws)
... skipping 157 lines ...
Jun 23 20:38:02.067: INFO: PersistentVolumeClaim pvc-bl4hm found but phase is Pending instead of Bound.
Jun 23 20:38:04.172: INFO: PersistentVolumeClaim pvc-bl4hm found and phase=Bound (14.869019541s)
Jun 23 20:38:04.172: INFO: Waiting up to 3m0s for PersistentVolume local-c4tkj to have phase Bound
Jun 23 20:38:04.276: INFO: PersistentVolume local-c4tkj found and phase=Bound (104.177131ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-b2tc
STEP: Creating a pod to test subpath
Jun 23 20:38:04.625: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-b2tc" in namespace "provisioning-5204" to be "Succeeded or Failed"
Jun 23 20:38:04.729: INFO: Pod "pod-subpath-test-preprovisionedpv-b2tc": Phase="Pending", Reason="", readiness=false. Elapsed: 104.147047ms
Jun 23 20:38:06.834: INFO: Pod "pod-subpath-test-preprovisionedpv-b2tc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209697122s
Jun 23 20:38:08.940: INFO: Pod "pod-subpath-test-preprovisionedpv-b2tc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.314806851s
Jun 23 20:38:11.048: INFO: Pod "pod-subpath-test-preprovisionedpv-b2tc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.422980313s
Jun 23 20:38:13.158: INFO: Pod "pod-subpath-test-preprovisionedpv-b2tc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.533070587s
Jun 23 20:38:15.264: INFO: Pod "pod-subpath-test-preprovisionedpv-b2tc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.638864775s
Jun 23 20:38:17.390: INFO: Pod "pod-subpath-test-preprovisionedpv-b2tc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.765205952s
Jun 23 20:38:19.497: INFO: Pod "pod-subpath-test-preprovisionedpv-b2tc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.871798511s
Jun 23 20:38:21.601: INFO: Pod "pod-subpath-test-preprovisionedpv-b2tc": Phase="Pending", Reason="", readiness=false. Elapsed: 16.976285497s
Jun 23 20:38:23.706: INFO: Pod "pod-subpath-test-preprovisionedpv-b2tc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.081062724s
STEP: Saw pod success
Jun 23 20:38:23.706: INFO: Pod "pod-subpath-test-preprovisionedpv-b2tc" satisfied condition "Succeeded or Failed"
Jun 23 20:38:23.810: INFO: Trying to get logs from node ip-172-20-0-42.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-b2tc container test-container-subpath-preprovisionedpv-b2tc: <nil>
STEP: delete the pod
Jun 23 20:38:24.030: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-b2tc to disappear
Jun 23 20:38:24.139: INFO: Pod pod-subpath-test-preprovisionedpv-b2tc no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-b2tc
Jun 23 20:38:24.139: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-b2tc" in namespace "provisioning-5204"
STEP: Creating pod pod-subpath-test-preprovisionedpv-b2tc
STEP: Creating a pod to test subpath
Jun 23 20:38:24.350: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-b2tc" in namespace "provisioning-5204" to be "Succeeded or Failed"
Jun 23 20:38:24.455: INFO: Pod "pod-subpath-test-preprovisionedpv-b2tc": Phase="Pending", Reason="", readiness=false. Elapsed: 104.123644ms
Jun 23 20:38:26.561: INFO: Pod "pod-subpath-test-preprovisionedpv-b2tc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210192289s
Jun 23 20:38:28.669: INFO: Pod "pod-subpath-test-preprovisionedpv-b2tc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.318561609s
Jun 23 20:38:30.776: INFO: Pod "pod-subpath-test-preprovisionedpv-b2tc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.425146286s
Jun 23 20:38:32.880: INFO: Pod "pod-subpath-test-preprovisionedpv-b2tc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.529929437s
STEP: Saw pod success
Jun 23 20:38:32.880: INFO: Pod "pod-subpath-test-preprovisionedpv-b2tc" satisfied condition "Succeeded or Failed"
Jun 23 20:38:32.987: INFO: Trying to get logs from node ip-172-20-0-42.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-b2tc container test-container-subpath-preprovisionedpv-b2tc: <nil>
STEP: delete the pod
Jun 23 20:38:33.223: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-b2tc to disappear
Jun 23 20:38:33.334: INFO: Pod pod-subpath-test-preprovisionedpv-b2tc no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-b2tc
Jun 23 20:38:33.334: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-b2tc" in namespace "provisioning-5204"
... skipping 21 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
  [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support existing directories when readOnly specified in the volumeSource
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":1,"skipped":2,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:38:34.862: INFO: Driver local doesn't support ext4 -- skipping
... skipping 46 lines ...
Jun 23 20:38:18.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jun 23 20:38:19.376: INFO: Waiting up to 5m0s for pod "pod-904b3b5a-5746-4ba5-ab63-a55e6512a39c" in namespace "emptydir-5020" to be "Succeeded or Failed"
Jun 23 20:38:19.481: INFO: Pod "pod-904b3b5a-5746-4ba5-ab63-a55e6512a39c": Phase="Pending", Reason="", readiness=false. Elapsed: 105.042204ms
Jun 23 20:38:21.587: INFO: Pod "pod-904b3b5a-5746-4ba5-ab63-a55e6512a39c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211298196s
Jun 23 20:38:23.693: INFO: Pod "pod-904b3b5a-5746-4ba5-ab63-a55e6512a39c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.3166959s
Jun 23 20:38:25.800: INFO: Pod "pod-904b3b5a-5746-4ba5-ab63-a55e6512a39c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.423498347s
Jun 23 20:38:27.905: INFO: Pod "pod-904b3b5a-5746-4ba5-ab63-a55e6512a39c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.528481771s
Jun 23 20:38:30.012: INFO: Pod "pod-904b3b5a-5746-4ba5-ab63-a55e6512a39c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.636390474s
Jun 23 20:38:32.121: INFO: Pod "pod-904b3b5a-5746-4ba5-ab63-a55e6512a39c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.745402859s
Jun 23 20:38:34.236: INFO: Pod "pod-904b3b5a-5746-4ba5-ab63-a55e6512a39c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.859707417s
Jun 23 20:38:36.341: INFO: Pod "pod-904b3b5a-5746-4ba5-ab63-a55e6512a39c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.965235486s
STEP: Saw pod success
Jun 23 20:38:36.341: INFO: Pod "pod-904b3b5a-5746-4ba5-ab63-a55e6512a39c" satisfied condition "Succeeded or Failed"
Jun 23 20:38:36.446: INFO: Trying to get logs from node ip-172-20-0-87.eu-west-1.compute.internal pod pod-904b3b5a-5746-4ba5-ab63-a55e6512a39c container test-container: <nil>
STEP: delete the pod
Jun 23 20:38:36.665: INFO: Waiting for pod pod-904b3b5a-5746-4ba5-ab63-a55e6512a39c to disappear
Jun 23 20:38:36.769: INFO: Pod pod-904b3b5a-5746-4ba5-ab63-a55e6512a39c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:18.264 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:38:36.984: INFO: Driver csi-hostpath doesn't support ext3 -- skipping
... skipping 62 lines ...
• [SLOW TEST:45.331 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should remove pods when job is deleted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:185
------------------------------
{"msg":"PASSED [sig-apps] Job should remove pods when job is deleted","total":-1,"completed":2,"skipped":34,"failed":0}
SS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:49
[It] files with FSGroup ownership should support (root,0644,tmpfs)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:66
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jun 23 20:38:29.510: INFO: Waiting up to 5m0s for pod "pod-690af534-2ef2-4920-95d9-240098b7b5d3" in namespace "emptydir-3342" to be "Succeeded or Failed"
Jun 23 20:38:29.620: INFO: Pod "pod-690af534-2ef2-4920-95d9-240098b7b5d3": Phase="Pending", Reason="", readiness=false. Elapsed: 109.863045ms
Jun 23 20:38:31.728: INFO: Pod "pod-690af534-2ef2-4920-95d9-240098b7b5d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218104589s
Jun 23 20:38:33.834: INFO: Pod "pod-690af534-2ef2-4920-95d9-240098b7b5d3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.324310351s
Jun 23 20:38:35.940: INFO: Pod "pod-690af534-2ef2-4920-95d9-240098b7b5d3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.430263972s
Jun 23 20:38:38.047: INFO: Pod "pod-690af534-2ef2-4920-95d9-240098b7b5d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.537251226s
STEP: Saw pod success
Jun 23 20:38:38.047: INFO: Pod "pod-690af534-2ef2-4920-95d9-240098b7b5d3" satisfied condition "Succeeded or Failed"
Jun 23 20:38:38.152: INFO: Trying to get logs from node ip-172-20-0-238.eu-west-1.compute.internal pod pod-690af534-2ef2-4920-95d9-240098b7b5d3 container test-container: <nil>
STEP: delete the pod
Jun 23 20:38:38.371: INFO: Waiting for pod pod-690af534-2ef2-4920-95d9-240098b7b5d3 to disappear
Jun 23 20:38:38.476: INFO: Pod pod-690af534-2ef2-4920-95d9-240098b7b5d3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:47
    files with FSGroup ownership should support (root,0644,tmpfs)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:66
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)","total":-1,"completed":4,"skipped":37,"failed":0}
S
------------------------------
[BeforeEach] [sig-network] Netpol API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 23 20:38:39.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "netpol-9753" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] Netpol API should support creating NetworkPolicy API operations","total":-1,"completed":2,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:38:39.807: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 18 lines ...
Jun 23 20:38:18.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on node default medium
Jun 23 20:38:18.835: INFO: Waiting up to 5m0s for pod "pod-e3624b20-bd7a-407a-9f09-5eb8b7d83b28" in namespace "emptydir-4158" to be "Succeeded or Failed"
Jun 23 20:38:18.943: INFO: Pod "pod-e3624b20-bd7a-407a-9f09-5eb8b7d83b28": Phase="Pending", Reason="", readiness=false. Elapsed: 107.929307ms
Jun 23 20:38:21.052: INFO: Pod "pod-e3624b20-bd7a-407a-9f09-5eb8b7d83b28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217924127s
Jun 23 20:38:23.159: INFO: Pod "pod-e3624b20-bd7a-407a-9f09-5eb8b7d83b28": Phase="Pending", Reason="", readiness=false. Elapsed: 4.324515302s
Jun 23 20:38:25.267: INFO: Pod "pod-e3624b20-bd7a-407a-9f09-5eb8b7d83b28": Phase="Pending", Reason="", readiness=false. Elapsed: 6.431996584s
Jun 23 20:38:27.373: INFO: Pod "pod-e3624b20-bd7a-407a-9f09-5eb8b7d83b28": Phase="Pending", Reason="", readiness=false. Elapsed: 8.538528509s
Jun 23 20:38:29.481: INFO: Pod "pod-e3624b20-bd7a-407a-9f09-5eb8b7d83b28": Phase="Pending", Reason="", readiness=false. Elapsed: 10.6464319s
Jun 23 20:38:31.603: INFO: Pod "pod-e3624b20-bd7a-407a-9f09-5eb8b7d83b28": Phase="Pending", Reason="", readiness=false. Elapsed: 12.768210433s
Jun 23 20:38:33.710: INFO: Pod "pod-e3624b20-bd7a-407a-9f09-5eb8b7d83b28": Phase="Pending", Reason="", readiness=false. Elapsed: 14.875288961s
Jun 23 20:38:35.817: INFO: Pod "pod-e3624b20-bd7a-407a-9f09-5eb8b7d83b28": Phase="Pending", Reason="", readiness=false. Elapsed: 16.982453266s
Jun 23 20:38:37.924: INFO: Pod "pod-e3624b20-bd7a-407a-9f09-5eb8b7d83b28": Phase="Pending", Reason="", readiness=false. Elapsed: 19.089519777s
Jun 23 20:38:40.032: INFO: Pod "pod-e3624b20-bd7a-407a-9f09-5eb8b7d83b28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.197252678s
STEP: Saw pod success
Jun 23 20:38:40.032: INFO: Pod "pod-e3624b20-bd7a-407a-9f09-5eb8b7d83b28" satisfied condition "Succeeded or Failed"
Jun 23 20:38:40.138: INFO: Trying to get logs from node ip-172-20-0-98.eu-west-1.compute.internal pod pod-e3624b20-bd7a-407a-9f09-5eb8b7d83b28 container test-container: <nil>
STEP: delete the pod
Jun 23 20:38:40.363: INFO: Waiting for pod pod-e3624b20-bd7a-407a-9f09-5eb8b7d83b28 to disappear
Jun 23 20:38:40.469: INFO: Pod pod-e3624b20-bd7a-407a-9f09-5eb8b7d83b28 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 34 lines ...
• [SLOW TEST:74.026 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted startup probe fails
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:321
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted startup probe fails","total":-1,"completed":1,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:38:40.738: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 146 lines ...
• [SLOW TEST:25.622 seconds]
[sig-api-machinery] Servers with support for API chunking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should return chunks of results for list calls
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/chunking.go:77
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls","total":-1,"completed":5,"skipped":66,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 140 lines ...
• [SLOW TEST:45.795 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should serve multiport endpoints from pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":-1,"completed":4,"skipped":9,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:38:41.840: INFO: Only supported for providers [openstack] (not aws)
... skipping 54 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":5,"skipped":7,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:38:41.849: INFO: Only supported for providers [gce gke] (not aws)
... skipping 90 lines ...
Jun 23 20:38:32.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount projected service account token [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test service account token:
Jun 23 20:38:33.135: INFO: Waiting up to 5m0s for pod "test-pod-cfcca115-05ab-4afa-b498-31a1b344fb55" in namespace "svcaccounts-3321" to be "Succeeded or Failed"
Jun 23 20:38:33.242: INFO: Pod "test-pod-cfcca115-05ab-4afa-b498-31a1b344fb55": Phase="Pending", Reason="", readiness=false. Elapsed: 106.941531ms
Jun 23 20:38:35.350: INFO: Pod "test-pod-cfcca115-05ab-4afa-b498-31a1b344fb55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215731299s
Jun 23 20:38:37.462: INFO: Pod "test-pod-cfcca115-05ab-4afa-b498-31a1b344fb55": Phase="Pending", Reason="", readiness=false. Elapsed: 4.326995115s
Jun 23 20:38:39.569: INFO: Pod "test-pod-cfcca115-05ab-4afa-b498-31a1b344fb55": Phase="Pending", Reason="", readiness=false. Elapsed: 6.434246976s
Jun 23 20:38:41.693: INFO: Pod "test-pod-cfcca115-05ab-4afa-b498-31a1b344fb55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.558122921s
STEP: Saw pod success
Jun 23 20:38:41.693: INFO: Pod "test-pod-cfcca115-05ab-4afa-b498-31a1b344fb55" satisfied condition "Succeeded or Failed"
Jun 23 20:38:41.800: INFO: Trying to get logs from node ip-172-20-0-42.eu-west-1.compute.internal pod test-pod-cfcca115-05ab-4afa-b498-31a1b344fb55 container agnhost-container: <nil>
STEP: delete the pod
Jun 23 20:38:42.044: INFO: Waiting for pod test-pod-cfcca115-05ab-4afa-b498-31a1b344fb55 to disappear
Jun 23 20:38:42.150: INFO: Pod test-pod-cfcca115-05ab-4afa-b498-31a1b344fb55 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.950 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount projected service account token [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":3,"skipped":8,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 69 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
    should be able to mount volume and write from pod1
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":2,"skipped":13,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:38:42.440: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 84 lines ...
• [SLOW TEST:76.732 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":18,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
... skipping 23 lines ...
Jun 23 20:38:31.291: INFO: PersistentVolumeClaim pvc-swv2c found but phase is Pending instead of Bound.
Jun 23 20:38:33.401: INFO: PersistentVolumeClaim pvc-swv2c found and phase=Bound (4.429366477s)
Jun 23 20:38:33.401: INFO: Waiting up to 3m0s for PersistentVolume local-4ww2j to have phase Bound
Jun 23 20:38:33.509: INFO: PersistentVolume local-4ww2j found and phase=Bound (107.44187ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-cbzv
STEP: Creating a pod to test exec-volume-test
Jun 23 20:38:33.834: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-cbzv" in namespace "volume-1803" to be "Succeeded or Failed"
Jun 23 20:38:33.942: INFO: Pod "exec-volume-test-preprovisionedpv-cbzv": Phase="Pending", Reason="", readiness=false. Elapsed: 108.078314ms
Jun 23 20:38:36.051: INFO: Pod "exec-volume-test-preprovisionedpv-cbzv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216364074s
Jun 23 20:38:38.159: INFO: Pod "exec-volume-test-preprovisionedpv-cbzv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.324546249s
Jun 23 20:38:40.268: INFO: Pod "exec-volume-test-preprovisionedpv-cbzv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.433668551s
Jun 23 20:38:42.379: INFO: Pod "exec-volume-test-preprovisionedpv-cbzv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.545143444s
STEP: Saw pod success
Jun 23 20:38:42.379: INFO: Pod "exec-volume-test-preprovisionedpv-cbzv" satisfied condition "Succeeded or Failed"
Jun 23 20:38:42.487: INFO: Trying to get logs from node ip-172-20-0-42.eu-west-1.compute.internal pod exec-volume-test-preprovisionedpv-cbzv container exec-container-preprovisionedpv-cbzv: <nil>
STEP: delete the pod
Jun 23 20:38:42.747: INFO: Waiting for pod exec-volume-test-preprovisionedpv-cbzv to disappear
Jun 23 20:38:42.855: INFO: Pod exec-volume-test-preprovisionedpv-cbzv no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-cbzv
Jun 23 20:38:42.855: INFO: Deleting pod "exec-volume-test-preprovisionedpv-cbzv" in namespace "volume-1803"
... skipping 28 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
  [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should allow exec of files on the volume
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":2,"skipped":10,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 23 20:38:37.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Jun 23 20:38:37.884: INFO: Waiting up to 5m0s for pod "downward-api-6b72a981-4d92-46a7-b819-861c3b60d275" in namespace "downward-api-653" to be "Succeeded or Failed"
Jun 23 20:38:37.989: INFO: Pod "downward-api-6b72a981-4d92-46a7-b819-861c3b60d275": Phase="Pending", Reason="", readiness=false. Elapsed: 104.718237ms
Jun 23 20:38:40.096: INFO: Pod "downward-api-6b72a981-4d92-46a7-b819-861c3b60d275": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211857405s
Jun 23 20:38:42.204: INFO: Pod "downward-api-6b72a981-4d92-46a7-b819-861c3b60d275": Phase="Pending", Reason="", readiness=false. Elapsed: 4.318974341s
Jun 23 20:38:44.312: INFO: Pod "downward-api-6b72a981-4d92-46a7-b819-861c3b60d275": Phase="Pending", Reason="", readiness=false. Elapsed: 6.427660181s
Jun 23 20:38:46.423: INFO: Pod "downward-api-6b72a981-4d92-46a7-b819-861c3b60d275": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.538004064s
STEP: Saw pod success
Jun 23 20:38:46.423: INFO: Pod "downward-api-6b72a981-4d92-46a7-b819-861c3b60d275" satisfied condition "Succeeded or Failed"
Jun 23 20:38:46.531: INFO: Trying to get logs from node ip-172-20-0-238.eu-west-1.compute.internal pod downward-api-6b72a981-4d92-46a7-b819-861c3b60d275 container dapi-container: <nil>
STEP: delete the pod
Jun 23 20:38:46.764: INFO: Waiting for pod downward-api-6b72a981-4d92-46a7-b819-861c3b60d275 to disappear
Jun 23 20:38:46.870: INFO: Pod downward-api-6b72a981-4d92-46a7-b819-861c3b60d275 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.982 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
should provide pod UID as env vars [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":36,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:38:47.100: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 105 lines ...
• [SLOW TEST:13.678 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
evictions: maxUnavailable allow single eviction, percentage => should allow an eviction
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:286
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: maxUnavailable allow single eviction, percentage =\u003e should allow an eviction","total":-1,"completed":3,"skipped":16,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:13.838 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
evictions: enough pods, absolute => should allow an eviction
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:286
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, absolute =\u003e should allow an eviction","total":-1,"completed":2,"skipped":8,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-node] Pods Extended
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 3 lines ...
[BeforeEach] Pod Container lifecycle
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:480
[It] should not create extra sandbox if all containers are done
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:484
STEP: creating the pod that should always exit 0
STEP: submitting the pod to kubernetes
Jun 23 20:38:34.294: INFO: Waiting up to 5m0s for pod "pod-always-succeed4fd130b9-94c5-43fc-950a-962f951bc519" in namespace "pods-392" to be "Succeeded or Failed"
Jun 23 20:38:34.401: INFO: Pod "pod-always-succeed4fd130b9-94c5-43fc-950a-962f951bc519": Phase="Pending", Reason="", readiness=false. Elapsed: 106.51157ms
Jun 23 20:38:36.509: INFO: Pod "pod-always-succeed4fd130b9-94c5-43fc-950a-962f951bc519": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215058544s
Jun 23 20:38:38.622: INFO: Pod "pod-always-succeed4fd130b9-94c5-43fc-950a-962f951bc519": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327763478s
Jun 23 20:38:40.729: INFO: Pod "pod-always-succeed4fd130b9-94c5-43fc-950a-962f951bc519": Phase="Pending", Reason="", readiness=false. Elapsed: 6.434908236s
Jun 23 20:38:42.836: INFO: Pod "pod-always-succeed4fd130b9-94c5-43fc-950a-962f951bc519": Phase="Pending", Reason="", readiness=false. Elapsed: 8.541904599s
Jun 23 20:38:44.948: INFO: Pod "pod-always-succeed4fd130b9-94c5-43fc-950a-962f951bc519": Phase="Pending", Reason="", readiness=false. Elapsed: 10.653277848s
Jun 23 20:38:47.059: INFO: Pod "pod-always-succeed4fd130b9-94c5-43fc-950a-962f951bc519": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.764338843s
STEP: Saw pod success
Jun 23 20:38:47.059: INFO: Pod "pod-always-succeed4fd130b9-94c5-43fc-950a-962f951bc519" satisfied condition "Succeeded or Failed"
STEP: Getting events about the pod
STEP: Checking events about the pod
STEP: deleting the pod
[AfterEach] [sig-node] Pods Extended
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 23 20:39:49.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 5 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
Pod Container lifecycle
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:478
should not create extra sandbox if all containers are done
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:484
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done","total":-1,"completed":2,"skipped":38,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:38:49.495: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 88 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
Two pods mounting a local volume one after the other
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
should be able to write from pod1 and read from pod2
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":3,"skipped":23,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 15 lines ...
• [SLOW TEST:7.577 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
binary data should be reflected in volume [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":13,"failed":0}
S
------------------------------
[BeforeEach] [sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 25 lines ...
• [SLOW TEST:12.251 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should update/patch PodDisruptionBudget status [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":5,"skipped":30,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:38:54.129: INFO: Only supported for providers [gce gke] (not aws)
... skipping 130 lines ...
• [SLOW TEST:12.438 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should serve a basic image on each replica with a public image [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":1,"skipped":16,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:38:54.237: INFO: Only supported for providers [azure] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Driver: azure-disk]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Dynamic PV (delayed binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Only supported for providers [azure] (not aws)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1567
------------------------------
... skipping 34 lines ...
• [SLOW TEST:15.173 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to convert from CR v1 to CR v2 [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":3,"skipped":15,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
... skipping 101 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
(Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:214
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents","total":-1,"completed":1,"skipped":4,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] version v1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 86 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 23 20:39:00.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-34" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource ","total":-1,"completed":2,"skipped":14,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:39:00.737: INFO: Only supported for providers [gce gke] (not aws)
... skipping 52 lines ...
Jun 23 20:38:45.310: INFO: The status of Pod server-envvars-4bceb976-ba08-4086-b84b-40f4d477944e is Pending, waiting for it to be Running (with Ready = true)
Jun 23 20:38:47.311: INFO: The status of Pod server-envvars-4bceb976-ba08-4086-b84b-40f4d477944e is Pending, waiting for it to be Running (with Ready = true)
Jun 23 20:38:49.310: INFO: The status of Pod server-envvars-4bceb976-ba08-4086-b84b-40f4d477944e is Pending, waiting for it to be Running (with Ready = true)
Jun 23 20:38:51.310: INFO: The status of Pod server-envvars-4bceb976-ba08-4086-b84b-40f4d477944e is Pending, waiting for it to be Running (with Ready = true)
Jun 23 20:38:53.344: INFO: The status of Pod server-envvars-4bceb976-ba08-4086-b84b-40f4d477944e is Pending, waiting for it to be Running (with Ready = true)
Jun 23 20:38:55.311: INFO: The status of Pod server-envvars-4bceb976-ba08-4086-b84b-40f4d477944e is Running (Ready = true)
Jun 23 20:38:55.763: INFO: Waiting up to 5m0s for pod "client-envvars-f930fa8b-ff2e-4873-ba0d-95ee295a0b14" in namespace "pods-563" to be "Succeeded or Failed"
Jun 23 20:38:55.872: INFO: Pod "client-envvars-f930fa8b-ff2e-4873-ba0d-95ee295a0b14": Phase="Pending", Reason="", readiness=false. Elapsed: 108.897057ms
Jun 23 20:38:57.980: INFO: Pod "client-envvars-f930fa8b-ff2e-4873-ba0d-95ee295a0b14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216492095s
Jun 23 20:39:00.088: INFO: Pod "client-envvars-f930fa8b-ff2e-4873-ba0d-95ee295a0b14": Phase="Pending", Reason="", readiness=false. Elapsed: 4.324154294s
Jun 23 20:39:02.198: INFO: Pod "client-envvars-f930fa8b-ff2e-4873-ba0d-95ee295a0b14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.434158534s
STEP: Saw pod success
Jun 23 20:39:02.198: INFO: Pod "client-envvars-f930fa8b-ff2e-4873-ba0d-95ee295a0b14" satisfied condition "Succeeded or Failed"
Jun 23 20:39:02.310: INFO: Trying to get logs from node ip-172-20-0-238.eu-west-1.compute.internal pod client-envvars-f930fa8b-ff2e-4873-ba0d-95ee295a0b14 container env3cont: <nil>
STEP: delete the pod
Jun 23 20:39:02.580: INFO: Waiting for pod client-envvars-f930fa8b-ff2e-4873-ba0d-95ee295a0b14 to disappear
Jun 23 20:39:02.689: INFO: Pod client-envvars-f930fa8b-ff2e-4873-ba0d-95ee295a0b14 no longer exists
[AfterEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:20.485 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
should contain environment variables for services [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":16,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:39:02.929: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 48 lines ...
• [SLOW TEST:8.800 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching orphans and release non-matching pods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":4,"skipped":19,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:39:03.803: INFO: Driver "local" does not provide raw block - skipping
... skipping 61 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 23 20:39:03.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7840" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC","total":-1,"completed":3,"skipped":22,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:39:03.978: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 24 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-fa5c42e4-545c-4fb8-b34c-213e621308f6
STEP: Creating a pod to test consume configMaps
Jun 23 20:38:54.980: INFO: Waiting up to 5m0s for pod "pod-configmaps-b6d32ac7-0e84-4a1b-af17-42544b40f22f" in namespace "configmap-5491" to be "Succeeded or Failed"
Jun 23 20:38:55.086: INFO: Pod "pod-configmaps-b6d32ac7-0e84-4a1b-af17-42544b40f22f": Phase="Pending", Reason="", readiness=false. Elapsed: 105.705525ms
Jun 23 20:38:57.192: INFO: Pod "pod-configmaps-b6d32ac7-0e84-4a1b-af17-42544b40f22f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211483833s
Jun 23 20:38:59.300: INFO: Pod "pod-configmaps-b6d32ac7-0e84-4a1b-af17-42544b40f22f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.319302764s
Jun 23 20:39:01.405: INFO: Pod "pod-configmaps-b6d32ac7-0e84-4a1b-af17-42544b40f22f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.424705188s
Jun 23 20:39:03.510: INFO: Pod "pod-configmaps-b6d32ac7-0e84-4a1b-af17-42544b40f22f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.530204332s
STEP: Saw pod success
Jun 23 20:39:03.510: INFO: Pod "pod-configmaps-b6d32ac7-0e84-4a1b-af17-42544b40f22f" satisfied condition "Succeeded or Failed"
Jun 23 20:39:03.622: INFO: Trying to get logs from node ip-172-20-0-238.eu-west-1.compute.internal pod pod-configmaps-b6d32ac7-0e84-4a1b-af17-42544b40f22f container agnhost-container: <nil>
STEP: delete the pod
Jun 23 20:39:03.869: INFO: Waiting for pod pod-configmaps-b6d32ac7-0e84-4a1b-af17-42544b40f22f to disappear
Jun 23 20:39:03.981: INFO: Pod pod-configmaps-b6d32ac7-0e84-4a1b-af17-42544b40f22f no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.032 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":47,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:39:04.197: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 124 lines ...
Jun 23 20:38:46.047: INFO: PersistentVolumeClaim pvc-8hc29 found but phase is Pending instead of Bound.
Jun 23 20:38:48.182: INFO: PersistentVolumeClaim pvc-8hc29 found and phase=Bound (12.781232724s)
Jun 23 20:38:48.182: INFO: Waiting up to 3m0s for PersistentVolume local-b9445 to have phase Bound
Jun 23 20:38:48.299: INFO: PersistentVolume local-b9445 found and phase=Bound (116.666206ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-pn75
STEP: Creating a pod to test subpath
Jun 23 20:38:48.666: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-pn75" in namespace "provisioning-3043" to be "Succeeded or Failed"
Jun 23 20:38:48.774: INFO: Pod "pod-subpath-test-preprovisionedpv-pn75": Phase="Pending", Reason="", readiness=false. Elapsed: 107.526663ms
Jun 23 20:38:50.881: INFO: Pod "pod-subpath-test-preprovisionedpv-pn75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214402721s
Jun 23 20:38:52.988: INFO: Pod "pod-subpath-test-preprovisionedpv-pn75": Phase="Pending", Reason="", readiness=false. Elapsed: 4.321679026s
Jun 23 20:38:55.095: INFO: Pod "pod-subpath-test-preprovisionedpv-pn75": Phase="Pending", Reason="", readiness=false. Elapsed: 6.428397628s
Jun 23 20:38:57.202: INFO: Pod "pod-subpath-test-preprovisionedpv-pn75": Phase="Pending", Reason="", readiness=false. Elapsed: 8.535289772s
Jun 23 20:38:59.309: INFO: Pod "pod-subpath-test-preprovisionedpv-pn75": Phase="Pending", Reason="", readiness=false. Elapsed: 10.642706249s
Jun 23 20:39:01.416: INFO: Pod "pod-subpath-test-preprovisionedpv-pn75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.749638334s
STEP: Saw pod success
Jun 23 20:39:01.416: INFO: Pod "pod-subpath-test-preprovisionedpv-pn75" satisfied condition "Succeeded or Failed"
Jun 23 20:39:01.524: INFO: Trying to get logs from node ip-172-20-0-42.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-pn75 container test-container-subpath-preprovisionedpv-pn75: <nil>
STEP: delete the pod
Jun 23 20:39:01.792: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-pn75 to disappear
Jun 23 20:39:01.898: INFO: Pod pod-subpath-test-preprovisionedpv-pn75 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-pn75
Jun 23 20:39:01.898: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-pn75" in namespace "provisioning-3043"
... skipping 30 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should support readOnly file specified in the volumeMount [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":4,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:39:04.972: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 231 lines ...
• [SLOW TEST:23.361 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to deny attaching pod [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":6,"skipped":20,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:39:05.232: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 35 lines ...
• [SLOW TEST:51.431 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
should be ready immediately after startupProbe succeeds
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:408
------------------------------
{"msg":"PASSED [sig-node] Probing container should be ready immediately after startupProbe succeeds","total":-1,"completed":2,"skipped":24,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:39:06.152: INFO: Only supported for providers [azure] (not aws)
... skipping 94 lines ...
• [SLOW TEST:18.343 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should verify ResourceQuota with best effort scope. [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":3,"skipped":53,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:39:07.866: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 41 lines ...
Jun 23 20:38:46.499: INFO: PersistentVolumeClaim pvc-5jxf2 found but phase is Pending instead of Bound.
Jun 23 20:38:48.653: INFO: PersistentVolumeClaim pvc-5jxf2 found and phase=Bound (10.691601424s)
Jun 23 20:38:48.653: INFO: Waiting up to 3m0s for PersistentVolume local-vwssh to have phase Bound
Jun 23 20:38:48.759: INFO: PersistentVolume local-vwssh found and phase=Bound (106.027738ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-6dtb
STEP: Creating a pod to test subpath
Jun 23 20:38:49.080: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-6dtb" in namespace "provisioning-6434" to be "Succeeded or Failed"
Jun 23 20:38:49.186: INFO: Pod "pod-subpath-test-preprovisionedpv-6dtb": Phase="Pending", Reason="", readiness=false. Elapsed: 106.030762ms
Jun 23 20:38:51.294: INFO: Pod "pod-subpath-test-preprovisionedpv-6dtb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213408944s
Jun 23 20:38:53.402: INFO: Pod "pod-subpath-test-preprovisionedpv-6dtb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.321287588s
Jun 23 20:38:55.539: INFO: Pod "pod-subpath-test-preprovisionedpv-6dtb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.459007267s
Jun 23 20:38:57.646: INFO: Pod "pod-subpath-test-preprovisionedpv-6dtb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.565872145s
Jun 23 20:38:59.754: INFO: Pod "pod-subpath-test-preprovisionedpv-6dtb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.673291259s
Jun 23 20:39:01.863: INFO: Pod "pod-subpath-test-preprovisionedpv-6dtb": Phase="Pending", Reason="", readiness=false. Elapsed: 12.783066998s
Jun 23 20:39:03.979: INFO: Pod "pod-subpath-test-preprovisionedpv-6dtb": Phase="Pending", Reason="", readiness=false. Elapsed: 14.898645194s
Jun 23 20:39:06.086: INFO: Pod "pod-subpath-test-preprovisionedpv-6dtb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.005892696s
STEP: Saw pod success
Jun 23 20:39:06.086: INFO: Pod "pod-subpath-test-preprovisionedpv-6dtb" satisfied condition "Succeeded or Failed"
Jun 23 20:39:06.207: INFO: Trying to get logs from node ip-172-20-0-98.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-6dtb container test-container-volume-preprovisionedpv-6dtb: <nil>
STEP: delete the pod
Jun 23 20:39:06.443: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-6dtb to disappear
Jun 23 20:39:06.552: INFO: Pod pod-subpath-test-preprovisionedpv-6dtb no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-6dtb
Jun 23 20:39:06.552: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-6dtb" in namespace "provisioning-6434"
... skipping 21 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should support existing directory
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":5,"skipped":40,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
... skipping 405 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
version v1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74
should proxy through a service and a pod [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":-1,"completed":3,"skipped":24,"failed":0}
S
------------------------------
[BeforeEach] [sig-node] NodeLease
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 23 20:39:08.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-958" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] NodeLease NodeLease should have OwnerReferences set","total":-1,"completed":6,"skipped":42,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:39:09.291: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 23 lines ...
Jun 23 20:39:03.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jun 23 20:39:04.627: INFO: Waiting up to 5m0s for pod "pod-f3112d3f-3cf7-43e3-b913-410f7141584f" in namespace "emptydir-3152" to be "Succeeded or Failed"
Jun 23 20:39:04.734: INFO: Pod "pod-f3112d3f-3cf7-43e3-b913-410f7141584f": Phase="Pending", Reason="", readiness=false. Elapsed: 107.140295ms
Jun 23 20:39:06.853: INFO: Pod "pod-f3112d3f-3cf7-43e3-b913-410f7141584f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.22645676s
Jun 23 20:39:09.022: INFO: Pod "pod-f3112d3f-3cf7-43e3-b913-410f7141584f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.394906729s
STEP: Saw pod success
Jun 23 20:39:09.022: INFO: Pod "pod-f3112d3f-3cf7-43e3-b913-410f7141584f" satisfied condition "Succeeded or Failed"
Jun 23 20:39:09.163: INFO: Trying to get logs from node ip-172-20-0-87.eu-west-1.compute.internal pod pod-f3112d3f-3cf7-43e3-b913-410f7141584f container test-container: <nil>
STEP: delete the pod
Jun 23 20:39:09.430: INFO: Waiting for pod pod-f3112d3f-3cf7-43e3-b913-410f7141584f to disappear
Jun 23 20:39:09.564: INFO: Pod pod-f3112d3f-3cf7-43e3-b913-410f7141584f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.807 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":27,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:39:09.794: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: dir-link-bindmounted]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Dynamic PV (delayed binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Driver local doesn't support DynamicPV -- skipping
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 75 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run with an explicit non-root user ID [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
Jun 23 20:39:05.311: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-2244" to be "Succeeded or Failed"
Jun 23 20:39:05.415: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 104.076407ms
Jun 23 20:39:07.520: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20923702s
Jun 23 20:39:09.627: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.316569033s
Jun 23 20:39:09.627: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 23 20:39:09.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2244" for this suite.
... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
When creating a container with runAsNonRoot
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
should run with an explicit non-root user ID [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":5,"skipped":31,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:39:09.990: INFO: Only supported for providers [openstack] (not aws)
... skipping 29 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 23 20:39:10.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6541" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":4,"skipped":62,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 120 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 23 20:39:13.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6426" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":65,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:39:14.092: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 65 lines ...
Jun 23 20:39:09.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override command
Jun 23 20:39:09.971: INFO: Waiting up to 5m0s for pod "client-containers-5103cbb2-7f5c-4069-b651-d01dd946075c" in namespace "containers-1246" to be "Succeeded or Failed"
Jun 23 20:39:10.077: INFO: Pod "client-containers-5103cbb2-7f5c-4069-b651-d01dd946075c": Phase="Pending", Reason="", readiness=false. Elapsed: 106.692253ms
Jun 23 20:39:12.189: INFO: Pod "client-containers-5103cbb2-7f5c-4069-b651-d01dd946075c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217990602s
Jun 23 20:39:14.298: INFO: Pod "client-containers-5103cbb2-7f5c-4069-b651-d01dd946075c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.327129941s
STEP: Saw pod success
Jun 23 20:39:14.298: INFO: Pod "client-containers-5103cbb2-7f5c-4069-b651-d01dd946075c" satisfied condition "Succeeded or Failed"
Jun 23 20:39:14.410: INFO: Trying to get logs from node ip-172-20-0-42.eu-west-1.compute.internal pod client-containers-5103cbb2-7f5c-4069-b651-d01dd946075c container agnhost-container: <nil>
STEP: delete the pod
Jun 23 20:39:14.643: INFO: Waiting for pod client-containers-5103cbb2-7f5c-4069-b651-d01dd946075c to disappear
Jun 23 20:39:14.750: INFO: Pod client-containers-5103cbb2-7f5c-4069-b651-d01dd946075c no longer exists
[AfterEach] [sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.672 seconds]
[sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":45,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:39:14.971: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 44 lines ...
• [SLOW TEST:38.071 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a custom resource.
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:582
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.","total":-1,"completed":2,"skipped":32,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:39:18.884: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 74 lines ...
• [SLOW TEST:5.954 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
RecreateDeployment should delete old pods and create new ones [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":8,"skipped":48,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:39:20.941: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 60 lines ...
Driver local doesn't support GenericEphemeralVolume -- skipping
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":37,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 23 20:38:40.690: INFO: >>> kubeConfig: /root/.kube/config
... skipping 24 lines ...
Jun 23 20:39:02.221: INFO: PersistentVolumeClaim pvc-wjbbx found but phase is Pending instead of Bound.
Jun 23 20:39:04.329: INFO: PersistentVolumeClaim pvc-wjbbx found and phase=Bound (12.762769155s)
Jun 23 20:39:04.329: INFO: Waiting up to 3m0s for PersistentVolume local-286sq to have phase Bound
Jun 23 20:39:04.435: INFO: PersistentVolume local-286sq found and phase=Bound (106.344526ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-bmgf
STEP: Creating a pod to test subpath
Jun 23 20:39:04.762: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-bmgf" in namespace "provisioning-9201" to be "Succeeded or Failed"
Jun 23 20:39:04.873: INFO: Pod "pod-subpath-test-preprovisionedpv-bmgf": Phase="Pending", Reason="", readiness=false. Elapsed: 110.657687ms
Jun 23 20:39:06.983: INFO: Pod "pod-subpath-test-preprovisionedpv-bmgf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220659073s
Jun 23 20:39:09.122: INFO: Pod "pod-subpath-test-preprovisionedpv-bmgf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.359963176s
Jun 23 20:39:11.249: INFO: Pod "pod-subpath-test-preprovisionedpv-bmgf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.48637368s
Jun 23 20:39:13.357: INFO: Pod "pod-subpath-test-preprovisionedpv-bmgf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.594120502s
Jun 23 20:39:15.465: INFO: Pod "pod-subpath-test-preprovisionedpv-bmgf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.702591197s
Jun 23 20:39:17.575: INFO: Pod "pod-subpath-test-preprovisionedpv-bmgf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.812227402s
STEP: Saw pod success
Jun 23 20:39:17.575: INFO: Pod "pod-subpath-test-preprovisionedpv-bmgf" satisfied condition "Succeeded or Failed"
Jun 23 20:39:17.681: INFO: Trying to get logs from node ip-172-20-0-98.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-bmgf container test-container-subpath-preprovisionedpv-bmgf: <nil>
STEP: delete the pod
Jun 23 20:39:17.931: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-bmgf to disappear
Jun 23 20:39:18.037: INFO: Pod pod-subpath-test-preprovisionedpv-bmgf no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-bmgf
Jun 23 20:39:18.037: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-bmgf" in namespace "provisioning-9201"
... skipping 30 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should support readOnly directory specified in the volumeMount
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":5,"skipped":37,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:39:21.079: INFO: Driver "local" does not provide raw block - skipping
... skipping 48 lines ...
Jun 23 20:38:47.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support file as subpath [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
Jun 23 20:38:47.764: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 23 20:38:48.010: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-1388" in namespace "provisioning-1388" to be "Succeeded or Failed"
Jun 23 20:38:48.134: INFO: Pod "hostpath-symlink-prep-provisioning-1388": Phase="Pending", Reason="", readiness=false. Elapsed: 123.731611ms
Jun 23 20:38:50.243: INFO: Pod "hostpath-symlink-prep-provisioning-1388": Phase="Pending", Reason="", readiness=false. Elapsed: 2.232954075s
Jun 23 20:38:52.357: INFO: Pod "hostpath-symlink-prep-provisioning-1388": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.347171364s
STEP: Saw pod success
Jun 23 20:38:52.357: INFO: Pod "hostpath-symlink-prep-provisioning-1388" satisfied condition "Succeeded or Failed"
Jun 23 20:38:52.357: INFO: Deleting pod "hostpath-symlink-prep-provisioning-1388" in namespace "provisioning-1388"
Jun 23 20:38:52.472: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-1388" to be fully deleted
Jun 23 20:38:52.580: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-dgfv
STEP: Creating a pod to test atomic-volume-subpath
Jun 23 20:38:52.699: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-dgfv" in namespace "provisioning-1388" to be "Succeeded or Failed"
Jun 23 20:38:52.820: INFO: Pod "pod-subpath-test-inlinevolume-dgfv": Phase="Pending", Reason="", readiness=false. Elapsed: 121.337755ms
Jun 23 20:38:54.929: INFO: Pod "pod-subpath-test-inlinevolume-dgfv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.230006729s
Jun 23 20:38:57.037: INFO: Pod "pod-subpath-test-inlinevolume-dgfv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.338400504s
Jun 23 20:38:59.148: INFO: Pod "pod-subpath-test-inlinevolume-dgfv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.4495255s
Jun 23 20:39:01.262: INFO: Pod "pod-subpath-test-inlinevolume-dgfv": Phase="Running", Reason="", readiness=true. Elapsed: 8.563342891s
Jun 23 20:39:03.373: INFO: Pod "pod-subpath-test-inlinevolume-dgfv": Phase="Running", Reason="", readiness=true. Elapsed: 10.673880534s
... skipping 2 lines ...
Jun 23 20:39:09.704: INFO: Pod "pod-subpath-test-inlinevolume-dgfv": Phase="Running", Reason="", readiness=true. Elapsed: 17.005435135s
Jun 23 20:39:11.813: INFO: Pod "pod-subpath-test-inlinevolume-dgfv": Phase="Running", Reason="", readiness=true. Elapsed: 19.114725546s
Jun 23 20:39:13.922: INFO: Pod "pod-subpath-test-inlinevolume-dgfv": Phase="Running", Reason="", readiness=true. Elapsed: 21.222968877s
Jun 23 20:39:16.031: INFO: Pod "pod-subpath-test-inlinevolume-dgfv": Phase="Running", Reason="", readiness=true. Elapsed: 23.332358684s
Jun 23 20:39:18.140: INFO: Pod "pod-subpath-test-inlinevolume-dgfv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.440925672s
STEP: Saw pod success
Jun 23 20:39:18.140: INFO: Pod "pod-subpath-test-inlinevolume-dgfv" satisfied condition "Succeeded or Failed"
Jun 23 20:39:18.247: INFO: Trying to get logs from node ip-172-20-0-87.eu-west-1.compute.internal pod pod-subpath-test-inlinevolume-dgfv container test-container-subpath-inlinevolume-dgfv: <nil>
STEP: delete the pod
Jun 23 20:39:18.478: INFO: Waiting for pod pod-subpath-test-inlinevolume-dgfv to disappear
Jun 23 20:39:18.585: INFO: Pod pod-subpath-test-inlinevolume-dgfv no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-dgfv
Jun 23 20:39:18.585: INFO: Deleting pod "pod-subpath-test-inlinevolume-dgfv" in namespace "provisioning-1388"
STEP: Deleting pod
Jun 23 20:39:18.693: INFO: Deleting pod "pod-subpath-test-inlinevolume-dgfv" in namespace "provisioning-1388"
Jun 23 20:39:18.914: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-1388" in namespace "provisioning-1388" to be "Succeeded or Failed"
Jun 23 20:39:19.025: INFO: Pod "hostpath-symlink-prep-provisioning-1388": Phase="Pending", Reason="", readiness=false. Elapsed: 110.406936ms
Jun 23 20:39:21.136: INFO: Pod "hostpath-symlink-prep-provisioning-1388": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222117561s
Jun 23 20:39:23.244: INFO: Pod "hostpath-symlink-prep-provisioning-1388": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.330160712s
STEP: Saw pod success
Jun 23 20:39:23.245: INFO: Pod "hostpath-symlink-prep-provisioning-1388" satisfied condition "Succeeded or Failed"
Jun 23 20:39:23.245: INFO: Deleting pod "hostpath-symlink-prep-provisioning-1388" in namespace "provisioning-1388"
Jun 23 20:39:23.360: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-1388" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 23 20:39:23.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-1388" for this suite.
... skipping 41 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 23 20:39:24.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-1196" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":6,"skipped":49,"failed":0}
SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":4,"skipped":21,"failed":0}
[BeforeEach] [sig-api-machinery] API priority and fairness
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 23 20:39:23.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename apf
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 37 lines ...
• [SLOW TEST:54.728 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for multiple CRDs of different groups [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":5,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:39:24.468: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 66 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl expose
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1246
should create services for rc [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":-1,"completed":6,"skipped":73,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:39:26.083: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 140 lines ...
Jun 23 20:39:17.489: INFO: PersistentVolumeClaim pvc-4hg8d found but phase is Pending instead of Bound.
Jun 23 20:39:19.599: INFO: PersistentVolumeClaim pvc-4hg8d found and phase=Bound (8.553347657s)
Jun 23 20:39:19.599: INFO: Waiting up to 3m0s for PersistentVolume local-7mm4h to have phase Bound
Jun 23 20:39:19.705: INFO: PersistentVolume local-7mm4h found and phase=Bound (105.940976ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-rc2h
STEP: Creating a pod to test subpath
Jun 23 20:39:20.029: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-rc2h" in namespace "provisioning-5218" to be "Succeeded or Failed"
Jun 23 20:39:20.136: INFO: Pod "pod-subpath-test-preprovisionedpv-rc2h": Phase="Pending", Reason="", readiness=false. Elapsed: 107.252796ms
Jun 23 20:39:22.243: INFO: Pod "pod-subpath-test-preprovisionedpv-rc2h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21386619s
Jun 23 20:39:24.353: INFO: Pod "pod-subpath-test-preprovisionedpv-rc2h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.32427362s
STEP: Saw pod success
Jun 23 20:39:24.353: INFO: Pod "pod-subpath-test-preprovisionedpv-rc2h" satisfied condition "Succeeded or Failed"
Jun 23 20:39:24.460: INFO: Trying to get logs from node ip-172-20-0-238.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-rc2h container test-container-subpath-preprovisionedpv-rc2h: <nil>
STEP: delete the pod
Jun 23 20:39:24.685: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-rc2h to disappear
Jun 23 20:39:24.792: INFO: Pod pod-subpath-test-preprovisionedpv-rc2h no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-rc2h
Jun 23 20:39:24.792: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-rc2h" in namespace "provisioning-5218"
... skipping 26 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should support readOnly file specified in the volumeMount [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":5,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:39:27.119: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 48 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Driver: windows-gcepd]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Dynamic PV (delayed binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Only supported for providers [gce gke] (not aws)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
... skipping 33 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: block]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Dynamic PV (delayed binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Driver local doesn't support DynamicPV -- skipping
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 63 lines ...
• [SLOW TEST:6.216 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
should be updated [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":26,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:39:30.692: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 94 lines ...
Jun 23 20:38:13.874: INFO: PersistentVolumeClaim csi-hostpathjx4jd found but phase is Pending instead of Bound.
Jun 23 20:38:15.980: INFO: PersistentVolumeClaim csi-hostpathjx4jd found but phase is Pending instead of Bound.
Jun 23 20:38:18.086: INFO: PersistentVolumeClaim csi-hostpathjx4jd found but phase is Pending instead of Bound.
Jun 23 20:38:20.192: INFO: PersistentVolumeClaim csi-hostpathjx4jd found and phase=Bound (29.603400021s)
STEP: Creating pod pod-subpath-test-dynamicpv-bkhz
STEP: Creating a pod to test subpath
Jun 23 20:38:20.512: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-bkhz" in namespace "provisioning-8202" to be "Succeeded or Failed"
Jun 23 20:38:20.621: INFO: Pod "pod-subpath-test-dynamicpv-bkhz": Phase="Pending", Reason="", readiness=false. Elapsed: 108.98074ms
Jun 23 20:38:22.735: INFO: Pod "pod-subpath-test-dynamicpv-bkhz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223325001s
Jun 23 20:38:24.842: INFO: Pod "pod-subpath-test-dynamicpv-bkhz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330387099s
Jun 23 20:38:26.947: INFO: Pod "pod-subpath-test-dynamicpv-bkhz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.435580782s
Jun 23 20:38:29.122: INFO: Pod "pod-subpath-test-dynamicpv-bkhz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.610177514s
Jun 23 20:38:31.288: INFO: Pod "pod-subpath-test-dynamicpv-bkhz": Phase="Pending", Reason="", readiness=false. Elapsed: 10.776684234s
... skipping 2 lines ...
Jun 23 20:38:37.619: INFO: Pod "pod-subpath-test-dynamicpv-bkhz": Phase="Pending", Reason="", readiness=false. Elapsed: 17.107224627s
Jun 23 20:38:39.725: INFO: Pod "pod-subpath-test-dynamicpv-bkhz": Phase="Pending", Reason="", readiness=false. Elapsed: 19.213478148s
Jun 23 20:38:41.831: INFO: Pod "pod-subpath-test-dynamicpv-bkhz": Phase="Pending", Reason="", readiness=false. Elapsed: 21.319134792s
Jun 23 20:38:43.941: INFO: Pod "pod-subpath-test-dynamicpv-bkhz": Phase="Pending", Reason="", readiness=false. Elapsed: 23.429405212s
Jun 23 20:38:46.046: INFO: Pod "pod-subpath-test-dynamicpv-bkhz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.534382192s
STEP: Saw pod success
Jun 23 20:38:46.046: INFO: Pod "pod-subpath-test-dynamicpv-bkhz" satisfied condition "Succeeded or Failed"
Jun 23 20:38:46.151: INFO: Trying to get logs from node ip-172-20-0-98.eu-west-1.compute.internal pod pod-subpath-test-dynamicpv-bkhz container test-container-subpath-dynamicpv-bkhz: <nil>
STEP: delete the pod
Jun 23 20:38:46.376: INFO: Waiting for pod pod-subpath-test-dynamicpv-bkhz to disappear
Jun 23 20:38:46.481: INFO: Pod pod-subpath-test-dynamicpv-bkhz no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-bkhz
Jun 23 20:38:46.481: INFO: Deleting pod "pod-subpath-test-dynamicpv-bkhz" in namespace "provisioning-8202"
... skipping 105 lines ...
• [SLOW TEST:21.737 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate custom resource with different stored version [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":4,"skipped":29,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 20 lines ...
Jun 23 20:39:17.387: INFO: PersistentVolumeClaim pvc-rrjbb found but phase is Pending instead of Bound.
Jun 23 20:39:19.493: INFO: PersistentVolumeClaim pvc-rrjbb found and phase=Bound (8.544856134s)
Jun 23 20:39:19.493: INFO: Waiting up to 3m0s for PersistentVolume local-zqlw6 to have phase Bound
Jun 23 20:39:19.600: INFO: PersistentVolume local-zqlw6 found and phase=Bound (106.629977ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-zpgx
STEP: Creating a pod to test subpath
Jun 23 20:39:19.920: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-zpgx" in namespace "provisioning-6765" to be "Succeeded or Failed"
Jun 23 20:39:20.030: INFO: Pod "pod-subpath-test-preprovisionedpv-zpgx": Phase="Pending", Reason="", readiness=false. Elapsed: 109.856947ms
Jun 23 20:39:22.138: INFO: Pod "pod-subpath-test-preprovisionedpv-zpgx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217718544s
Jun 23 20:39:24.247: INFO: Pod "pod-subpath-test-preprovisionedpv-zpgx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.326947765s
Jun 23 20:39:26.359: INFO: Pod "pod-subpath-test-preprovisionedpv-zpgx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.438619573s
STEP: Saw pod success
Jun 23 20:39:26.359: INFO: Pod "pod-subpath-test-preprovisionedpv-zpgx" satisfied condition "Succeeded or Failed"
Jun 23 20:39:26.467: INFO: Trying to get logs from node ip-172-20-0-42.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-zpgx container test-container-subpath-preprovisionedpv-zpgx: <nil>
STEP: delete the pod
Jun 23 20:39:26.711: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-zpgx to disappear
Jun 23 20:39:26.819: INFO: Pod pod-subpath-test-preprovisionedpv-zpgx no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-zpgx
Jun 23 20:39:26.819: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-zpgx" in namespace "provisioning-6765"
STEP: Creating pod pod-subpath-test-preprovisionedpv-zpgx
STEP: Creating a pod to test subpath
Jun 23 20:39:27.034: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-zpgx" in namespace "provisioning-6765" to be "Succeeded or Failed"
Jun 23 20:39:27.140: INFO: Pod "pod-subpath-test-preprovisionedpv-zpgx": Phase="Pending", Reason="", readiness=false. Elapsed: 106.219358ms
Jun 23 20:39:29.247: INFO: Pod "pod-subpath-test-preprovisionedpv-zpgx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213276673s
Jun 23 20:39:31.355: INFO: Pod "pod-subpath-test-preprovisionedpv-zpgx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.321233256s
STEP: Saw pod success
Jun 23 20:39:31.355: INFO: Pod "pod-subpath-test-preprovisionedpv-zpgx" satisfied condition "Succeeded or Failed"
Jun 23 20:39:31.467: INFO: Trying to get logs from node ip-172-20-0-42.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-zpgx container test-container-subpath-preprovisionedpv-zpgx: <nil>
STEP: delete the pod
Jun 23 20:39:31.694: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-zpgx to disappear
Jun 23 20:39:31.811: INFO: Pod pod-subpath-test-preprovisionedpv-zpgx no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-zpgx
Jun 23 20:39:31.811: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-zpgx" in namespace "provisioning-6765"
... skipping 21 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should support existing directories when readOnly specified in the volumeSource
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":5,"skipped":46,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:39:33.397: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 99 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
One pod requesting one prebound PVC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
should be able to mount volume and read from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":3,"skipped":35,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:12.595 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a replication controller. [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":6,"skipped":41,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:39:39.743: INFO: Driver hostPath doesn't support GenericEphemeralVolume -- skipping
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 34 lines ...
• [SLOW TEST:52.896 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for multiple CRDs of same group and version but different kinds [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":2,"skipped":21,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 23 20:39:30.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:49
[It] volume on default medium should have the correct mode using FSGroup
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:70
STEP: Creating a pod to test emptydir volume type on node default medium
Jun 23 20:39:31.626: INFO: Waiting up to 5m0s for pod "pod-7b76bf4d-2b45-40a7-a58b-55cec2a1abbe" in namespace "emptydir-2980" to be "Succeeded or Failed"
Jun 23 20:39:31.737: INFO: Pod "pod-7b76bf4d-2b45-40a7-a58b-55cec2a1abbe": Phase="Pending", Reason="", readiness=false. Elapsed: 111.595936ms
Jun 23 20:39:33.843: INFO: Pod "pod-7b76bf4d-2b45-40a7-a58b-55cec2a1abbe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217497067s
Jun 23 20:39:35.949: INFO: Pod "pod-7b76bf4d-2b45-40a7-a58b-55cec2a1abbe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.32366953s
Jun 23 20:39:38.062: INFO: Pod "pod-7b76bf4d-2b45-40a7-a58b-55cec2a1abbe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.435945168s
Jun 23 20:39:40.168: INFO: Pod "pod-7b76bf4d-2b45-40a7-a58b-55cec2a1abbe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.542072298s
STEP: Saw pod success
Jun 23 20:39:40.168: INFO: Pod "pod-7b76bf4d-2b45-40a7-a58b-55cec2a1abbe" satisfied condition "Succeeded or Failed"
Jun 23 20:39:40.273: INFO: Trying to get logs from node ip-172-20-0-98.eu-west-1.compute.internal pod pod-7b76bf4d-2b45-40a7-a58b-55cec2a1abbe container test-container: <nil>
STEP: delete the pod
Jun 23 20:39:40.489: INFO: Waiting for pod pod-7b76bf4d-2b45-40a7-a58b-55cec2a1abbe to disappear
Jun 23 20:39:40.594: INFO: Pod pod-7b76bf4d-2b45-40a7-a58b-55cec2a1abbe no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:47
volume on default medium should have the correct mode using FSGroup
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:70
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":-1,"completed":3,"skipped":21,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:39:40.811: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 73 lines ...
• [SLOW TEST:37.338 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should be able to preserve UDP traffic when server pod cycles for a NodePort service
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service","total":-1,"completed":7,"skipped":22,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:39:42.585: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 118 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 23 20:39:43.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5133" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":8,"skipped":46,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:39:43.694: INFO: Only supported for providers [azure] (not aws)
... skipping 222 lines ...
Jun 23 20:39:00.615: INFO: PersistentVolumeClaim pvc-nq7hv found but phase is Pending instead of Bound.
Jun 23 20:39:02.727: INFO: PersistentVolumeClaim pvc-nq7hv found and phase=Bound (2.245784313s)
STEP: Deleting the previously created pod
Jun 23 20:39:17.286: INFO: Deleting pod "pvc-volume-tester-t98g8" in namespace "csi-mock-volumes-4948"
Jun 23 20:39:17.393: INFO: Wait up to 5m0s for pod "pvc-volume-tester-t98g8" to be fully deleted
STEP: Checking CSI driver logs
Jun 23 20:39:25.739: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/86c51ed8-21e2-4f62-a430-887221d14aea/volumes/kubernetes.io~csi/pvc-fc1027fe-fd01-42ba-bec1-193be2efce11/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-t98g8
Jun 23 20:39:25.739: INFO: Deleting pod "pvc-volume-tester-t98g8" in namespace "csi-mock-volumes-4948"
STEP: Deleting claim pvc-nq7hv
Jun 23 20:39:26.071: INFO: Waiting up to 2m0s for PersistentVolume pvc-fc1027fe-fd01-42ba-bec1-193be2efce11 to get deleted
Jun 23 20:39:26.180: INFO: PersistentVolume pvc-fc1027fe-fd01-42ba-bec1-193be2efce11 found and phase=Released (108.508361ms)
Jun 23 20:39:28.287: INFO: PersistentVolume pvc-fc1027fe-fd01-42ba-bec1-193be2efce11 found and phase=Released (2.215969171s)
... skipping 46 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
CSI workload information using mock driver
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:469
should not be passed when podInfoOnMount=nil
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:519
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil","total":-1,"completed":2,"skipped":19,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:39:46.646: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 315 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should be able to unmount after the subpath directory is deleted [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":7,"skipped":52,"failed":0}
SSS
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","total":-1,"completed":3,"skipped":30,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 23 20:39:10.797: INFO: >>> kubeConfig: /root/.kube/config
... skipping 22 lines ...
Jun 23 20:39:31.910: INFO: PersistentVolumeClaim pvc-t8n5t found but phase is Pending instead of Bound.
Jun 23 20:39:34.018: INFO: PersistentVolumeClaim pvc-t8n5t found and phase=Bound (8.543054306s)
Jun 23 20:39:34.018: INFO: Waiting up to 3m0s for PersistentVolume local-t2kp2 to have phase Bound
Jun 23 20:39:34.124: INFO: PersistentVolume local-t2kp2 found and phase=Bound (105.815773ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-rdwq
STEP: Creating a pod to test exec-volume-test
Jun 23 20:39:34.450: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-rdwq" in namespace "volume-6392" to be "Succeeded or Failed"
Jun 23 20:39:34.558: INFO: Pod "exec-volume-test-preprovisionedpv-rdwq": Phase="Pending", Reason="", readiness=false. Elapsed: 108.439103ms
Jun 23 20:39:36.665: INFO: Pod "exec-volume-test-preprovisionedpv-rdwq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215568246s
Jun 23 20:39:38.775: INFO: Pod "exec-volume-test-preprovisionedpv-rdwq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.325129054s
Jun 23 20:39:40.881: INFO: Pod "exec-volume-test-preprovisionedpv-rdwq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.430918074s
Jun 23 20:39:42.987: INFO: Pod "exec-volume-test-preprovisionedpv-rdwq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.537516s
Jun 23 20:39:45.094: INFO: Pod "exec-volume-test-preprovisionedpv-rdwq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.644367064s
STEP: Saw pod success
Jun 23 20:39:45.094: INFO: Pod "exec-volume-test-preprovisionedpv-rdwq" satisfied condition "Succeeded or Failed"
Jun 23 20:39:45.200: INFO: Trying to get logs from node ip-172-20-0-98.eu-west-1.compute.internal pod exec-volume-test-preprovisionedpv-rdwq container exec-container-preprovisionedpv-rdwq: <nil>
STEP: delete the pod
Jun 23 20:39:45.422: INFO: Waiting for pod exec-volume-test-preprovisionedpv-rdwq to disappear
Jun 23 20:39:45.528: INFO: Pod exec-volume-test-preprovisionedpv-rdwq no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-rdwq
Jun 23 20:39:45.528: INFO: Deleting pod "exec-volume-test-preprovisionedpv-rdwq" in namespace "volume-6392"
... skipping 54 lines ...
• [SLOW TEST:8.489 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should serve a basic image on each replica with a public image [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":4,"skipped":25,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 96 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Pre-provisioned PV (default fs)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should store data
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":6,"skipped":49,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:39:52.188: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 54 lines ...
• [SLOW TEST:12.143 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to deny custom resource creation, update and deletion [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":9,"skipped":69,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:39:55.881: INFO: Only supported for providers [gce gke] (not aws)
... skipping 14 lines ...
Only supported for providers [gce gke] (not aws)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":9,"skipped":73,"failed":0}
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 23 20:39:29.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 64 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 23 20:40:00.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-3490" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":10,"skipped":77,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:40:00.264: INFO: Only supported for providers [gce gke] (not aws)
... skipping 88 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should not mount / map unused volumes in a pod [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:352
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":5,"skipped":33,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:40:01.374: INFO: Only supported for providers [vsphere] (not aws)
... skipping 79 lines ...
• [SLOW TEST:28.976 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:206
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","total":-1,"completed":6,"skipped":55,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:40:02.397: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 203 lines ...
STEP: creating an object not containing a namespace with in-cluster config
Jun 23 20:39:54.368: INFO: Running '/logs/artifacts/f01f2595-f32f-11ec-9e31-9224b4edca5e/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-1691 exec httpd -- /bin/sh -x -c /tmp/kubectl create -f /tmp/invalid-configmap-without-namespace.yaml --v=6 2>&1'
Jun 23 20:39:56.242: INFO: rc: 255
STEP: trying to use kubectl with invalid token
Jun 23 20:39:56.242: INFO: Running '/logs/artifacts/f01f2595-f32f-11ec-9e31-9224b4edca5e/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-1691 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1'
Jun 23 20:39:57.475: INFO: rc: 255
Jun 23 20:39:57.475: INFO: got err error running /logs/artifacts/f01f2595-f32f-11ec-9e31-9224b4edca5e/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-1691 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1:
Command stdout:
I0623 20:39:57.334483     185 merged_client_builder.go:163] Using in-cluster namespace
I0623 20:39:57.334695     185 merged_client_builder.go:121] Using in-cluster configuration
I0623 20:39:57.342698     185 merged_client_builder.go:121] Using in-cluster configuration
I0623 20:39:57.343203     185 round_trippers.go:463] GET https://100.64.0.1:443/api/v1/namespaces/kubectl-1691/pods?limit=500
I0623 20:39:57.343345     185 round_trippers.go:469] Request Headers:
... skipping 7 lines ...
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}]
F0623 20:39:57.352711     185 helpers.go:118] error: You must be logged in to the server (Unauthorized)
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1038 +0x8a
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x307e020, 0x3, 0x0, 0xc0004d2000, 0x2, {0x25f1447, 0x10}, 0xc000450800, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:987 +0x5fd
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0xc0004b6d00, 0x3a, 0x0, {0x0, 0x0}, 0x0, {0xc0005ce540, 0x1, 0x1})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:735 +0x1ae
k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1518 k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal({0xc0004b6d00, 0x3a}, 0xc0005ce480) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:96 +0xc5 k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr({0x1fecc40, 0xc00063c1c8}, 0x1e78210) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:180 +0x69a k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:118 k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func2(0xc00031c780, {0xc0001c3410, 0x1, 0x3}) ... skipping 70 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:725 +0xac5 stderr: + /tmp/kubectl get pods '--token=invalid' '--v=7' command terminated with exit code 255 error: exit status 255 STEP: trying to use kubectl with invalid server Jun 23 20:39:57.475: INFO: Running '/logs/artifacts/f01f2595-f32f-11ec-9e31-9224b4edca5e/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-1691 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1' Jun 23 20:39:58.745: INFO: rc: 255 Jun 23 20:39:58.745: INFO: got err error running /logs/artifacts/f01f2595-f32f-11ec-9e31-9224b4edca5e/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-1691 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1: Command stdout: I0623 20:39:58.603036 196 merged_client_builder.go:163] Using in-cluster namespace I0623 20:39:58.629923 196 round_trippers.go:553] GET 
http://invalid/api?timeout=32s in 26 milliseconds I0623 20:39:58.630022 196 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 169.254.20.10:53: no such host I0623 20:39:58.633609 196 round_trippers.go:553] GET http://invalid/api?timeout=32s in 3 milliseconds I0623 20:39:58.633676 196 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 169.254.20.10:53: no such host I0623 20:39:58.633766 196 shortcut.go:89] Error loading discovery information: Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 169.254.20.10:53: no such host I0623 20:39:58.647282 196 round_trippers.go:553] GET http://invalid/api?timeout=32s in 13 milliseconds I0623 20:39:58.647386 196 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 169.254.20.10:53: no such host I0623 20:39:58.650683 196 round_trippers.go:553] GET http://invalid/api?timeout=32s in 3 milliseconds I0623 20:39:58.650742 196 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 169.254.20.10:53: no such host I0623 20:39:58.662334 196 round_trippers.go:553] GET http://invalid/api?timeout=32s in 11 milliseconds I0623 20:39:58.662598 196 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 169.254.20.10:53: no such host I0623 20:39:58.662686 196 helpers.go:237] Connection error: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 169.254.20.10:53: no such host F0623 20:39:58.662759 196 helpers.go:118] Unable to connect to the server: dial tcp: lookup invalid on 169.254.20.10:53: no such host goroutine 1 [running]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0x1) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1038 +0x8a k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x307e020, 0x3, 0x0, 0xc0000b5030, 0x2, {0x25f1447, 0x10}, 0xc000060400, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:987 +0x5fd k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0xc00065f260, 0x5b, 0x0, {0x0, 0x0}, 0x35, {0xc000418fd0, 0x1, 0x1}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:735 +0x1ae k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1518 k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal({0xc00065f260, 0x5b}, 0xc00059bda0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:96 +0xc5 k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr({0x1febee0, 0xc00059bda0}, 0x1e78210) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:191 +0x7d7 k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:118 k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func2(0xc000133900, {0xc000544de0, 0x1, 0x3}) ... skipping 28 lines ... 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:179 +0x85 stderr: + /tmp/kubectl get pods '--server=invalid' '--v=6' command terminated with exit code 255 error: exit status 255 STEP: trying to use kubectl with invalid namespace Jun 23 20:39:58.745: INFO: Running '/logs/artifacts/f01f2595-f32f-11ec-9e31-9224b4edca5e/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-1691 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --namespace=invalid --v=6 2>&1' Jun 23 20:39:59.923: INFO: stderr: "+ /tmp/kubectl get pods '--namespace=invalid' '--v=6'\n" Jun 23 20:39:59.923: INFO: stdout: "I0623 20:39:59.829755 206 merged_client_builder.go:121] Using in-cluster configuration\nI0623 20:39:59.842586 206 merged_client_builder.go:121] Using in-cluster configuration\nI0623 20:39:59.855323 206 round_trippers.go:553] GET https://100.64.0.1:443/api/v1/namespaces/invalid/pods?limit=500 200 OK in 12 milliseconds\nNo resources found in invalid namespace.\n" Jun 23 20:39:59.923: INFO: stdout: I0623 20:39:59.829755 206 merged_client_builder.go:121] Using in-cluster configuration ... skipping 72 lines ... 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Simple pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379 should handle in-cluster config /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:654 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Simple pod should handle in-cluster config","total":-1,"completed":5,"skipped":30,"failed":0} [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 23 20:40:03.087: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 47 lines ... Jun 23 20:39:47.609: INFO: PersistentVolumeClaim pvc-fd2qk found but phase is Pending instead of Bound. Jun 23 20:39:49.763: INFO: PersistentVolumeClaim pvc-fd2qk found and phase=Bound (4.371260759s) Jun 23 20:39:49.763: INFO: Waiting up to 3m0s for PersistentVolume local-m65cn to have phase Bound Jun 23 20:39:49.927: INFO: PersistentVolume local-m65cn found and phase=Bound (164.505992ms) STEP: Creating pod pod-subpath-test-preprovisionedpv-vpzv STEP: Creating a pod to test subpath Jun 23 20:39:50.253: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-vpzv" in namespace "provisioning-1671" to be "Succeeded or Failed" Jun 23 20:39:50.359: INFO: Pod "pod-subpath-test-preprovisionedpv-vpzv": Phase="Pending", Reason="", readiness=false. Elapsed: 106.241443ms Jun 23 20:39:52.466: INFO: Pod "pod-subpath-test-preprovisionedpv-vpzv": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.213105241s Jun 23 20:39:54.573: INFO: Pod "pod-subpath-test-preprovisionedpv-vpzv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.320382809s Jun 23 20:39:56.687: INFO: Pod "pod-subpath-test-preprovisionedpv-vpzv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.434153169s Jun 23 20:39:58.793: INFO: Pod "pod-subpath-test-preprovisionedpv-vpzv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.540783889s Jun 23 20:40:00.902: INFO: Pod "pod-subpath-test-preprovisionedpv-vpzv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.649496074s STEP: Saw pod success Jun 23 20:40:00.902: INFO: Pod "pod-subpath-test-preprovisionedpv-vpzv" satisfied condition "Succeeded or Failed" Jun 23 20:40:01.008: INFO: Trying to get logs from node ip-172-20-0-87.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-vpzv container test-container-volume-preprovisionedpv-vpzv: <nil> STEP: delete the pod Jun 23 20:40:01.231: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-vpzv to disappear Jun 23 20:40:01.338: INFO: Pod pod-subpath-test-preprovisionedpv-vpzv no longer exists STEP: Deleting pod pod-subpath-test-preprovisionedpv-vpzv Jun 23 20:40:01.338: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-vpzv" in namespace "provisioning-1671" ... skipping 34 lines ... 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58 [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50 should support non-existent path /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194 ------------------------------ {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":7,"skipped":50,"failed":0} [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 23 20:40:05.293: INFO: Only supported for providers [gce gke] (not aws) [AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 58 lines ... 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58 [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50 should create read-only inline ephemeral volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:173 ------------------------------ {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":4,"skipped":33,"failed":0} S ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client ... skipping 76 lines ... • [SLOW TEST:72.685 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":14,"failed":0} SS ------------------------------ [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 23 20:40:06.297: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping ... 
skipping 89 lines ... Jun 23 20:39:19.189: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-btj7b] to have phase Bound Jun 23 20:39:19.295: INFO: PersistentVolumeClaim pvc-btj7b found and phase=Bound (105.636576ms) STEP: Deleting the previously created pod Jun 23 20:39:45.822: INFO: Deleting pod "pvc-volume-tester-f2wkb" in namespace "csi-mock-volumes-4878" Jun 23 20:39:45.929: INFO: Wait up to 5m0s for pod "pvc-volume-tester-f2wkb" to be fully deleted STEP: Checking CSI driver logs Jun 23 20:39:50.255: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/e31b1495-9cbc-4744-bff0-c20cf5c6f1c2/volumes/kubernetes.io~csi/pvc-6a7a7e7c-98c8-41bb-99c6-bfec77ee8e6a/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-f2wkb Jun 23 20:39:50.255: INFO: Deleting pod "pvc-volume-tester-f2wkb" in namespace "csi-mock-volumes-4878" STEP: Deleting claim pvc-btj7b Jun 23 20:39:50.572: INFO: Waiting up to 2m0s for PersistentVolume pvc-6a7a7e7c-98c8-41bb-99c6-bfec77ee8e6a to get deleted Jun 23 20:39:50.677: INFO: PersistentVolume pvc-6a7a7e7c-98c8-41bb-99c6-bfec77ee8e6a was removed STEP: Deleting storageclass csi-mock-volumes-4878-sch574r ... skipping 43 lines ... 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:469 should not be passed when CSIDriver does not exist /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:519 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist","total":-1,"completed":7,"skipped":50,"failed":0} SS ------------------------------ [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning ... skipping 75 lines ... • [SLOW TEST:62.332 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a failing exec liveness probe that took longer than the timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:260 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout","total":-1,"completed":3,"skipped":38,"failed":0} S ------------------------------ [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 23 20:40:08.506: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 70 lines ... 
• [SLOW TEST:8.346 seconds] [sig-node] Ephemeral Containers [NodeFeature:EphemeralContainers] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 will start an ephemeral container in an existing pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/ephemeral_containers.go:42 ------------------------------ {"msg":"PASSED [sig-node] Ephemeral Containers [NodeFeature:EphemeralContainers] will start an ephemeral container in an existing pod","total":-1,"completed":11,"skipped":79,"failed":0} S ------------------------------ [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 23 20:40:08.618: INFO: Driver hostPath doesn't support DynamicPV -- skipping ... skipping 145 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 23 20:40:08.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-6636" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":8,"skipped":57,"failed":0} S ------------------------------ [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 23 20:40:09.196: INFO: Only supported for providers [openstack] (not aws) ... skipping 270 lines ... 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40 [Testpattern: Dynamic PV (default fs)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50 should provision storage with pvc data source /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:239 ------------------------------ {"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source","total":-1,"completed":3,"skipped":13,"failed":0} SSS ------------------------------ [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 23 20:40:11.463: INFO: Only supported for providers [openstack] (not aws) ... skipping 34 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 23 20:40:12.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "apf-9627" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-api-machinery] API priority and fairness should ensure that requests can be classified by adding FlowSchema and PriorityLevelConfiguration","total":-1,"completed":9,"skipped":72,"failed":0} [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 23 20:40:12.272: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 42 lines ... • [SLOW TEST:9.221 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should ensure a single API token exists /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:52 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should ensure a single API token exists","total":-1,"completed":6,"skipped":33,"failed":0} SSSS ------------------------------ [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 23 20:40:12.322: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 107 lines ... 
Jun 23 20:39:12.539: INFO: Deleting ReplicationController up-down-1 took: 107.105193ms Jun 23 20:39:12.639: INFO: Terminating ReplicationController up-down-1 pods took: 100.535872ms STEP: verifying service up-down-1 is not up Jun 23 20:39:19.959: INFO: Creating new host exec pod Jun 23 20:39:20.176: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Jun 23 20:39:22.283: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true) Jun 23 20:39:22.283: INFO: Running '/logs/artifacts/f01f2595-f32f-11ec-9e31-9224b4edca5e/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1829 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.69.26.183:80 && echo service-down-failed' Jun 23 20:39:25.701: INFO: rc: 28 Jun 23 20:39:25.702: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.69.26.183:80 && echo service-down-failed" in pod services-1829/verify-service-down-host-exec-pod: error running /logs/artifacts/f01f2595-f32f-11ec-9e31-9224b4edca5e/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1829 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.69.26.183:80 && echo service-down-failed: Command stdout: stderr: + curl -g -s --connect-timeout 2 http://100.69.26.183:80 command terminated with exit code 28 error: exit status 28 Output: STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-1829 STEP: verifying service up-down-2 is still up Jun 23 20:39:25.816: INFO: Creating new host exec pod Jun 23 20:39:26.031: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) ... skipping 62 lines ... 
• [SLOW TEST:133.114 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to up and down services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1036 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to up and down services","total":-1,"completed":3,"skipped":30,"failed":0} SSS ------------------------------ [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath ... skipping 9 lines ... Jun 23 20:38:54.790: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} STEP: creating a StorageClass provisioning-83298ghsm STEP: creating a claim Jun 23 20:38:54.906: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Creating pod pod-subpath-test-dynamicpv-5vpr STEP: Creating a pod to test subpath Jun 23 20:38:55.292: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-5vpr" in namespace "provisioning-8329" to be "Succeeded or Failed" Jun 23 20:38:55.430: INFO: Pod "pod-subpath-test-dynamicpv-5vpr": Phase="Pending", Reason="", readiness=false. Elapsed: 138.163681ms Jun 23 20:38:57.536: INFO: Pod "pod-subpath-test-dynamicpv-5vpr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.243906797s Jun 23 20:38:59.643: INFO: Pod "pod-subpath-test-dynamicpv-5vpr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.350374838s Jun 23 20:39:01.781: INFO: Pod "pod-subpath-test-dynamicpv-5vpr": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.48917404s Jun 23 20:39:03.898: INFO: Pod "pod-subpath-test-dynamicpv-5vpr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.605470577s Jun 23 20:39:06.009: INFO: Pod "pod-subpath-test-dynamicpv-5vpr": Phase="Pending", Reason="", readiness=false. Elapsed: 10.71695066s Jun 23 20:39:08.118: INFO: Pod "pod-subpath-test-dynamicpv-5vpr": Phase="Pending", Reason="", readiness=false. Elapsed: 12.826048882s Jun 23 20:39:10.227: INFO: Pod "pod-subpath-test-dynamicpv-5vpr": Phase="Pending", Reason="", readiness=false. Elapsed: 14.935205343s Jun 23 20:39:12.334: INFO: Pod "pod-subpath-test-dynamicpv-5vpr": Phase="Pending", Reason="", readiness=false. Elapsed: 17.041410977s Jun 23 20:39:14.440: INFO: Pod "pod-subpath-test-dynamicpv-5vpr": Phase="Pending", Reason="", readiness=false. Elapsed: 19.147568917s Jun 23 20:39:16.548: INFO: Pod "pod-subpath-test-dynamicpv-5vpr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.255226494s STEP: Saw pod success Jun 23 20:39:16.548: INFO: Pod "pod-subpath-test-dynamicpv-5vpr" satisfied condition "Succeeded or Failed" Jun 23 20:39:16.653: INFO: Trying to get logs from node ip-172-20-0-87.eu-west-1.compute.internal pod pod-subpath-test-dynamicpv-5vpr container test-container-subpath-dynamicpv-5vpr: <nil> STEP: delete the pod Jun 23 20:39:16.882: INFO: Waiting for pod pod-subpath-test-dynamicpv-5vpr to disappear Jun 23 20:39:16.988: INFO: Pod pod-subpath-test-dynamicpv-5vpr no longer exists STEP: Deleting pod pod-subpath-test-dynamicpv-5vpr Jun 23 20:39:16.988: INFO: Deleting pod "pod-subpath-test-dynamicpv-5vpr" in namespace "provisioning-8329" STEP: Creating pod pod-subpath-test-dynamicpv-5vpr STEP: Creating a pod to test subpath Jun 23 20:39:17.227: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-5vpr" in namespace "provisioning-8329" to be "Succeeded or Failed" Jun 23 20:39:17.333: INFO: Pod "pod-subpath-test-dynamicpv-5vpr": Phase="Pending", Reason="", 
readiness=false. Elapsed: 106.007561ms Jun 23 20:39:19.440: INFO: Pod "pod-subpath-test-dynamicpv-5vpr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213094538s Jun 23 20:39:21.546: INFO: Pod "pod-subpath-test-dynamicpv-5vpr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.319056728s Jun 23 20:39:23.652: INFO: Pod "pod-subpath-test-dynamicpv-5vpr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.425342238s Jun 23 20:39:25.759: INFO: Pod "pod-subpath-test-dynamicpv-5vpr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.532528976s Jun 23 20:39:27.865: INFO: Pod "pod-subpath-test-dynamicpv-5vpr": Phase="Pending", Reason="", readiness=false. Elapsed: 10.638237195s ... skipping 7 lines ... Jun 23 20:39:44.735: INFO: Pod "pod-subpath-test-dynamicpv-5vpr": Phase="Pending", Reason="", readiness=false. Elapsed: 27.50824129s Jun 23 20:39:46.842: INFO: Pod "pod-subpath-test-dynamicpv-5vpr": Phase="Pending", Reason="", readiness=false. Elapsed: 29.615161068s Jun 23 20:39:48.951: INFO: Pod "pod-subpath-test-dynamicpv-5vpr": Phase="Pending", Reason="", readiness=false. Elapsed: 31.724606588s Jun 23 20:39:51.062: INFO: Pod "pod-subpath-test-dynamicpv-5vpr": Phase="Pending", Reason="", readiness=false. Elapsed: 33.835176073s Jun 23 20:39:53.168: INFO: Pod "pod-subpath-test-dynamicpv-5vpr": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 35.941304439s STEP: Saw pod success Jun 23 20:39:53.168: INFO: Pod "pod-subpath-test-dynamicpv-5vpr" satisfied condition "Succeeded or Failed" Jun 23 20:39:53.274: INFO: Trying to get logs from node ip-172-20-0-238.eu-west-1.compute.internal pod pod-subpath-test-dynamicpv-5vpr container test-container-subpath-dynamicpv-5vpr: <nil> STEP: delete the pod Jun 23 20:39:53.505: INFO: Waiting for pod pod-subpath-test-dynamicpv-5vpr to disappear Jun 23 20:39:53.701: INFO: Pod pod-subpath-test-dynamicpv-5vpr no longer exists STEP: Deleting pod pod-subpath-test-dynamicpv-5vpr Jun 23 20:39:53.701: INFO: Deleting pod "pod-subpath-test-dynamicpv-5vpr" in namespace "provisioning-8329" ... skipping 48 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266 should be able to pull image [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":7,"skipped":81,"failed":0} SSSSS ------------------------------ [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 23 20:40:15.835: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern ... skipping 124 lines ... 
Jun 23 20:40:12.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jun 23 20:40:12.994: INFO: Waiting up to 5m0s for pod "pod-66a84ba9-0807-4738-b9e9-d2b0a41c8a4b" in namespace "emptydir-7984" to be "Succeeded or Failed"
Jun 23 20:40:13.100: INFO: Pod "pod-66a84ba9-0807-4738-b9e9-d2b0a41c8a4b": Phase="Pending", Reason="", readiness=false. Elapsed: 106.403926ms
Jun 23 20:40:15.207: INFO: Pod "pod-66a84ba9-0807-4738-b9e9-d2b0a41c8a4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213551889s
Jun 23 20:40:17.316: INFO: Pod "pod-66a84ba9-0807-4738-b9e9-d2b0a41c8a4b": Phase="Running", Reason="", readiness=true. Elapsed: 4.32268838s
Jun 23 20:40:19.424: INFO: Pod "pod-66a84ba9-0807-4738-b9e9-d2b0a41c8a4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.429954603s
STEP: Saw pod success
Jun 23 20:40:19.424: INFO: Pod "pod-66a84ba9-0807-4738-b9e9-d2b0a41c8a4b" satisfied condition "Succeeded or Failed"
Jun 23 20:40:19.531: INFO: Trying to get logs from node ip-172-20-0-42.eu-west-1.compute.internal pod pod-66a84ba9-0807-4738-b9e9-d2b0a41c8a4b container test-container: <nil>
STEP: delete the pod
Jun 23 20:40:19.899: INFO: Waiting for pod pod-66a84ba9-0807-4738-b9e9-d2b0a41c8a4b to disappear
Jun 23 20:40:20.005: INFO: Pod pod-66a84ba9-0807-4738-b9e9-d2b0a41c8a4b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.896 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":40,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:40:20.228: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 67 lines ...
Jun 23 20:40:15.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in volume subpath
Jun 23 20:40:16.492: INFO: Waiting up to 5m0s for pod "var-expansion-d635997f-1239-4ee9-921e-a08c12c21761" in namespace "var-expansion-4771" to be "Succeeded or Failed"
Jun 23 20:40:16.599: INFO: Pod "var-expansion-d635997f-1239-4ee9-921e-a08c12c21761": Phase="Pending", Reason="", readiness=false. Elapsed: 106.066334ms
Jun 23 20:40:18.712: INFO: Pod "var-expansion-d635997f-1239-4ee9-921e-a08c12c21761": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219328621s
Jun 23 20:40:20.820: INFO: Pod "var-expansion-d635997f-1239-4ee9-921e-a08c12c21761": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327313677s
Jun 23 20:40:22.927: INFO: Pod "var-expansion-d635997f-1239-4ee9-921e-a08c12c21761": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.434745564s
STEP: Saw pod success
Jun 23 20:40:22.927: INFO: Pod "var-expansion-d635997f-1239-4ee9-921e-a08c12c21761" satisfied condition "Succeeded or Failed"
Jun 23 20:40:23.033: INFO: Trying to get logs from node ip-172-20-0-42.eu-west-1.compute.internal pod var-expansion-d635997f-1239-4ee9-921e-a08c12c21761 container dapi-container: <nil>
STEP: delete the pod
Jun 23 20:40:23.255: INFO: Waiting for pod var-expansion-d635997f-1239-4ee9-921e-a08c12c21761 to disappear
Jun 23 20:40:23.364: INFO: Pod var-expansion-d635997f-1239-4ee9-921e-a08c12c21761 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.737 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow substituting values in a volume subpath [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":8,"skipped":91,"failed":0}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:40:23.599: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 51 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gluster]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
... skipping 86 lines ...
Driver local doesn't support DynamicPV -- skipping
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]","total":-1,"completed":4,"skipped":49,"failed":0}
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 23 20:40:19.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-650a21b2-d6ba-48f0-9487-1023363507d2
STEP: Creating a pod to test consume configMaps
Jun 23 20:40:19.837: INFO: Waiting up to 5m0s for pod "pod-configmaps-72d4dcc7-80de-4588-ac56-5a5179610560" in namespace "configmap-8249" to be "Succeeded or Failed"
Jun 23 20:40:19.956: INFO: Pod "pod-configmaps-72d4dcc7-80de-4588-ac56-5a5179610560": Phase="Pending", Reason="", readiness=false. Elapsed: 118.732559ms
Jun 23 20:40:22.064: INFO: Pod "pod-configmaps-72d4dcc7-80de-4588-ac56-5a5179610560": Phase="Pending", Reason="", readiness=false. Elapsed: 2.226816511s
Jun 23 20:40:24.174: INFO: Pod "pod-configmaps-72d4dcc7-80de-4588-ac56-5a5179610560": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.336624468s
STEP: Saw pod success
Jun 23 20:40:24.174: INFO: Pod "pod-configmaps-72d4dcc7-80de-4588-ac56-5a5179610560" satisfied condition "Succeeded or Failed"
Jun 23 20:40:24.285: INFO: Trying to get logs from node ip-172-20-0-42.eu-west-1.compute.internal pod pod-configmaps-72d4dcc7-80de-4588-ac56-5a5179610560 container agnhost-container: <nil>
STEP: delete the pod
Jun 23 20:40:24.507: INFO: Waiting for pod pod-configmaps-72d4dcc7-80de-4588-ac56-5a5179610560 to disappear
Jun 23 20:40:24.613: INFO: Pod pod-configmaps-72d4dcc7-80de-4588-ac56-5a5179610560 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.859 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":49,"failed":0}
SSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":2,"skipped":21,"failed":0}
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 23 20:40:15.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 106 lines ...
• [SLOW TEST:11.658 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":3,"skipped":21,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:40:26.814: INFO: Only supported for providers [openstack] (not aws)
... skipping 30 lines ...
Jun 23 20:39:49.854: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi}
STEP: creating a StorageClass volume-5593kf84l
STEP: creating a claim
Jun 23 20:39:49.988: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-wmbv
STEP: Creating a pod to test exec-volume-test
Jun 23 20:39:50.326: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-wmbv" in namespace "volume-5593" to be "Succeeded or Failed"
Jun 23 20:39:50.431: INFO: Pod "exec-volume-test-dynamicpv-wmbv": Phase="Pending", Reason="", readiness=false. Elapsed: 104.768058ms
Jun 23 20:39:52.536: INFO: Pod "exec-volume-test-dynamicpv-wmbv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209940055s
Jun 23 20:39:54.641: INFO: Pod "exec-volume-test-dynamicpv-wmbv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.314849389s
Jun 23 20:39:56.747: INFO: Pod "exec-volume-test-dynamicpv-wmbv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.420974867s
Jun 23 20:39:58.853: INFO: Pod "exec-volume-test-dynamicpv-wmbv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.526483347s
Jun 23 20:40:00.959: INFO: Pod "exec-volume-test-dynamicpv-wmbv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.63261215s
Jun 23 20:40:03.064: INFO: Pod "exec-volume-test-dynamicpv-wmbv": Phase="Pending", Reason="", readiness=false. Elapsed: 12.738047936s
Jun 23 20:40:05.170: INFO: Pod "exec-volume-test-dynamicpv-wmbv": Phase="Pending", Reason="", readiness=false. Elapsed: 14.844039339s
Jun 23 20:40:07.275: INFO: Pod "exec-volume-test-dynamicpv-wmbv": Phase="Pending", Reason="", readiness=false. Elapsed: 16.948915007s
Jun 23 20:40:09.382: INFO: Pod "exec-volume-test-dynamicpv-wmbv": Phase="Pending", Reason="", readiness=false. Elapsed: 19.056195051s
Jun 23 20:40:11.494: INFO: Pod "exec-volume-test-dynamicpv-wmbv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.167959388s
STEP: Saw pod success
Jun 23 20:40:11.494: INFO: Pod "exec-volume-test-dynamicpv-wmbv" satisfied condition "Succeeded or Failed"
Jun 23 20:40:11.602: INFO: Trying to get logs from node ip-172-20-0-98.eu-west-1.compute.internal pod exec-volume-test-dynamicpv-wmbv container exec-container-dynamicpv-wmbv: <nil>
STEP: delete the pod
Jun 23 20:40:11.820: INFO: Waiting for pod exec-volume-test-dynamicpv-wmbv to disappear
Jun 23 20:40:11.925: INFO: Pod exec-volume-test-dynamicpv-wmbv no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-wmbv
Jun 23 20:40:11.925: INFO: Deleting pod "exec-volume-test-dynamicpv-wmbv" in namespace "volume-5593"
... skipping 18 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
  [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should allow exec of files on the volume
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":5,"skipped":26,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:40:28.106: INFO: Only supported for providers [gce gke] (not aws)
... skipping 23 lines ...
Jun 23 20:40:23.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Jun 23 20:40:24.272: INFO: Waiting up to 5m0s for pod "downward-api-aea14401-b72f-4b61-880c-ac427b201974" in namespace "downward-api-6938" to be "Succeeded or Failed"
Jun 23 20:40:24.380: INFO: Pod "downward-api-aea14401-b72f-4b61-880c-ac427b201974": Phase="Pending", Reason="", readiness=false. Elapsed: 107.576457ms
Jun 23 20:40:26.486: INFO: Pod "downward-api-aea14401-b72f-4b61-880c-ac427b201974": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214086643s
Jun 23 20:40:28.594: INFO: Pod "downward-api-aea14401-b72f-4b61-880c-ac427b201974": Phase="Pending", Reason="", readiness=false. Elapsed: 4.321852509s
Jun 23 20:40:30.700: INFO: Pod "downward-api-aea14401-b72f-4b61-880c-ac427b201974": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.428199873s
STEP: Saw pod success
Jun 23 20:40:30.700: INFO: Pod "downward-api-aea14401-b72f-4b61-880c-ac427b201974" satisfied condition "Succeeded or Failed"
Jun 23 20:40:30.806: INFO: Trying to get logs from node ip-172-20-0-42.eu-west-1.compute.internal pod downward-api-aea14401-b72f-4b61-880c-ac427b201974 container dapi-container: <nil>
STEP: delete the pod
Jun 23 20:40:31.026: INFO: Waiting for pod downward-api-aea14401-b72f-4b61-880c-ac427b201974 to disappear
Jun 23 20:40:31.132: INFO: Pod downward-api-aea14401-b72f-4b61-880c-ac427b201974 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.722 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":124,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
... skipping 37 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
  [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should create read/write inline ephemeral volume
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":7,"skipped":76,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
... skipping 77 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
  [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should not mount / map unused volumes in a pod [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:352
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":7,"skipped":52,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:40:36.143: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 97 lines ...
Jun 23 20:40:12.814: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
STEP: creating a test aws volume
Jun 23 20:40:13.640: INFO: Successfully created a new PD: "aws://eu-west-1a/vol-05e54449c26c9e075".
Jun 23 20:40:13.640: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-4ljg
STEP: Creating a pod to test exec-volume-test
Jun 23 20:40:13.755: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-4ljg" in namespace "volume-8506" to be "Succeeded or Failed"
Jun 23 20:40:13.862: INFO: Pod "exec-volume-test-inlinevolume-4ljg": Phase="Pending", Reason="", readiness=false. Elapsed: 107.098414ms
Jun 23 20:40:15.968: INFO: Pod "exec-volume-test-inlinevolume-4ljg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213542911s
Jun 23 20:40:18.074: INFO: Pod "exec-volume-test-inlinevolume-4ljg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.31898949s
Jun 23 20:40:20.180: INFO: Pod "exec-volume-test-inlinevolume-4ljg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.42489525s
Jun 23 20:40:22.291: INFO: Pod "exec-volume-test-inlinevolume-4ljg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.53566485s
Jun 23 20:40:24.397: INFO: Pod "exec-volume-test-inlinevolume-4ljg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.642203847s
STEP: Saw pod success
Jun 23 20:40:24.397: INFO: Pod "exec-volume-test-inlinevolume-4ljg" satisfied condition "Succeeded or Failed"
Jun 23 20:40:24.503: INFO: Trying to get logs from node ip-172-20-0-42.eu-west-1.compute.internal pod exec-volume-test-inlinevolume-4ljg container exec-container-inlinevolume-4ljg: <nil>
STEP: delete the pod
Jun 23 20:40:24.770: INFO: Waiting for pod exec-volume-test-inlinevolume-4ljg to disappear
Jun 23 20:40:24.875: INFO: Pod exec-volume-test-inlinevolume-4ljg no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-4ljg
Jun 23 20:40:24.875: INFO: Deleting pod "exec-volume-test-inlinevolume-4ljg" in namespace "volume-8506"
Jun 23 20:40:25.168: INFO: Couldn't delete PD "aws://eu-west-1a/vol-05e54449c26c9e075", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-05e54449c26c9e075 is currently attached to i-00e710bba200d40fc
	status code: 400, request id: e3f72653-9dd0-4f41-8477-850c126880f4
Jun 23 20:40:30.770: INFO: Couldn't delete PD "aws://eu-west-1a/vol-05e54449c26c9e075", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-05e54449c26c9e075 is currently attached to i-00e710bba200d40fc
	status code: 400, request id: 299155a9-61a4-49fe-a651-e496d696b372
Jun 23 20:40:36.352: INFO: Successfully deleted PD "aws://eu-west-1a/vol-05e54449c26c9e075".
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 23 20:40:36.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-8506" for this suite.
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
  [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should allow exec of files on the volume
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":10,"skipped":80,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:40:36.579: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 29 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-e6391185-f797-4f35-98db-8cc6addc963d
STEP: Creating a pod to test consume secrets
Jun 23 20:40:28.855: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bd0720bb-44f8-4eb3-9d5c-9baf7498c78a" in namespace "projected-413" to be "Succeeded or Failed"
Jun 23 20:40:28.960: INFO: Pod "pod-projected-secrets-bd0720bb-44f8-4eb3-9d5c-9baf7498c78a": Phase="Pending", Reason="", readiness=false. Elapsed: 105.244164ms
Jun 23 20:40:31.065: INFO: Pod "pod-projected-secrets-bd0720bb-44f8-4eb3-9d5c-9baf7498c78a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210620464s
Jun 23 20:40:33.174: INFO: Pod "pod-projected-secrets-bd0720bb-44f8-4eb3-9d5c-9baf7498c78a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.318879385s
Jun 23 20:40:35.279: INFO: Pod "pod-projected-secrets-bd0720bb-44f8-4eb3-9d5c-9baf7498c78a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.424009998s
Jun 23 20:40:37.384: INFO: Pod "pod-projected-secrets-bd0720bb-44f8-4eb3-9d5c-9baf7498c78a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.529477517s
STEP: Saw pod success
Jun 23 20:40:37.384: INFO: Pod "pod-projected-secrets-bd0720bb-44f8-4eb3-9d5c-9baf7498c78a" satisfied condition "Succeeded or Failed"
Jun 23 20:40:37.489: INFO: Trying to get logs from node ip-172-20-0-98.eu-west-1.compute.internal pod pod-projected-secrets-bd0720bb-44f8-4eb3-9d5c-9baf7498c78a container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun 23 20:40:37.716: INFO: Waiting for pod pod-projected-secrets-bd0720bb-44f8-4eb3-9d5c-9baf7498c78a to disappear
Jun 23 20:40:37.820: INFO: Pod pod-projected-secrets-bd0720bb-44f8-4eb3-9d5c-9baf7498c78a no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.923 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":29,"failed":0}
S
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 39 lines ...
• [SLOW TEST:12.703 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":4,"skipped":25,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 23 20:40:31.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups
Jun 23 20:40:32.487: INFO: Waiting up to 5m0s for pod "security-context-de5377d7-0b76-44bb-b7a3-57f43b572e51" in namespace "security-context-9647" to be "Succeeded or Failed"
Jun 23 20:40:32.593: INFO: Pod "security-context-de5377d7-0b76-44bb-b7a3-57f43b572e51": Phase="Pending", Reason="", readiness=false. Elapsed: 105.876317ms
Jun 23 20:40:34.700: INFO: Pod "security-context-de5377d7-0b76-44bb-b7a3-57f43b572e51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212990182s
Jun 23 20:40:36.811: INFO: Pod "security-context-de5377d7-0b76-44bb-b7a3-57f43b572e51": Phase="Pending", Reason="", readiness=false. Elapsed: 4.323041965s
Jun 23 20:40:38.917: INFO: Pod "security-context-de5377d7-0b76-44bb-b7a3-57f43b572e51": Phase="Pending", Reason="", readiness=false. Elapsed: 6.429888935s
Jun 23 20:40:41.025: INFO: Pod "security-context-de5377d7-0b76-44bb-b7a3-57f43b572e51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.537100259s
STEP: Saw pod success
Jun 23 20:40:41.025: INFO: Pod "security-context-de5377d7-0b76-44bb-b7a3-57f43b572e51" satisfied condition "Succeeded or Failed"
Jun 23 20:40:41.130: INFO: Trying to get logs from node ip-172-20-0-42.eu-west-1.compute.internal pod security-context-de5377d7-0b76-44bb-b7a3-57f43b572e51 container test-container: <nil>
STEP: delete the pod
Jun 23 20:40:41.352: INFO: Waiting for pod security-context-de5377d7-0b76-44bb-b7a3-57f43b572e51 to disappear
Jun 23 20:40:41.458: INFO: Pod security-context-de5377d7-0b76-44bb-b7a3-57f43b572e51 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.849 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":8,"skipped":77,"failed":0} SSSSS ------------------------------ [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 23 20:40:41.679: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 69 lines ... Jun 23 20:40:36.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in container's args Jun 23 20:40:36.798: INFO: Waiting up to 5m0s for pod "var-expansion-5f09198d-678b-437c-8ccd-87256c9c1b60" in namespace "var-expansion-4344" to be "Succeeded or Failed" Jun 23 20:40:36.902: INFO: Pod "var-expansion-5f09198d-678b-437c-8ccd-87256c9c1b60": Phase="Pending", Reason="", readiness=false. Elapsed: 103.790961ms Jun 23 20:40:39.012: INFO: Pod "var-expansion-5f09198d-678b-437c-8ccd-87256c9c1b60": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.213997596s Jun 23 20:40:41.117: INFO: Pod "var-expansion-5f09198d-678b-437c-8ccd-87256c9c1b60": Phase="Pending", Reason="", readiness=false. Elapsed: 4.318330605s Jun 23 20:40:43.221: INFO: Pod "var-expansion-5f09198d-678b-437c-8ccd-87256c9c1b60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.422769128s STEP: Saw pod success Jun 23 20:40:43.221: INFO: Pod "var-expansion-5f09198d-678b-437c-8ccd-87256c9c1b60" satisfied condition "Succeeded or Failed" Jun 23 20:40:43.330: INFO: Trying to get logs from node ip-172-20-0-42.eu-west-1.compute.internal pod var-expansion-5f09198d-678b-437c-8ccd-87256c9c1b60 container dapi-container: <nil> STEP: delete the pod Jun 23 20:40:43.572: INFO: Waiting for pod var-expansion-5f09198d-678b-437c-8ccd-87256c9c1b60 to disappear Jun 23 20:40:43.676: INFO: Pod var-expansion-5f09198d-678b-437c-8ccd-87256c9c1b60 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ...
• [SLOW TEST:7.731 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":67,"failed":0} SS ------------------------------ [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 23 20:40:43.900: INFO: Only supported for providers [gce gke] (not aws) ... skipping 69 lines ... Jun 23 20:40:31.532: INFO: PersistentVolumeClaim pvc-4zfk2 found but phase is Pending instead of Bound. Jun 23 20:40:33.640: INFO: PersistentVolumeClaim pvc-4zfk2 found and phase=Bound (14.868860089s) Jun 23 20:40:33.640: INFO: Waiting up to 3m0s for PersistentVolume local-jnt7k to have phase Bound Jun 23 20:40:33.747: INFO: PersistentVolume local-jnt7k found and phase=Bound (107.080926ms) STEP: Creating pod exec-volume-test-preprovisionedpv-s6xj STEP: Creating a pod to test exec-volume-test Jun 23 20:40:34.071: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-s6xj" in namespace "volume-5029" to be "Succeeded or Failed" Jun 23 20:40:34.178: INFO: Pod "exec-volume-test-preprovisionedpv-s6xj": Phase="Pending", Reason="", readiness=false. Elapsed: 107.13999ms Jun 23 20:40:36.295: INFO: Pod "exec-volume-test-preprovisionedpv-s6xj": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.224380097s Jun 23 20:40:38.420: INFO: Pod "exec-volume-test-preprovisionedpv-s6xj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.349196849s Jun 23 20:40:40.528: INFO: Pod "exec-volume-test-preprovisionedpv-s6xj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.457664042s Jun 23 20:40:42.641: INFO: Pod "exec-volume-test-preprovisionedpv-s6xj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.569850757s STEP: Saw pod success Jun 23 20:40:42.641: INFO: Pod "exec-volume-test-preprovisionedpv-s6xj" satisfied condition "Succeeded or Failed" Jun 23 20:40:42.748: INFO: Trying to get logs from node ip-172-20-0-98.eu-west-1.compute.internal pod exec-volume-test-preprovisionedpv-s6xj container exec-container-preprovisionedpv-s6xj: <nil> STEP: delete the pod Jun 23 20:40:42.968: INFO: Waiting for pod exec-volume-test-preprovisionedpv-s6xj to disappear Jun 23 20:40:43.076: INFO: Pod exec-volume-test-preprovisionedpv-s6xj no longer exists STEP: Deleting pod exec-volume-test-preprovisionedpv-s6xj Jun 23 20:40:43.076: INFO: Deleting pod "exec-volume-test-preprovisionedpv-s6xj" in namespace "volume-5029" ... skipping 19 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58 [Testpattern: Pre-provisioned PV (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50 should allow exec of files on the volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196 ------------------------------ {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":5,"skipped":34,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 23 20:40:44.469: INFO: Only supported for providers [gce gke] (not aws) [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 146 lines ...
[sig-storage] In-tree Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: gcepd] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58 [Testpattern: Dynamic PV (immediate binding)] topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50 should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192 Only supported for providers [gce gke] (not aws) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302 ------------------------------ ... skipping 190 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40 [Testpattern: Dynamic PV (block volmode)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50 should provision storage with pvc data source /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:239 ------------------------------ {"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source","total":-1,"completed":5,"skipped":38,"failed":0} S ------------------------------ [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 23 20:40:45.694: INFO: Driver local doesn't support DynamicPV -- skipping ...
skipping 84 lines ... STEP: Destroying namespace "services-3910" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 • ------------------------------ {"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":6,"skipped":42,"failed":0} S ------------------------------ [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 23 20:40:45.836: INFO: Driver emptydir doesn't support DynamicPV -- skipping ... skipping 54 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 should list, patch and delete a collection of StatefulSets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]","total":-1,"completed":6,"skipped":56,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client ... skipping 106 lines ...
Jun 23 20:40:38.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Jun 23 20:40:38.708: INFO: Waiting up to 5m0s for pod "security-context-1cc00d08-a235-4589-87a4-fcba6c9c2ce8" in namespace "security-context-5362" to be "Succeeded or Failed" Jun 23 20:40:38.813: INFO: Pod "security-context-1cc00d08-a235-4589-87a4-fcba6c9c2ce8": Phase="Pending", Reason="", readiness=false. Elapsed: 104.665493ms Jun 23 20:40:40.919: INFO: Pod "security-context-1cc00d08-a235-4589-87a4-fcba6c9c2ce8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210709992s Jun 23 20:40:43.024: INFO: Pod "security-context-1cc00d08-a235-4589-87a4-fcba6c9c2ce8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.316362982s Jun 23 20:40:45.130: INFO: Pod "security-context-1cc00d08-a235-4589-87a4-fcba6c9c2ce8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.422591417s Jun 23 20:40:47.237: INFO: Pod "security-context-1cc00d08-a235-4589-87a4-fcba6c9c2ce8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.529028076s Jun 23 20:40:49.342: INFO: Pod "security-context-1cc00d08-a235-4589-87a4-fcba6c9c2ce8": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 10.634041363s STEP: Saw pod success Jun 23 20:40:49.342: INFO: Pod "security-context-1cc00d08-a235-4589-87a4-fcba6c9c2ce8" satisfied condition "Succeeded or Failed" Jun 23 20:40:49.447: INFO: Trying to get logs from node ip-172-20-0-87.eu-west-1.compute.internal pod security-context-1cc00d08-a235-4589-87a4-fcba6c9c2ce8 container test-container: <nil> STEP: delete the pod Jun 23 20:40:49.683: INFO: Waiting for pod security-context-1cc00d08-a235-4589-87a4-fcba6c9c2ce8 to disappear Jun 23 20:40:49.788: INFO: Pod security-context-1cc00d08-a235-4589-87a4-fcba6c9c2ce8 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... • [SLOW TEST:11.965 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":7,"skipped":30,"failed":0} SSSSSS ------------------------------ [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 23 20:40:50.012: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping ... skipping 78 lines ...
Jun 23 20:40:41.819: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.June, 23, 20, 40, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.June, 23, 20, 40, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.June, 23, 20, 40, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.June, 23, 20, 40, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 23 20:40:43.924: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.June, 23, 20, 40, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.June, 23, 20, 40, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.June, 23, 20, 40, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.June, 23, 20, 40, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 23 20:40:45.925: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", 
Status:"False", LastUpdateTime:time.Date(2022, time.June, 23, 20, 40, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.June, 23, 20, 40, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.June, 23, 20, 40, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.June, 23, 20, 40, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 23 20:40:49.043: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 23 20:40:49.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6221" for this suite. ... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.050 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":5,"skipped":31,"failed":0} SS ------------------------------ [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes ... skipping 118 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58 [Testpattern: Pre-provisioned PV (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50 should store data /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159 ------------------------------ {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":7,"skipped":30,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 23 20:40:50.913: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) [AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 21 lines ... STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-bae9cb68-4980-4672-baef-ee8dfe08bb98 STEP: Creating a pod to test consume secrets Jun 23 20:40:44.648: INFO: Waiting up to 5m0s for pod "pod-secrets-e3da7bf5-2c44-45bb-8295-1a31789f4d9c" in namespace "secrets-2149" to be "Succeeded or Failed" Jun 23 20:40:44.752: INFO: Pod "pod-secrets-e3da7bf5-2c44-45bb-8295-1a31789f4d9c": Phase="Pending", Reason="", readiness=false.
Elapsed: 104.384681ms Jun 23 20:40:46.857: INFO: Pod "pod-secrets-e3da7bf5-2c44-45bb-8295-1a31789f4d9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209675048s Jun 23 20:40:48.963: INFO: Pod "pod-secrets-e3da7bf5-2c44-45bb-8295-1a31789f4d9c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.315367821s Jun 23 20:40:51.067: INFO: Pod "pod-secrets-e3da7bf5-2c44-45bb-8295-1a31789f4d9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.419845426s STEP: Saw pod success Jun 23 20:40:51.068: INFO: Pod "pod-secrets-e3da7bf5-2c44-45bb-8295-1a31789f4d9c" satisfied condition "Succeeded or Failed" Jun 23 20:40:51.172: INFO: Trying to get logs from node ip-172-20-0-98.eu-west-1.compute.internal pod pod-secrets-e3da7bf5-2c44-45bb-8295-1a31789f4d9c container secret-volume-test: <nil> STEP: delete the pod Jun 23 20:40:51.391: INFO: Waiting for pod pod-secrets-e3da7bf5-2c44-45bb-8295-1a31789f4d9c to disappear Jun 23 20:40:51.495: INFO: Pod pod-secrets-e3da7bf5-2c44-45bb-8295-1a31789f4d9c no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ...
• [SLOW TEST:7.793 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":77,"failed":0} SS ------------------------------ [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 23 20:40:51.711: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 14 lines ... Driver local doesn't support DynamicPV -- skipping /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":9,"skipped":101,"failed":0} [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 23 20:40:49.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace ... skipping 22 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 23 20:40:52.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-7864" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":10,"skipped":101,"failed":0} S ------------------------------ [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 23 20:40:52.686: INFO: Driver emptydir doesn't support DynamicPV -- skipping ... skipping 48 lines ... Jun 23 20:40:14.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename provisioning STEP: Waiting for a default service account to be provisioned in namespace [It] should support existing single file [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219 Jun 23 20:40:15.057: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics Jun 23 20:40:15.272: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7976" in namespace "provisioning-7976" to be "Succeeded or Failed" Jun 23 20:40:15.379: INFO: Pod "hostpath-symlink-prep-provisioning-7976": Phase="Pending", Reason="", readiness=false. Elapsed: 106.696866ms Jun 23 20:40:17.486: INFO: Pod "hostpath-symlink-prep-provisioning-7976": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213383852s Jun 23 20:40:19.607: INFO: Pod "hostpath-symlink-prep-provisioning-7976": Phase="Pending", Reason="", readiness=false.
Elapsed: 4.334714706s Jun 23 20:40:21.716: INFO: Pod "hostpath-symlink-prep-provisioning-7976": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.444306979s STEP: Saw pod success Jun 23 20:40:21.717: INFO: Pod "hostpath-symlink-prep-provisioning-7976" satisfied condition "Succeeded or Failed" Jun 23 20:40:21.717: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7976" in namespace "provisioning-7976" Jun 23 20:40:21.830: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7976" to be fully deleted Jun 23 20:40:21.936: INFO: Creating resource for inline volume STEP: Creating pod pod-subpath-test-inlinevolume-r78d STEP: Creating a pod to test subpath Jun 23 20:40:22.044: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-r78d" in namespace "provisioning-7976" to be "Succeeded or Failed" Jun 23 20:40:22.151: INFO: Pod "pod-subpath-test-inlinevolume-r78d": Phase="Pending", Reason="", readiness=false. Elapsed: 107.355154ms Jun 23 20:40:24.260: INFO: Pod "pod-subpath-test-inlinevolume-r78d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216387966s Jun 23 20:40:26.368: INFO: Pod "pod-subpath-test-inlinevolume-r78d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.324024258s Jun 23 20:40:28.475: INFO: Pod "pod-subpath-test-inlinevolume-r78d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.431097997s Jun 23 20:40:30.581: INFO: Pod "pod-subpath-test-inlinevolume-r78d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.537630526s Jun 23 20:40:32.691: INFO: Pod "pod-subpath-test-inlinevolume-r78d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.646966511s Jun 23 20:40:34.798: INFO: Pod "pod-subpath-test-inlinevolume-r78d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.754357004s Jun 23 20:40:36.906: INFO: Pod "pod-subpath-test-inlinevolume-r78d": Phase="Pending", Reason="", readiness=false.
Elapsed: 14.861982727s Jun 23 20:40:39.021: INFO: Pod "pod-subpath-test-inlinevolume-r78d": Phase="Pending", Reason="", readiness=false. Elapsed: 16.976864719s Jun 23 20:40:41.127: INFO: Pod "pod-subpath-test-inlinevolume-r78d": Phase="Pending", Reason="", readiness=false. Elapsed: 19.083555053s Jun 23 20:40:43.234: INFO: Pod "pod-subpath-test-inlinevolume-r78d": Phase="Pending", Reason="", readiness=false. Elapsed: 21.190246794s Jun 23 20:40:45.343: INFO: Pod "pod-subpath-test-inlinevolume-r78d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.299440233s STEP: Saw pod success Jun 23 20:40:45.343: INFO: Pod "pod-subpath-test-inlinevolume-r78d" satisfied condition "Succeeded or Failed" Jun 23 20:40:45.450: INFO: Trying to get logs from node ip-172-20-0-87.eu-west-1.compute.internal pod pod-subpath-test-inlinevolume-r78d container test-container-subpath-inlinevolume-r78d: <nil> STEP: delete the pod Jun 23 20:40:45.685: INFO: Waiting for pod pod-subpath-test-inlinevolume-r78d to disappear Jun 23 20:40:45.791: INFO: Pod pod-subpath-test-inlinevolume-r78d no longer exists STEP: Deleting pod pod-subpath-test-inlinevolume-r78d Jun 23 20:40:45.791: INFO: Deleting pod "pod-subpath-test-inlinevolume-r78d" in namespace "provisioning-7976" STEP: Deleting pod Jun 23 20:40:45.897: INFO: Deleting pod "pod-subpath-test-inlinevolume-r78d" in namespace "provisioning-7976" Jun 23 20:40:46.113: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7976" in namespace "provisioning-7976" to be "Succeeded or Failed" Jun 23 20:40:46.221: INFO: Pod "hostpath-symlink-prep-provisioning-7976": Phase="Pending", Reason="", readiness=false. Elapsed: 107.561468ms Jun 23 20:40:48.328: INFO: Pod "hostpath-symlink-prep-provisioning-7976": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21476126s Jun 23 20:40:50.435: INFO: Pod "hostpath-symlink-prep-provisioning-7976": Phase="Pending", Reason="", readiness=false.
Elapsed: 4.321802865s Jun 23 20:40:52.542: INFO: Pod "hostpath-symlink-prep-provisioning-7976": Phase="Pending", Reason="", readiness=false. Elapsed: 6.429435134s Jun 23 20:40:54.649: INFO: Pod "hostpath-symlink-prep-provisioning-7976": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.536462668s STEP: Saw pod success Jun 23 20:40:54.650: INFO: Pod "hostpath-symlink-prep-provisioning-7976" satisfied condition "Succeeded or Failed" Jun 23 20:40:54.650: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7976" in namespace "provisioning-7976" Jun 23 20:40:54.763: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7976" to be fully deleted [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 23 20:40:54.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-7976" for this suite. ... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58 [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50 should support existing single file [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219 ------------------------------ {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":4,"skipped":33,"failed":0} [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 23 20:40:55.091: INFO: Only supported for providers [gce gke] (not aws) [AfterEach] [Testpattern: Inline-volume (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 17 lines ...
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 23 20:40:36.581: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 23 20:40:55.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-6398" for this suite.
• [SLOW TEST:18.972 seconds]
[sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":11,"skipped":90,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:40:55.557: INFO: Only supported for providers [openstack] (not aws)
... skipping 14 lines ...
Only supported for providers [openstack] (not aws)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should support orphan deletion of custom resources","total":-1,"completed":8,"skipped":55,"failed":0}
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 23 20:40:37.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 19 lines ...
• [SLOW TEST:19.759 seconds]
[sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for the cluster [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":-1,"completed":9,"skipped":55,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:40:57.259: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 21 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-c5e54106-d07a-4bd8-b45d-c1ae7cc71b01
STEP: Creating a pod to test consume secrets
Jun 23 20:40:48.800: INFO: Waiting up to 5m0s for pod "pod-secrets-a3d70e26-9ab0-4680-9e09-e16e0fec4d3c" in namespace "secrets-7850" to be "Succeeded or Failed"
Jun 23 20:40:48.911: INFO: Pod "pod-secrets-a3d70e26-9ab0-4680-9e09-e16e0fec4d3c": Phase="Pending", Reason="", readiness=false. Elapsed: 110.928722ms
Jun 23 20:40:51.021: INFO: Pod "pod-secrets-a3d70e26-9ab0-4680-9e09-e16e0fec4d3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221063784s
Jun 23 20:40:53.130: INFO: Pod "pod-secrets-a3d70e26-9ab0-4680-9e09-e16e0fec4d3c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329375819s
Jun 23 20:40:55.241: INFO: Pod "pod-secrets-a3d70e26-9ab0-4680-9e09-e16e0fec4d3c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.440733547s
Jun 23 20:40:57.356: INFO: Pod "pod-secrets-a3d70e26-9ab0-4680-9e09-e16e0fec4d3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.555869792s
STEP: Saw pod success
Jun 23 20:40:57.356: INFO: Pod "pod-secrets-a3d70e26-9ab0-4680-9e09-e16e0fec4d3c" satisfied condition "Succeeded or Failed"
Jun 23 20:40:57.464: INFO: Trying to get logs from node ip-172-20-0-98.eu-west-1.compute.internal pod pod-secrets-a3d70e26-9ab0-4680-9e09-e16e0fec4d3c container secret-volume-test: <nil>
STEP: delete the pod
Jun 23 20:40:57.687: INFO: Waiting for pod pod-secrets-a3d70e26-9ab0-4680-9e09-e16e0fec4d3c to disappear
Jun 23 20:40:57.794: INFO: Pod pod-secrets-a3d70e26-9ab0-4680-9e09-e16e0fec4d3c no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 5 lines ...
• [SLOW TEST:10.515 seconds]
[sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":60,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:40:58.133: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 58 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:214
Driver local doesn't support DynamicPV -- skipping
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":4,"skipped":30,"failed":0}
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 23 20:39:48.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 380 lines ...
• [SLOW TEST:70.282 seconds]
[sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should drop INVALID conntrack entries [Privileged]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:361
------------------------------
{"msg":"PASSED [sig-network] Conntrack should drop INVALID conntrack entries [Privileged]","total":-1,"completed":5,"skipped":30,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:40:58.646: INFO: Only supported for providers [vsphere] (not aws)
... skipping 35 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 23 20:40:58.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6933" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota","total":-1,"completed":10,"skipped":58,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:40:58.773: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 20 lines ...
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 23 20:40:58.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Jun 23 20:40:58.805: INFO: found topology map[topology.kubernetes.io/zone:eu-west-1a]
Jun 23 20:40:58.805: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Jun 23 20:40:58.805: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
STEP: Deleting sc
... skipping 7 lines ...
[sig-storage] In-tree Volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: aws]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
  [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Not enough topologies in cluster -- skipping
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:199
------------------------------
... skipping 28 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  With a server listening on localhost
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474
    should support forwarding over websockets
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:490
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost should support forwarding over websockets","total":-1,"completed":7,"skipped":47,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:40:59.524: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 24 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-a64204f6-04be-45ac-8eed-be54b26f8c3c
STEP: Creating a pod to test consume configMaps
Jun 23 20:40:51.669: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e2e5606b-72aa-400c-b908-10c348434857" in namespace "projected-3752" to be "Succeeded or Failed"
Jun 23 20:40:51.775: INFO: Pod "pod-projected-configmaps-e2e5606b-72aa-400c-b908-10c348434857": Phase="Pending", Reason="", readiness=false. Elapsed: 106.064069ms
Jun 23 20:40:53.883: INFO: Pod "pod-projected-configmaps-e2e5606b-72aa-400c-b908-10c348434857": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213538291s
Jun 23 20:40:55.990: INFO: Pod "pod-projected-configmaps-e2e5606b-72aa-400c-b908-10c348434857": Phase="Pending", Reason="", readiness=false. Elapsed: 4.320912154s
Jun 23 20:40:58.097: INFO: Pod "pod-projected-configmaps-e2e5606b-72aa-400c-b908-10c348434857": Phase="Running", Reason="", readiness=true. Elapsed: 6.427955321s
Jun 23 20:41:00.225: INFO: Pod "pod-projected-configmaps-e2e5606b-72aa-400c-b908-10c348434857": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.556119888s
STEP: Saw pod success
Jun 23 20:41:00.225: INFO: Pod "pod-projected-configmaps-e2e5606b-72aa-400c-b908-10c348434857" satisfied condition "Succeeded or Failed"
Jun 23 20:41:00.335: INFO: Trying to get logs from node ip-172-20-0-42.eu-west-1.compute.internal pod pod-projected-configmaps-e2e5606b-72aa-400c-b908-10c348434857 container agnhost-container: <nil>
STEP: delete the pod
Jun 23 20:41:00.584: INFO: Waiting for pod pod-projected-configmaps-e2e5606b-72aa-400c-b908-10c348434857 to disappear
Jun 23 20:41:00.750: INFO: Pod pod-projected-configmaps-e2e5606b-72aa-400c-b908-10c348434857 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.083 seconds]
[sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":33,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:41:01.004: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 25 lines ...
Jun 23 20:40:31.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support readOnly directory specified in the volumeMount
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
Jun 23 20:40:31.888: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 23 20:40:32.124: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-3716" in namespace "provisioning-3716" to be "Succeeded or Failed"
Jun 23 20:40:32.234: INFO: Pod "hostpath-symlink-prep-provisioning-3716": Phase="Pending", Reason="", readiness=false. Elapsed: 109.465399ms
Jun 23 20:40:34.342: INFO: Pod "hostpath-symlink-prep-provisioning-3716": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217350456s
Jun 23 20:40:36.450: INFO: Pod "hostpath-symlink-prep-provisioning-3716": Phase="Pending", Reason="", readiness=false. Elapsed: 4.325362762s
Jun 23 20:40:38.565: INFO: Pod "hostpath-symlink-prep-provisioning-3716": Phase="Pending", Reason="", readiness=false. Elapsed: 6.440354473s
Jun 23 20:40:40.672: INFO: Pod "hostpath-symlink-prep-provisioning-3716": Phase="Running", Reason="", readiness=true. Elapsed: 8.547820365s
Jun 23 20:40:42.779: INFO: Pod "hostpath-symlink-prep-provisioning-3716": Phase="Running", Reason="", readiness=true. Elapsed: 10.654284728s
Jun 23 20:40:44.885: INFO: Pod "hostpath-symlink-prep-provisioning-3716": Phase="Running", Reason="", readiness=true. Elapsed: 12.760552477s
Jun 23 20:40:46.993: INFO: Pod "hostpath-symlink-prep-provisioning-3716": Phase="Running", Reason="", readiness=true. Elapsed: 14.868341298s
Jun 23 20:40:49.101: INFO: Pod "hostpath-symlink-prep-provisioning-3716": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.976976663s
STEP: Saw pod success
Jun 23 20:40:49.101: INFO: Pod "hostpath-symlink-prep-provisioning-3716" satisfied condition "Succeeded or Failed"
Jun 23 20:40:49.101: INFO: Deleting pod "hostpath-symlink-prep-provisioning-3716" in namespace "provisioning-3716"
Jun 23 20:40:49.216: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-3716" to be fully deleted
Jun 23 20:40:49.321: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-lsgl
STEP: Creating a pod to test subpath
Jun 23 20:40:49.429: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-lsgl" in namespace "provisioning-3716" to be "Succeeded or Failed"
Jun 23 20:40:49.539: INFO: Pod "pod-subpath-test-inlinevolume-lsgl": Phase="Pending", Reason="", readiness=false. Elapsed: 109.956864ms
Jun 23 20:40:51.646: INFO: Pod "pod-subpath-test-inlinevolume-lsgl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217322583s
Jun 23 20:40:53.754: INFO: Pod "pod-subpath-test-inlinevolume-lsgl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.325056197s
Jun 23 20:40:55.860: INFO: Pod "pod-subpath-test-inlinevolume-lsgl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.431905424s
Jun 23 20:40:57.968: INFO: Pod "pod-subpath-test-inlinevolume-lsgl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.539001787s
STEP: Saw pod success
Jun 23 20:40:57.968: INFO: Pod "pod-subpath-test-inlinevolume-lsgl" satisfied condition "Succeeded or Failed"
Jun 23 20:40:58.074: INFO: Trying to get logs from node ip-172-20-0-87.eu-west-1.compute.internal pod pod-subpath-test-inlinevolume-lsgl container test-container-subpath-inlinevolume-lsgl: <nil>
STEP: delete the pod
Jun 23 20:40:58.307: INFO: Waiting for pod pod-subpath-test-inlinevolume-lsgl to disappear
Jun 23 20:40:58.416: INFO: Pod pod-subpath-test-inlinevolume-lsgl no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-lsgl
Jun 23 20:40:58.416: INFO: Deleting pod "pod-subpath-test-inlinevolume-lsgl" in namespace "provisioning-3716"
STEP: Deleting pod
Jun 23 20:40:58.523: INFO: Deleting pod "pod-subpath-test-inlinevolume-lsgl" in namespace "provisioning-3716"
Jun 23 20:40:58.743: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-3716" in namespace "provisioning-3716" to be "Succeeded or Failed"
Jun 23 20:40:58.855: INFO: Pod "hostpath-symlink-prep-provisioning-3716": Phase="Pending", Reason="", readiness=false. Elapsed: 111.652014ms
Jun 23 20:41:00.980: INFO: Pod "hostpath-symlink-prep-provisioning-3716": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.236420576s
STEP: Saw pod success
Jun 23 20:41:00.980: INFO: Pod "hostpath-symlink-prep-provisioning-3716" satisfied condition "Succeeded or Failed"
Jun 23 20:41:00.980: INFO: Deleting pod "hostpath-symlink-prep-provisioning-3716" in namespace "provisioning-3716"
Jun 23 20:41:01.114: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-3716" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 23 20:41:01.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-3716" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
  [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support readOnly directory specified in the volumeMount
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":10,"skipped":125,"failed":0}
SS
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should replace jobs when ReplaceConcurrent [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
S
------------------------------
{"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":-1,"completed":4,"skipped":41,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 56 lines ...
Jun 23 20:40:30.846: INFO: PersistentVolumeClaim pvc-njs8p found but phase is Pending instead of Bound.
Jun 23 20:40:32.952: INFO: PersistentVolumeClaim pvc-njs8p found and phase=Bound (12.758626535s)
Jun 23 20:40:32.952: INFO: Waiting up to 3m0s for PersistentVolume local-9rr8k to have phase Bound
Jun 23 20:40:33.058: INFO: PersistentVolume local-9rr8k found and phase=Bound (106.109015ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-scfl
STEP: Creating a pod to test atomic-volume-subpath
Jun 23 20:40:33.388: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-scfl" in namespace "provisioning-5831" to be "Succeeded or Failed"
Jun 23 20:40:33.499: INFO: Pod "pod-subpath-test-preprovisionedpv-scfl": Phase="Pending", Reason="", readiness=false. Elapsed: 110.095885ms
Jun 23 20:40:35.605: INFO: Pod "pod-subpath-test-preprovisionedpv-scfl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216720405s
Jun 23 20:40:37.712: INFO: Pod "pod-subpath-test-preprovisionedpv-scfl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.323492263s
Jun 23 20:40:39.820: INFO: Pod "pod-subpath-test-preprovisionedpv-scfl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.431301923s
Jun 23 20:40:41.928: INFO: Pod "pod-subpath-test-preprovisionedpv-scfl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.539486321s
Jun 23 20:40:44.036: INFO: Pod "pod-subpath-test-preprovisionedpv-scfl": Phase="Pending", Reason="", readiness=false. Elapsed: 10.646954193s
... skipping 3 lines ...
Jun 23 20:40:52.467: INFO: Pod "pod-subpath-test-preprovisionedpv-scfl": Phase="Running", Reason="", readiness=true. Elapsed: 19.078411448s
Jun 23 20:40:54.574: INFO: Pod "pod-subpath-test-preprovisionedpv-scfl": Phase="Running", Reason="", readiness=true. Elapsed: 21.185144524s
Jun 23 20:40:56.681: INFO: Pod "pod-subpath-test-preprovisionedpv-scfl": Phase="Running", Reason="", readiness=true. Elapsed: 23.292891175s
Jun 23 20:40:58.788: INFO: Pod "pod-subpath-test-preprovisionedpv-scfl": Phase="Running", Reason="", readiness=true. Elapsed: 25.399499553s
Jun 23 20:41:00.918: INFO: Pod "pod-subpath-test-preprovisionedpv-scfl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.529425472s
STEP: Saw pod success
Jun 23 20:41:00.918: INFO: Pod "pod-subpath-test-preprovisionedpv-scfl" satisfied condition "Succeeded or Failed"
Jun 23 20:41:01.030: INFO: Trying to get logs from node ip-172-20-0-87.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-scfl container test-container-subpath-preprovisionedpv-scfl: <nil>
STEP: delete the pod
Jun 23 20:41:01.267: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-scfl to disappear
Jun 23 20:41:01.374: INFO: Pod pod-subpath-test-preprovisionedpv-scfl no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-scfl
Jun 23 20:41:01.374: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-scfl" in namespace "provisioning-5831"
... skipping 34 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
  [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support file as subpath [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":12,"skipped":104,"failed":0}
S
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 13 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 23 20:41:09.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4255" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":13,"skipped":105,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:41:09.291: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 22 lines ...
[1mSTEP[0m: Building a namespace api object, basename projected [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating projection with secret that has name projected-secret-test-25acc4dc-750e-456d-9c6d-3df7331a173c [1mSTEP[0m: Creating a pod to test consume secrets Jun 23 20:41:02.251: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a9ad7450-7cc6-4804-b075-c00c3f13c6dd" in namespace "projected-209" to be "Succeeded or Failed" Jun 23 20:41:02.359: INFO: Pod "pod-projected-secrets-a9ad7450-7cc6-4804-b075-c00c3f13c6dd": Phase="Pending", Reason="", readiness=false. Elapsed: 107.844208ms Jun 23 20:41:04.468: INFO: Pod "pod-projected-secrets-a9ad7450-7cc6-4804-b075-c00c3f13c6dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216550532s Jun 23 20:41:06.576: INFO: Pod "pod-projected-secrets-a9ad7450-7cc6-4804-b075-c00c3f13c6dd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.324992561s Jun 23 20:41:08.683: INFO: Pod "pod-projected-secrets-a9ad7450-7cc6-4804-b075-c00c3f13c6dd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.43112388s Jun 23 20:41:10.789: INFO: Pod "pod-projected-secrets-a9ad7450-7cc6-4804-b075-c00c3f13c6dd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.537687177s Jun 23 20:41:12.895: INFO: Pod "pod-projected-secrets-a9ad7450-7cc6-4804-b075-c00c3f13c6dd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.643736354s Jun 23 20:41:15.001: INFO: Pod "pod-projected-secrets-a9ad7450-7cc6-4804-b075-c00c3f13c6dd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.749807704s [1mSTEP[0m: Saw pod success Jun 23 20:41:15.001: INFO: Pod "pod-projected-secrets-a9ad7450-7cc6-4804-b075-c00c3f13c6dd" satisfied condition "Succeeded or Failed" Jun 23 20:41:15.107: INFO: Trying to get logs from node ip-172-20-0-98.eu-west-1.compute.internal pod pod-projected-secrets-a9ad7450-7cc6-4804-b075-c00c3f13c6dd container projected-secret-volume-test: <nil> [1mSTEP[0m: delete the pod Jun 23 20:41:15.340: INFO: Waiting for pod pod-projected-secrets-a9ad7450-7cc6-4804-b075-c00c3f13c6dd to disappear Jun 23 20:41:15.446: INFO: Pod pod-projected-secrets-a9ad7450-7cc6-4804-b075-c00c3f13c6dd no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:14.170 seconds][0m [sig-storage] Projected secret [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":49,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 23 20:41:01.789: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename security-context [1mSTEP[0m: Waiting for a default service account to be provisioned in 
namespace [It] should support seccomp runtime/default [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176 [1mSTEP[0m: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Jun 23 20:41:02.434: INFO: Waiting up to 5m0s for pod "security-context-91f4f765-ac5c-4c7d-8488-904fc213b852" in namespace "security-context-2629" to be "Succeeded or Failed" Jun 23 20:41:02.543: INFO: Pod "security-context-91f4f765-ac5c-4c7d-8488-904fc213b852": Phase="Pending", Reason="", readiness=false. Elapsed: 108.542557ms Jun 23 20:41:04.650: INFO: Pod "security-context-91f4f765-ac5c-4c7d-8488-904fc213b852": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215507467s Jun 23 20:41:06.757: INFO: Pod "security-context-91f4f765-ac5c-4c7d-8488-904fc213b852": Phase="Pending", Reason="", readiness=false. Elapsed: 4.322482098s Jun 23 20:41:08.864: INFO: Pod "security-context-91f4f765-ac5c-4c7d-8488-904fc213b852": Phase="Pending", Reason="", readiness=false. Elapsed: 6.429712963s Jun 23 20:41:10.972: INFO: Pod "security-context-91f4f765-ac5c-4c7d-8488-904fc213b852": Phase="Pending", Reason="", readiness=false. Elapsed: 8.537749832s Jun 23 20:41:13.079: INFO: Pod "security-context-91f4f765-ac5c-4c7d-8488-904fc213b852": Phase="Pending", Reason="", readiness=false. Elapsed: 10.644988991s Jun 23 20:41:15.187: INFO: Pod "security-context-91f4f765-ac5c-4c7d-8488-904fc213b852": Phase="Pending", Reason="", readiness=false. Elapsed: 12.752394346s Jun 23 20:41:17.294: INFO: Pod "security-context-91f4f765-ac5c-4c7d-8488-904fc213b852": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.859973753s STEP: Saw pod success Jun 23 20:41:17.294: INFO: Pod "security-context-91f4f765-ac5c-4c7d-8488-904fc213b852" satisfied condition "Succeeded or Failed" Jun 23 20:41:17.401: INFO: Trying to get logs from node ip-172-20-0-98.eu-west-1.compute.internal pod security-context-91f4f765-ac5c-4c7d-8488-904fc213b852 container test-container: <nil> STEP: delete the pod Jun 23 20:41:17.635: INFO: Waiting for pod security-context-91f4f765-ac5c-4c7d-8488-904fc213b852 to disappear Jun 23 20:41:17.742: INFO: Pod security-context-91f4f765-ac5c-4c7d-8488-904fc213b852 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... • [SLOW TEST:16.169 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support seccomp runtime/default [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]","total":-1,"completed":9,"skipped":44,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 23 20:41:17.960: INFO: Only supported for providers [gce gke] (not aws) [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 51 lines ...
• [SLOW TEST:18.809 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":8,"skipped":54,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client ... skipping 104 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIStorageCapacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1336 CSIStorageCapacity unused /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1379 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity unused","total":-1,"completed":6,"skipped":37,"failed":0} [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 23 20:41:19.687: INFO: Only supported for providers [azure] (not aws) [AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 82 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":5,"skipped":38,"failed":0} SS ------------------------------ [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 23 20:41:20.983: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 59 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474 that expects a client request /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:475 should support a client that connects, sends NO DATA, and disconnects /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:476 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":11,"skipped":60,"failed":0} SS ------------------------------ [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 23 20:41:21.085: INFO: Only supported for providers [gce gke] (not aws) ... skipping 46 lines ... Jun 23 20:41:01.312: INFO: PersistentVolumeClaim pvc-5z98n found but phase is Pending instead of Bound. Jun 23 20:41:03.427: INFO: PersistentVolumeClaim pvc-5z98n found and phase=Bound (14.872248492s) Jun 23 20:41:03.427: INFO: Waiting up to 3m0s for PersistentVolume local-ppn5t to have phase Bound Jun 23 20:41:03.532: INFO: PersistentVolume local-ppn5t found and phase=Bound (105.094164ms) STEP: Creating pod pod-subpath-test-preprovisionedpv-6fjl STEP: Creating a pod to test subpath Jun 23 20:41:03.858: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-6fjl" in namespace "provisioning-6819" to be "Succeeded or Failed" Jun 23 20:41:03.974: INFO: Pod "pod-subpath-test-preprovisionedpv-6fjl": Phase="Pending", Reason="", readiness=false.
Elapsed: 115.465734ms Jun 23 20:41:06.081: INFO: Pod "pod-subpath-test-preprovisionedpv-6fjl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222538896s Jun 23 20:41:08.187: INFO: Pod "pod-subpath-test-preprovisionedpv-6fjl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328202456s Jun 23 20:41:10.292: INFO: Pod "pod-subpath-test-preprovisionedpv-6fjl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.433449926s Jun 23 20:41:12.400: INFO: Pod "pod-subpath-test-preprovisionedpv-6fjl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.541147306s Jun 23 20:41:14.505: INFO: Pod "pod-subpath-test-preprovisionedpv-6fjl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.646332877s STEP: Saw pod success Jun 23 20:41:14.505: INFO: Pod "pod-subpath-test-preprovisionedpv-6fjl" satisfied condition "Succeeded or Failed" Jun 23 20:41:14.610: INFO: Trying to get logs from node ip-172-20-0-238.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-6fjl container test-container-subpath-preprovisionedpv-6fjl: <nil> STEP: delete the pod Jun 23 20:41:14.831: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-6fjl to disappear Jun 23 20:41:14.935: INFO: Pod pod-subpath-test-preprovisionedpv-6fjl no longer exists STEP: Deleting pod pod-subpath-test-preprovisionedpv-6fjl Jun 23 20:41:14.936: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-6fjl" in namespace "provisioning-6819" STEP: Creating pod pod-subpath-test-preprovisionedpv-6fjl STEP: Creating a pod to test subpath Jun 23 20:41:15.147: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-6fjl" in namespace "provisioning-6819" to be "Succeeded or Failed" Jun 23 20:41:15.254: INFO: Pod "pod-subpath-test-preprovisionedpv-6fjl": Phase="Pending", Reason="", readiness=false. Elapsed: 107.689704ms Jun 23 20:41:17.360: INFO: Pod "pod-subpath-test-preprovisionedpv-6fjl": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.213144045s Jun 23 20:41:19.464: INFO: Pod "pod-subpath-test-preprovisionedpv-6fjl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.31780107s STEP: Saw pod success Jun 23 20:41:19.464: INFO: Pod "pod-subpath-test-preprovisionedpv-6fjl" satisfied condition "Succeeded or Failed" Jun 23 20:41:19.569: INFO: Trying to get logs from node ip-172-20-0-238.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-6fjl container test-container-subpath-preprovisionedpv-6fjl: <nil> STEP: delete the pod Jun 23 20:41:19.790: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-6fjl to disappear Jun 23 20:41:19.894: INFO: Pod pod-subpath-test-preprovisionedpv-6fjl no longer exists STEP: Deleting pod pod-subpath-test-preprovisionedpv-6fjl Jun 23 20:41:19.895: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-6fjl" in namespace "provisioning-6819" ... skipping 21 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58 [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50 should support existing directories when readOnly specified in the volumeSource /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395 ------------------------------ {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":4,"skipped":33,"failed":0} SSSSSS ------------------------------ [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 23 20:41:21.365: INFO: Only supported for providers [vsphere] (not aws) ... skipping 92 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Simple pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379 should support exec using resource/name /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:431 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec using resource/name","total":-1,"completed":8,"skipped":80,"failed":0} SSS ------------------------------ [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 23 20:41:21.951: INFO: Only supported for providers [vsphere] (not aws) ... skipping 46 lines ... Jun 23 20:41:02.593: INFO: PersistentVolumeClaim pvc-n7cbp found but phase is Pending instead of Bound. Jun 23 20:41:04.703: INFO: PersistentVolumeClaim pvc-n7cbp found and phase=Bound (4.327924974s) Jun 23 20:41:04.703: INFO: Waiting up to 3m0s for PersistentVolume local-4d9z2 to have phase Bound Jun 23 20:41:04.809: INFO: PersistentVolume local-4d9z2 found and phase=Bound (105.433182ms) STEP: Creating pod pod-subpath-test-preprovisionedpv-tpv9 STEP: Creating a pod to test subpath Jun 23 20:41:05.128: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-tpv9" in namespace "provisioning-1484" to be "Succeeded or Failed" Jun 23 20:41:05.234: INFO: Pod "pod-subpath-test-preprovisionedpv-tpv9": Phase="Pending", Reason="", readiness=false.
Elapsed: 105.817991ms Jun 23 20:41:07.341: INFO: Pod "pod-subpath-test-preprovisionedpv-tpv9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212929053s Jun 23 20:41:09.448: INFO: Pod "pod-subpath-test-preprovisionedpv-tpv9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.319904241s Jun 23 20:41:11.556: INFO: Pod "pod-subpath-test-preprovisionedpv-tpv9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.427936002s Jun 23 20:41:13.662: INFO: Pod "pod-subpath-test-preprovisionedpv-tpv9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.534390947s STEP: Saw pod success Jun 23 20:41:13.662: INFO: Pod "pod-subpath-test-preprovisionedpv-tpv9" satisfied condition "Succeeded or Failed" Jun 23 20:41:13.768: INFO: Trying to get logs from node ip-172-20-0-238.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-tpv9 container test-container-subpath-preprovisionedpv-tpv9: <nil> STEP: delete the pod Jun 23 20:41:14.000: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-tpv9 to disappear Jun 23 20:41:14.105: INFO: Pod pod-subpath-test-preprovisionedpv-tpv9 no longer exists STEP: Deleting pod pod-subpath-test-preprovisionedpv-tpv9 Jun 23 20:41:14.105: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-tpv9" in namespace "provisioning-1484" STEP: Creating pod pod-subpath-test-preprovisionedpv-tpv9 STEP: Creating a pod to test subpath Jun 23 20:41:14.319: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-tpv9" in namespace "provisioning-1484" to be "Succeeded or Failed" Jun 23 20:41:14.424: INFO: Pod "pod-subpath-test-preprovisionedpv-tpv9": Phase="Pending", Reason="", readiness=false. Elapsed: 105.604641ms Jun 23 20:41:16.530: INFO: Pod "pod-subpath-test-preprovisionedpv-tpv9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211627063s Jun 23 20:41:18.637: INFO: Pod "pod-subpath-test-preprovisionedpv-tpv9": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.318056899s STEP: Saw pod success Jun 23 20:41:18.637: INFO: Pod "pod-subpath-test-preprovisionedpv-tpv9" satisfied condition "Succeeded or Failed" Jun 23 20:41:18.744: INFO: Trying to get logs from node ip-172-20-0-238.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-tpv9 container test-container-subpath-preprovisionedpv-tpv9: <nil> STEP: delete the pod Jun 23 20:41:18.962: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-tpv9 to disappear Jun 23 20:41:19.068: INFO: Pod pod-subpath-test-preprovisionedpv-tpv9 no longer exists STEP: Deleting pod pod-subpath-test-preprovisionedpv-tpv9 Jun 23 20:41:19.068: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-tpv9" in namespace "provisioning-1484" ... skipping 30 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58 [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50 should support existing directories when readOnly specified in the volumeSource /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395 ------------------------------ {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":12,"skipped":94,"failed":0} [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 23 20:41:22.137: INFO: Only supported for providers [openstack] (not aws) [AfterEach] [Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 157 lines ... [It] should support readOnly file specified in the volumeMount [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380 Jun 23 20:41:16.199: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics Jun 23 20:41:16.199: INFO: Creating resource for inline volume STEP: Creating pod pod-subpath-test-inlinevolume-n9vd STEP: Creating a pod to test subpath Jun 23 20:41:16.324: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-n9vd" in namespace "provisioning-6845" to be "Succeeded or Failed" Jun 23 20:41:16.430: INFO: Pod "pod-subpath-test-inlinevolume-n9vd": Phase="Pending", Reason="", readiness=false. Elapsed: 106.006055ms Jun 23 20:41:18.536: INFO: Pod "pod-subpath-test-inlinevolume-n9vd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212448851s Jun 23 20:41:20.643: INFO: Pod "pod-subpath-test-inlinevolume-n9vd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.319188047s Jun 23 20:41:22.752: INFO: Pod "pod-subpath-test-inlinevolume-n9vd": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 6.427814576s STEP: Saw pod success Jun 23 20:41:22.752: INFO: Pod "pod-subpath-test-inlinevolume-n9vd" satisfied condition "Succeeded or Failed" Jun 23 20:41:22.858: INFO: Trying to get logs from node ip-172-20-0-87.eu-west-1.compute.internal pod pod-subpath-test-inlinevolume-n9vd container test-container-subpath-inlinevolume-n9vd: <nil> STEP: delete the pod Jun 23 20:41:23.106: INFO: Waiting for pod pod-subpath-test-inlinevolume-n9vd to disappear Jun 23 20:41:23.212: INFO: Pod pod-subpath-test-inlinevolume-n9vd no longer exists STEP: Deleting pod pod-subpath-test-inlinevolume-n9vd Jun 23 20:41:23.212: INFO: Deleting pod "pod-subpath-test-inlinevolume-n9vd" in namespace "provisioning-6845" ... skipping 12 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58 [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50 should support readOnly file specified in the volumeMount [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380 ------------------------------ {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":6,"skipped":53,"failed":0} [BeforeEach] [Testpattern: Inline-volume (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 23 20:41:23.650: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 118 lines ...
Only supported for node OS distro [windows] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/windows/framework.go:30 ------------------------------ SSSSS ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":8,"skipped":55,"failed":0} [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 23 20:40:47.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace ... skipping 7 lines ... STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-937 STEP: Waiting until pod test-pod will start running in namespace statefulset-937 STEP: Creating statefulset with conflicting port in namespace statefulset-937 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-937 Jun 23 20:40:56.967: INFO: Observed stateful pod in namespace: statefulset-937, name: ss-0, uid: ed99c932-b1a4-4fa7-a3de-f4d98e622a2b, status phase: Pending. Waiting for statefulset controller to delete. Jun 23 20:40:58.707: INFO: Observed stateful pod in namespace: statefulset-937, name: ss-0, uid: ed99c932-b1a4-4fa7-a3de-f4d98e622a2b, status phase: Failed. Waiting for statefulset controller to delete. Jun 23 20:40:58.711: INFO: Observed stateful pod in namespace: statefulset-937, name: ss-0, uid: ed99c932-b1a4-4fa7-a3de-f4d98e622a2b, status phase: Failed. Waiting for statefulset controller to delete.
Jun 23 20:40:58.719: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-937 STEP: Removing pod with conflicting port in namespace statefulset-937 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-937 and will be in running state [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 Jun 23 20:41:15.819: INFO: Deleting all statefulset in ns statefulset-937 ... skipping 11 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":9,"skipped":55,"failed":0} [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 23 20:41:27.015: INFO: Only supported for providers [openstack] (not aws) [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 9 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230 Only supported for providers [openstack] (not aws) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":4,"skipped":45,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 23 20:39:40.015: INFO: >>> kubeConfig: /root/.kube/config ... skipping 191 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40 [Testpattern: Dynamic PV (block volmode)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50 should store data /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159 ------------------------------ {"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumes should store data","total":-1,"completed":5,"skipped":45,"failed":0} [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 23 20:41:28.152: INFO: Only supported for providers [openstack] (not aws) [AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 45 lines ... STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 23 20:41:24.324: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d3fa37ef-c1c8-412c-864e-b4c09b128578" in namespace "downward-api-1414" to be "Succeeded or Failed" Jun 23 20:41:24.431: INFO: Pod "downwardapi-volume-d3fa37ef-c1c8-412c-864e-b4c09b128578": Phase="Pending", Reason="", readiness=false.
Elapsed: 106.223363ms Jun 23 20:41:26.540: INFO: Pod "downwardapi-volume-d3fa37ef-c1c8-412c-864e-b4c09b128578": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215409318s Jun 23 20:41:28.647: INFO: Pod "downwardapi-volume-d3fa37ef-c1c8-412c-864e-b4c09b128578": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.322328113s STEP: Saw pod success Jun 23 20:41:28.647: INFO: Pod "downwardapi-volume-d3fa37ef-c1c8-412c-864e-b4c09b128578" satisfied condition "Succeeded or Failed" Jun 23 20:41:28.755: INFO: Trying to get logs from node ip-172-20-0-98.eu-west-1.compute.internal pod downwardapi-volume-d3fa37ef-c1c8-412c-864e-b4c09b128578 container client-container: <nil> STEP: delete the pod Jun 23 20:41:28.977: INFO: Waiting for pod downwardapi-volume-d3fa37ef-c1c8-412c-864e-b4c09b128578 to disappear Jun 23 20:41:29.082: INFO: Pod downwardapi-volume-d3fa37ef-c1c8-412c-864e-b4c09b128578 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ...
• [SLOW TEST:5.619 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":72,"failed":0} S ------------------------------ [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 23 20:41:29.304: INFO: Only supported for providers [openstack] (not aws) ... skipping 99 lines ... • [SLOW TEST:8.227 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 test Deployment ReplicaSet orphaning and adoption regarding controllerRef /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:136 ------------------------------ {"msg":"PASSED [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef","total":-1,"completed":5,"skipped":41,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 23 20:41:20.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default)
[LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on node default medium Jun 23 20:41:21.632: INFO: Waiting up to 5m0s for pod "pod-ffc1a8f0-4ba8-4163-b60c-33fe69bccd5a" in namespace "emptydir-8658" to be "Succeeded or Failed" Jun 23 20:41:21.741: INFO: Pod "pod-ffc1a8f0-4ba8-4163-b60c-33fe69bccd5a": Phase="Pending", Reason="", readiness=false. Elapsed: 108.479532ms Jun 23 20:41:23.848: INFO: Pod "pod-ffc1a8f0-4ba8-4163-b60c-33fe69bccd5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216211357s Jun 23 20:41:25.955: INFO: Pod "pod-ffc1a8f0-4ba8-4163-b60c-33fe69bccd5a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.322775848s Jun 23 20:41:28.063: INFO: Pod "pod-ffc1a8f0-4ba8-4163-b60c-33fe69bccd5a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.431186425s Jun 23 20:41:30.170: INFO: Pod "pod-ffc1a8f0-4ba8-4163-b60c-33fe69bccd5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.538316193s STEP: Saw pod success Jun 23 20:41:30.171: INFO: Pod "pod-ffc1a8f0-4ba8-4163-b60c-33fe69bccd5a" satisfied condition "Succeeded or Failed" Jun 23 20:41:30.279: INFO: Trying to get logs from node ip-172-20-0-87.eu-west-1.compute.internal pod pod-ffc1a8f0-4ba8-4163-b60c-33fe69bccd5a container test-container: <nil> STEP: delete the pod Jun 23 20:41:30.504: INFO: Waiting for pod pod-ffc1a8f0-4ba8-4163-b60c-33fe69bccd5a to disappear Jun 23 20:41:30.614: INFO: Pod pod-ffc1a8f0-4ba8-4163-b60c-33fe69bccd5a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ...
• [SLOW TEST:9.845 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":41,"failed":0} SS ------------------------------ [BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral ... skipping 112 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40 [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50 should support multiple inline ephemeral volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:252 ------------------------------ {"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":8,"skipped":48,"failed":0} SS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 23 20:41:29.313: INFO: >>> kubeConfig:
/root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override arguments
Jun 23 20:41:29.953: INFO: Waiting up to 5m0s for pod "client-containers-a8997d73-c432-4f3b-803c-69bbead9f1dd" in namespace "containers-7245" to be "Succeeded or Failed"
Jun 23 20:41:30.059: INFO: Pod "client-containers-a8997d73-c432-4f3b-803c-69bbead9f1dd": Phase="Pending", Reason="", readiness=false. Elapsed: 105.884129ms
Jun 23 20:41:32.167: INFO: Pod "client-containers-a8997d73-c432-4f3b-803c-69bbead9f1dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213878946s
Jun 23 20:41:34.274: INFO: Pod "client-containers-a8997d73-c432-4f3b-803c-69bbead9f1dd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.321654408s
Jun 23 20:41:36.387: INFO: Pod "client-containers-a8997d73-c432-4f3b-803c-69bbead9f1dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.433999402s
STEP: Saw pod success
Jun 23 20:41:36.387: INFO: Pod "client-containers-a8997d73-c432-4f3b-803c-69bbead9f1dd" satisfied condition "Succeeded or Failed"
Jun 23 20:41:36.493: INFO: Trying to get logs from node ip-172-20-0-98.eu-west-1.compute.internal pod client-containers-a8997d73-c432-4f3b-803c-69bbead9f1dd container agnhost-container: <nil>
STEP: delete the pod
Jun 23 20:41:36.746: INFO: Waiting for pod client-containers-a8997d73-c432-4f3b-803c-69bbead9f1dd to disappear
Jun 23 20:41:36.859: INFO: Pod client-containers-a8997d73-c432-4f3b-803c-69bbead9f1dd no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
...
skipping 4 lines ...
• [SLOW TEST:7.771 seconds]
[sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":78,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:41:37.087: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 173 lines ...
• [SLOW TEST:40.687 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":11,"skipped":128,"failed":0}
S
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 32 lines ...
• [SLOW TEST:11.375 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":7,"skipped":43,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 28 lines ...
Jun 23 20:41:32.106: INFO: PersistentVolumeClaim pvc-sdvpt found but phase is Pending instead of Bound.
Jun 23 20:41:34.212: INFO: PersistentVolumeClaim pvc-sdvpt found and phase=Bound (6.424318833s)
Jun 23 20:41:34.212: INFO: Waiting up to 3m0s for PersistentVolume local-h9xjm to have phase Bound
Jun 23 20:41:34.322: INFO: PersistentVolume local-h9xjm found and phase=Bound (109.807351ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-td9z
STEP: Creating a pod to test subpath
Jun 23 20:41:34.643: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-td9z" in namespace "provisioning-893" to be "Succeeded or Failed"
Jun 23 20:41:34.749: INFO: Pod "pod-subpath-test-preprovisionedpv-td9z": Phase="Pending", Reason="", readiness=false. Elapsed: 105.653579ms
Jun 23 20:41:36.860: INFO: Pod "pod-subpath-test-preprovisionedpv-td9z": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.216645688s
Jun 23 20:41:38.965: INFO: Pod "pod-subpath-test-preprovisionedpv-td9z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.322100265s
STEP: Saw pod success
Jun 23 20:41:38.965: INFO: Pod "pod-subpath-test-preprovisionedpv-td9z" satisfied condition "Succeeded or Failed"
Jun 23 20:41:39.073: INFO: Trying to get logs from node ip-172-20-0-238.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-td9z container test-container-subpath-preprovisionedpv-td9z: <nil>
STEP: delete the pod
Jun 23 20:41:39.302: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-td9z to disappear
Jun 23 20:41:39.407: INFO: Pod pod-subpath-test-preprovisionedpv-td9z no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-td9z
Jun 23 20:41:39.407: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-td9z" in namespace "provisioning-893"
... skipping 34 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
  [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support readOnly directory specified in the volumeMount
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":13,"skipped":107,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 427 lines ...
• [SLOW TEST:16.138 seconds]
[sig-network] Service endpoints latency
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should not be very high [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":-1,"completed":6,"skipped":42,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:41:45.744: INFO: Only supported for providers [gce gke] (not aws)
... skipping 95 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 23 20:41:46.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "certificates-2527" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":14,"skipped":114,"failed":0}
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 23 20:41:40.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 132 lines ...
• [SLOW TEST:25.099 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":10,"skipped":56,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:41:52.118: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 98 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
    should update a single-container pod's image [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":-1,"completed":9,"skipped":50,"failed":0}
SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:41:52.601: INFO: Driver hostPathSymlink doesn't support GenericEphemeralVolume -- skipping
... skipping 91 lines ...
Jun 23 20:41:05.509: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-xlb5v] to have phase Bound
Jun 23 20:41:05.615: INFO: PersistentVolumeClaim pvc-xlb5v found and phase=Bound (105.545382ms)
STEP: Deleting the previously created pod
Jun 23 20:41:26.150: INFO: Deleting pod "pvc-volume-tester-mthtw" in namespace "csi-mock-volumes-5441"
Jun 23 20:41:26.259: INFO: Wait up to 5m0s for pod "pvc-volume-tester-mthtw" to be fully deleted
STEP: Checking CSI driver logs
Jun 23 20:41:30.582: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/a0ca4b3d-ffda-4a89-830f-4add0b4b3627/volumes/kubernetes.io~csi/pvc-4ae93368-a9f3-40e2-bd99-2f628c2674df/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-mthtw
Jun 23 20:41:30.582: INFO: Deleting pod "pvc-volume-tester-mthtw" in namespace "csi-mock-volumes-5441"
STEP: Deleting claim pvc-xlb5v
Jun 23 20:41:30.901: INFO: Waiting up to 2m0s for PersistentVolume pvc-4ae93368-a9f3-40e2-bd99-2f628c2674df to get deleted
Jun 23 20:41:31.007: INFO: PersistentVolume pvc-4ae93368-a9f3-40e2-bd99-2f628c2674df found and phase=Released (105.284694ms)
Jun 23 20:41:33.117: INFO: PersistentVolume pvc-4ae93368-a9f3-40e2-bd99-2f628c2674df found and phase=Released (2.214978439s)
... skipping 47 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIServiceAccountToken
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1576
    token should not be plumbed down when CSIDriver is not deployed
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1604
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when CSIDriver is not deployed","total":-1,"completed":6,"skipped":33,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:41:55.367: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 96 lines ...
Jun 23 20:41:46.431: INFO: PersistentVolumeClaim pvc-vblsr found but phase is Pending instead of Bound.
Jun 23 20:41:48.538: INFO: PersistentVolumeClaim pvc-vblsr found and phase=Bound (14.879595165s)
Jun 23 20:41:48.538: INFO: Waiting up to 3m0s for PersistentVolume local-jmps2 to have phase Bound
Jun 23 20:41:48.645: INFO: PersistentVolume local-jmps2 found and phase=Bound (107.127749ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-7qh2
STEP: Creating a pod to test subpath
Jun 23 20:41:48.967: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-7qh2" in namespace "provisioning-9746" to be "Succeeded or Failed"
Jun 23 20:41:49.074: INFO: Pod "pod-subpath-test-preprovisionedpv-7qh2": Phase="Pending", Reason="", readiness=false.
Elapsed: 107.061252ms
Jun 23 20:41:51.184: INFO: Pod "pod-subpath-test-preprovisionedpv-7qh2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216421528s
Jun 23 20:41:53.313: INFO: Pod "pod-subpath-test-preprovisionedpv-7qh2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.345968856s
STEP: Saw pod success
Jun 23 20:41:53.313: INFO: Pod "pod-subpath-test-preprovisionedpv-7qh2" satisfied condition "Succeeded or Failed"
Jun 23 20:41:53.439: INFO: Trying to get logs from node ip-172-20-0-42.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-7qh2 container test-container-volume-preprovisionedpv-7qh2: <nil>
STEP: delete the pod
Jun 23 20:41:53.702: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-7qh2 to disappear
Jun 23 20:41:53.817: INFO: Pod pod-subpath-test-preprovisionedpv-7qh2 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-7qh2
Jun 23 20:41:53.817: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-7qh2" in namespace "provisioning-9746"
... skipping 34 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
  [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support existing directory
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]","total":-1,"completed":15,"skipped":114,"failed":0}
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 23 20:41:47.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
  when running a container with a new image
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
    should not be able to pull from private registry without secret [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":16,"skipped":114,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:41:58.053: INFO: Only supported for providers [gce gke] (not aws)
... skipping 215 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:180
Only supported for providers [gce gke] (not aws)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":8,"skipped":46,"failed":0}
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 23 20:41:52.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:110
STEP: Creating configMap with name projected-configmap-test-volume-map-f595d635-53ce-4198-bd25-e59408f69389
STEP: Creating a pod to test consume configMaps
Jun 23 20:41:53.313: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c003c774-8e6f-4d9d-92ff-af105fe262a1" in namespace "projected-8863" to be "Succeeded or Failed"
Jun 23 20:41:53.439: INFO: Pod "pod-projected-configmaps-c003c774-8e6f-4d9d-92ff-af105fe262a1": Phase="Pending", Reason="", readiness=false. Elapsed: 126.252067ms
Jun 23 20:41:55.551: INFO: Pod "pod-projected-configmaps-c003c774-8e6f-4d9d-92ff-af105fe262a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.238042598s
Jun 23 20:41:57.660: INFO: Pod "pod-projected-configmaps-c003c774-8e6f-4d9d-92ff-af105fe262a1": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.347417895s
STEP: Saw pod success
Jun 23 20:41:57.660: INFO: Pod "pod-projected-configmaps-c003c774-8e6f-4d9d-92ff-af105fe262a1" satisfied condition "Succeeded or Failed"
Jun 23 20:41:57.769: INFO: Trying to get logs from node ip-172-20-0-238.eu-west-1.compute.internal pod pod-projected-configmaps-c003c774-8e6f-4d9d-92ff-af105fe262a1 container agnhost-container: <nil>
STEP: delete the pod
Jun 23 20:41:58.000: INFO: Waiting for pod pod-projected-configmaps-c003c774-8e6f-4d9d-92ff-af105fe262a1 to disappear
Jun 23 20:41:58.107: INFO: Pod pod-projected-configmaps-c003c774-8e6f-4d9d-92ff-af105fe262a1 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.839 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:110
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":9,"skipped":46,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:41:58.362: INFO: Only supported for providers [azure] (not aws)
... skipping 30 lines ...
Jun 23 20:41:18.879: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi}
STEP: creating a StorageClass provisioning-75095qccl
STEP: creating a claim
Jun 23 20:41:18.989: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-nk5c
STEP: Creating a pod to test subpath
Jun 23 20:41:19.316: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-nk5c" in namespace "provisioning-7509" to be "Succeeded or Failed"
Jun 23 20:41:19.423: INFO: Pod "pod-subpath-test-dynamicpv-nk5c": Phase="Pending", Reason="", readiness=false. Elapsed: 107.326365ms
Jun 23 20:41:21.532: INFO: Pod "pod-subpath-test-dynamicpv-nk5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215970896s
Jun 23 20:41:23.641: INFO: Pod "pod-subpath-test-dynamicpv-nk5c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.324928276s
Jun 23 20:41:25.749: INFO: Pod "pod-subpath-test-dynamicpv-nk5c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.433701776s
Jun 23 20:41:27.858: INFO: Pod "pod-subpath-test-dynamicpv-nk5c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.542720347s
Jun 23 20:41:29.967: INFO: Pod "pod-subpath-test-dynamicpv-nk5c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.651181563s
Jun 23 20:41:32.076: INFO: Pod "pod-subpath-test-dynamicpv-nk5c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.759959571s
Jun 23 20:41:34.186: INFO: Pod "pod-subpath-test-dynamicpv-nk5c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.870154838s
Jun 23 20:41:36.293: INFO: Pod "pod-subpath-test-dynamicpv-nk5c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.977351339s
Jun 23 20:41:38.405: INFO: Pod "pod-subpath-test-dynamicpv-nk5c": Phase="Pending", Reason="", readiness=false.
Elapsed: 19.089465459s
Jun 23 20:41:40.514: INFO: Pod "pod-subpath-test-dynamicpv-nk5c": Phase="Pending", Reason="", readiness=false. Elapsed: 21.198146101s
Jun 23 20:41:42.625: INFO: Pod "pod-subpath-test-dynamicpv-nk5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.309082106s
STEP: Saw pod success
Jun 23 20:41:42.625: INFO: Pod "pod-subpath-test-dynamicpv-nk5c" satisfied condition "Succeeded or Failed"
Jun 23 20:41:42.734: INFO: Trying to get logs from node ip-172-20-0-87.eu-west-1.compute.internal pod pod-subpath-test-dynamicpv-nk5c container test-container-volume-dynamicpv-nk5c: <nil>
STEP: delete the pod
Jun 23 20:41:42.964: INFO: Waiting for pod pod-subpath-test-dynamicpv-nk5c to disappear
Jun 23 20:41:43.080: INFO: Pod pod-subpath-test-dynamicpv-nk5c no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-nk5c
Jun 23 20:41:43.080: INFO: Deleting pod "pod-subpath-test-dynamicpv-nk5c" in namespace "provisioning-7509"
... skipping 20 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
  [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support non-existent path
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":9,"skipped":55,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:41:59.399: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 45 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-002289d6-2567-4bd1-89f9-6ecf69304800
STEP: Creating a pod to test consume secrets
Jun 23 20:41:56.134: INFO: Waiting up to 5m0s for pod "pod-secrets-7be2cc29-b02a-4126-87a1-03c013f8e6d2" in namespace "secrets-8103" to be "Succeeded or Failed"
Jun 23 20:41:56.239: INFO: Pod "pod-secrets-7be2cc29-b02a-4126-87a1-03c013f8e6d2": Phase="Pending", Reason="", readiness=false. Elapsed: 105.510419ms
Jun 23 20:41:58.351: INFO: Pod "pod-secrets-7be2cc29-b02a-4126-87a1-03c013f8e6d2": Phase="Running", Reason="", readiness=true. Elapsed: 2.21715097s
Jun 23 20:42:00.458: INFO: Pod "pod-secrets-7be2cc29-b02a-4126-87a1-03c013f8e6d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.324649677s
STEP: Saw pod success
Jun 23 20:42:00.459: INFO: Pod "pod-secrets-7be2cc29-b02a-4126-87a1-03c013f8e6d2" satisfied condition "Succeeded or Failed"
Jun 23 20:42:00.564: INFO: Trying to get logs from node ip-172-20-0-238.eu-west-1.compute.internal pod pod-secrets-7be2cc29-b02a-4126-87a1-03c013f8e6d2 container secret-volume-test: <nil>
STEP: delete the pod
Jun 23 20:42:00.781: INFO: Waiting for pod pod-secrets-7be2cc29-b02a-4126-87a1-03c013f8e6d2 to disappear
Jun 23 20:42:00.886: INFO: Pod pod-secrets-7be2cc29-b02a-4126-87a1-03c013f8e6d2 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.721 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":40,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:42:01.102: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 80 lines ...
• [SLOW TEST:9.709 seconds]
[sig-scheduling] LimitRange
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied.
[Conformance]","total":-1,"completed":11,"skipped":62,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:42:01.838: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 33 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 23 20:42:02.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-274" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes","total":-1,"completed":8,"skipped":44,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:42:02.362: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 14 lines ...
Only supported for node OS distro [gci ubuntu custom] (not debian)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
SSSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":9,"skipped":86,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 23 20:41:57.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jun 23 20:41:58.211: INFO: Waiting up to 5m0s for pod "pod-5e6b3f1e-4f59-4235-81ad-ed06bff02c00" in namespace "emptydir-1864" to be "Succeeded or Failed"
Jun 23 20:41:58.327: INFO: Pod "pod-5e6b3f1e-4f59-4235-81ad-ed06bff02c00": Phase="Pending", Reason="", readiness=false. Elapsed: 116.180929ms
Jun 23 20:42:00.439: INFO: Pod "pod-5e6b3f1e-4f59-4235-81ad-ed06bff02c00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.227475771s
Jun 23 20:42:02.551: INFO: Pod "pod-5e6b3f1e-4f59-4235-81ad-ed06bff02c00": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.339459733s [1mSTEP[0m: Saw pod success Jun 23 20:42:02.551: INFO: Pod "pod-5e6b3f1e-4f59-4235-81ad-ed06bff02c00" satisfied condition "Succeeded or Failed" Jun 23 20:42:02.659: INFO: Trying to get logs from node ip-172-20-0-42.eu-west-1.compute.internal pod pod-5e6b3f1e-4f59-4235-81ad-ed06bff02c00 container test-container: <nil> [1mSTEP[0m: delete the pod Jun 23 20:42:02.884: INFO: Waiting for pod pod-5e6b3f1e-4f59-4235-81ad-ed06bff02c00 to disappear Jun 23 20:42:02.992: INFO: Pod pod-5e6b3f1e-4f59-4235-81ad-ed06bff02c00 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 14 lines ... [1mSTEP[0m: Building a namespace api object, basename security-context-test [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 23 20:41:59.020: INFO: Waiting up to 5m0s for pod "busybox-user-65534-f494b83e-d86b-4321-af1f-fbac79358460" in namespace "security-context-test-3544" to be "Succeeded or Failed" Jun 23 20:41:59.127: INFO: Pod "busybox-user-65534-f494b83e-d86b-4321-af1f-fbac79358460": Phase="Pending", Reason="", readiness=false. Elapsed: 106.951149ms Jun 23 20:42:01.237: INFO: Pod "busybox-user-65534-f494b83e-d86b-4321-af1f-fbac79358460": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216919712s Jun 23 20:42:03.347: INFO: Pod "busybox-user-65534-f494b83e-d86b-4321-af1f-fbac79358460": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.327306655s Jun 23 20:42:03.347: INFO: Pod "busybox-user-65534-f494b83e-d86b-4321-af1f-fbac79358460" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 23 20:42:03.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "security-context-test-3544" for this suite. ... skipping 2 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m When creating a container with runAsUser [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:50[0m should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":49,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 60 lines ... 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl logs
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1406
should be able to retrieve and filter logs [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":-1,"completed":10,"skipped":68,"failed":0}
SS
------------------------------
{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":14,"skipped":116,"failed":0}
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 23 20:41:46.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 94 lines ...
• [SLOW TEST:20.326 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run the lifecycle of a Deployment [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":15,"skipped":116,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 24 lines ...
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 23 20:41:54.365: INFO: File wheezy_udp@dns-test-service-3.dns-4608.svc.cluster.local from pod dns-4608/dns-test-6676033c-a3e0-4e77-b7f2-062396393286 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jun 23 20:41:54.475: INFO: File jessie_udp@dns-test-service-3.dns-4608.svc.cluster.local from pod dns-4608/dns-test-6676033c-a3e0-4e77-b7f2-062396393286 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jun 23 20:41:54.475: INFO: Lookups using dns-4608/dns-test-6676033c-a3e0-4e77-b7f2-062396393286 failed for: [wheezy_udp@dns-test-service-3.dns-4608.svc.cluster.local jessie_udp@dns-test-service-3.dns-4608.svc.cluster.local]
Jun 23 20:41:59.583: INFO: File wheezy_udp@dns-test-service-3.dns-4608.svc.cluster.local from pod dns-4608/dns-test-6676033c-a3e0-4e77-b7f2-062396393286 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jun 23 20:41:59.691: INFO: File jessie_udp@dns-test-service-3.dns-4608.svc.cluster.local from pod dns-4608/dns-test-6676033c-a3e0-4e77-b7f2-062396393286 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jun 23 20:41:59.691: INFO: Lookups using dns-4608/dns-test-6676033c-a3e0-4e77-b7f2-062396393286 failed for: [wheezy_udp@dns-test-service-3.dns-4608.svc.cluster.local jessie_udp@dns-test-service-3.dns-4608.svc.cluster.local]
Jun 23 20:42:04.690: INFO: DNS probes using dns-test-6676033c-a3e0-4e77-b7f2-062396393286 succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4608.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4608.svc.cluster.local; sleep 1; done
... skipping 17 lines ...
• [SLOW TEST:42.056 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should provide DNS for ExternalName services [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":6,"skipped":58,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:42:10.238: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 135 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Dynamic PV (default fs)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should store data
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":-1,"completed":6,"skipped":55,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:42:10.598: INFO: Only supported for providers [openstack] (not aws)
... skipping 24 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap configmap-6751/configmap-test-fa721542-2b25-4df5-9779-10dd74ae9e8b
STEP: Creating a pod to test consume configMaps
Jun 23 20:42:07.940: INFO: Waiting up to 5m0s for pod "pod-configmaps-f3616127-a698-4dc1-829f-06c1372da41a" in namespace "configmap-6751" to be "Succeeded or Failed"
Jun 23 20:42:08.046: INFO: Pod "pod-configmaps-f3616127-a698-4dc1-829f-06c1372da41a": Phase="Pending", Reason="", readiness=false. Elapsed: 105.613754ms
Jun 23 20:42:10.152: INFO: Pod "pod-configmaps-f3616127-a698-4dc1-829f-06c1372da41a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.211781024s
STEP: Saw pod success
Jun 23 20:42:10.152: INFO: Pod "pod-configmaps-f3616127-a698-4dc1-829f-06c1372da41a" satisfied condition "Succeeded or Failed"
Jun 23 20:42:10.257: INFO: Trying to get logs from node ip-172-20-0-42.eu-west-1.compute.internal pod pod-configmaps-f3616127-a698-4dc1-829f-06c1372da41a container env-test: <nil>
STEP: delete the pod
Jun 23 20:42:10.497: INFO: Waiting for pod pod-configmaps-f3616127-a698-4dc1-829f-06c1372da41a to disappear
Jun 23 20:42:10.602: INFO: Pod pod-configmaps-f3616127-a698-4dc1-829f-06c1372da41a no longer exists
[AfterEach] [sig-node] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 23 20:42:10.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6751" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":120,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:42:10.817: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 114 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Inline-volume (default fs)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should store data
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":7,"skipped":45,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:42:16.569: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 134 lines ...
Jun 23 20:41:45.891: INFO: PersistentVolumeClaim pvc-rpk87 found but phase is Pending instead of Bound.
Jun 23 20:41:47.998: INFO: PersistentVolumeClaim pvc-rpk87 found and phase=Bound (14.888365032s)
Jun 23 20:41:47.998: INFO: Waiting up to 3m0s for PersistentVolume local-7jjch to have phase Bound
Jun 23 20:41:48.104: INFO: PersistentVolume local-7jjch found and phase=Bound (106.129689ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-wkdj
STEP: Creating a pod to test atomic-volume-subpath
Jun 23 20:41:48.432: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-wkdj" in namespace "provisioning-5801" to be "Succeeded or Failed"
Jun 23 20:41:48.538: INFO: Pod "pod-subpath-test-preprovisionedpv-wkdj": Phase="Pending", Reason="", readiness=false. Elapsed: 106.339565ms
Jun 23 20:41:50.649: INFO: Pod "pod-subpath-test-preprovisionedpv-wkdj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21773231s
Jun 23 20:41:52.762: INFO: Pod "pod-subpath-test-preprovisionedpv-wkdj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.33053595s
Jun 23 20:41:54.871: INFO: Pod "pod-subpath-test-preprovisionedpv-wkdj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.43896978s
Jun 23 20:41:56.979: INFO: Pod "pod-subpath-test-preprovisionedpv-wkdj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.546935004s
Jun 23 20:41:59.085: INFO: Pod "pod-subpath-test-preprovisionedpv-wkdj": Phase="Running", Reason="", readiness=true. Elapsed: 10.653762777s
... skipping 3 lines ...
Jun 23 20:42:07.514: INFO: Pod "pod-subpath-test-preprovisionedpv-wkdj": Phase="Running", Reason="", readiness=true. Elapsed: 19.082724376s
Jun 23 20:42:09.622: INFO: Pod "pod-subpath-test-preprovisionedpv-wkdj": Phase="Running", Reason="", readiness=true. Elapsed: 21.18979006s
Jun 23 20:42:11.730: INFO: Pod "pod-subpath-test-preprovisionedpv-wkdj": Phase="Running", Reason="", readiness=true. Elapsed: 23.298399279s
Jun 23 20:42:13.837: INFO: Pod "pod-subpath-test-preprovisionedpv-wkdj": Phase="Running", Reason="", readiness=true. Elapsed: 25.405676176s
Jun 23 20:42:15.944: INFO: Pod "pod-subpath-test-preprovisionedpv-wkdj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.512606015s
STEP: Saw pod success
Jun 23 20:42:15.944: INFO: Pod "pod-subpath-test-preprovisionedpv-wkdj" satisfied condition "Succeeded or Failed"
Jun 23 20:42:16.051: INFO: Trying to get logs from node ip-172-20-0-87.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-wkdj container test-container-subpath-preprovisionedpv-wkdj: <nil>
STEP: delete the pod
Jun 23 20:42:16.287: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-wkdj to disappear
Jun 23 20:42:16.396: INFO: Pod pod-subpath-test-preprovisionedpv-wkdj no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-wkdj
Jun 23 20:42:16.396: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-wkdj" in namespace "provisioning-5801"
... skipping 26 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should support file as subpath [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":12,"skipped":70,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 23 20:42:18.598: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 73 lines ...
• [SLOW TEST:9.501 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate custom resource [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":7,"skipped":59,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
... skipping 51861 lines ...
new service port\" portName=\"svc-latency-7847/latency-svc-vs4f5\" servicePort=\"100.69.87.107:80/TCP\"\nI0623 20:41:37.317845 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-6nldp\" servicePort=\"100.66.83.83:80/TCP\"\nI0623 20:41:37.317914 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-6dl9n\" servicePort=\"100.68.68.170:80/TCP\"\nI0623 20:41:37.317978 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-kt757\" servicePort=\"100.70.55.115:80/TCP\"\nI0623 20:41:37.318054 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-dckgs\" servicePort=\"100.66.17.128:80/TCP\"\nI0623 20:41:37.318118 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-dlbmc\" servicePort=\"100.69.130.185:80/TCP\"\nI0623 20:41:37.318191 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-fwx9m\" 
servicePort=\"100.69.130.77:80/TCP\"\nI0623 20:41:37.318262 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-r79hp\" servicePort=\"100.68.29.96:80/TCP\"\nI0623 20:41:37.318333 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-d57tn\" servicePort=\"100.64.107.147:80/TCP\"\nI0623 20:41:37.318407 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-hs667\" servicePort=\"100.64.171.16:80/TCP\"\nI0623 20:41:37.318848 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:41:37.330288 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-gl82d\" portCount=1\nI0623 20:41:37.338450 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-5wgst\" portCount=1\nI0623 20:41:37.347439 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-2r47p\" portCount=1\nI0623 20:41:37.348583 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-q8bcw\" portCount=1\nI0623 20:41:37.363739 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-q2gmm\" portCount=1\nI0623 20:41:37.376877 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-pfdnh\" portCount=1\nI0623 20:41:37.382489 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-kb265\" portCount=1\nI0623 20:41:37.390089 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-bg5xf\" portCount=1\nI0623 20:41:37.390379 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-bhhr2\" portCount=1\nI0623 20:41:37.407173 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-9p68w\" portCount=1\nI0623 20:41:37.409395 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-nvpl7\" portCount=1\nI0623 20:41:37.415805 10 
proxier.go:794] \"SyncProxyRules complete\" elapsed=\"99.643504ms\"\nI0623 20:41:37.417631 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-rkv78\" portCount=1\nI0623 20:41:37.421390 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-c6jfc\" portCount=1\nI0623 20:41:37.423393 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-p2q78\" portCount=1\nI0623 20:41:37.437974 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-rmv65\" portCount=1\nI0623 20:41:37.460387 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-nx5js\" portCount=1\nI0623 20:41:37.472559 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-229xf\" portCount=1\nI0623 20:41:37.480573 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-t2s25\" portCount=1\nI0623 20:41:37.485662 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-5x85d\" portCount=1\nI0623 20:41:37.490239 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-nkr6p\" portCount=1\nI0623 20:41:37.515126 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-nztnh\" portCount=1\nI0623 20:41:37.527033 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-h8blk\" portCount=1\nI0623 20:41:37.539468 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-pbzq9\" portCount=1\nI0623 20:41:37.545851 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-4k6ch\" portCount=1\nI0623 20:41:37.553126 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-btt9z\" portCount=1\nI0623 20:41:37.593044 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-hhb55\" portCount=1\nI0623 20:41:37.634429 10 
service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-st8bv\" portCount=1\nI0623 20:41:37.686261 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-kv8gl\" portCount=1\nI0623 20:41:37.733403 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-9nmbz\" portCount=1\nI0623 20:41:37.784644 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-qk7cp\" portCount=1\nI0623 20:41:37.832045 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-w744x\" portCount=1\nI0623 20:41:37.898444 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-w5x7l\" portCount=1\nI0623 20:41:37.928395 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-h7qwf\" portCount=1\nI0623 20:41:37.983059 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-rzgsp\" portCount=1\nI0623 20:41:38.032993 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-c4nd4\" portCount=1\nI0623 20:41:38.082818 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-flp5b\" portCount=1\nI0623 20:41:38.130929 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-8xzjg\" portCount=1\nI0623 20:41:38.181946 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-br84t\" portCount=1\nI0623 20:41:38.228552 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-xsldq\" portCount=1\nI0623 20:41:38.283880 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-6n5dd\" portCount=1\nI0623 20:41:38.320722 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-rkv78\" servicePort=\"100.67.47.142:80/TCP\"\nI0623 20:41:38.320744 10 service.go:419] \"Adding new service port\" 
portName=\"svc-latency-7847/latency-svc-rmv65\" servicePort=\"100.70.83.218:80/TCP\"\nI0623 20:41:38.320756 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-h8blk\" servicePort=\"100.64.198.101:80/TCP\"\nI0623 20:41:38.320767 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-qk7cp\" servicePort=\"100.71.182.181:80/TCP\"\nI0623 20:41:38.320780 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-flp5b\" servicePort=\"100.66.18.253:80/TCP\"\nI0623 20:41:38.320790 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-2r47p\" servicePort=\"100.66.194.151:80/TCP\"\nI0623 20:41:38.320800 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-pfdnh\" servicePort=\"100.65.187.254:80/TCP\"\nI0623 20:41:38.320825 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-nvpl7\" servicePort=\"100.66.41.128:80/TCP\"\nI0623 20:41:38.320837 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-nztnh\" servicePort=\"100.70.130.238:80/TCP\"\nI0623 20:41:38.320852 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-pbzq9\" servicePort=\"100.65.39.85:80/TCP\"\nI0623 20:41:38.320863 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-4k6ch\" servicePort=\"100.70.181.26:80/TCP\"\nI0623 20:41:38.320874 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-c4nd4\" servicePort=\"100.66.151.121:80/TCP\"\nI0623 20:41:38.320886 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-5wgst\" servicePort=\"100.71.180.107:80/TCP\"\nI0623 20:41:38.320896 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-bhhr2\" servicePort=\"100.67.135.174:80/TCP\"\nI0623 20:41:38.320909 10 service.go:419] 
\"Adding new service port\" portName=\"svc-latency-7847/latency-svc-nkr6p\" servicePort=\"100.65.237.250:80/TCP\"\nI0623 20:41:38.320920 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-gl82d\" servicePort=\"100.65.51.184:80/TCP\"\nI0623 20:41:38.320931 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-t2s25\" servicePort=\"100.69.161.2:80/TCP\"\nI0623 20:41:38.320942 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-w744x\" servicePort=\"100.66.186.186:80/TCP\"\nI0623 20:41:38.320952 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-w5x7l\" servicePort=\"100.70.14.74:80/TCP\"\nI0623 20:41:38.320962 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-6n5dd\" servicePort=\"100.67.151.34:80/TCP\"\nI0623 20:41:38.320973 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-kb265\" servicePort=\"100.68.25.246:80/TCP\"\nI0623 20:41:38.320984 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-bg5xf\" servicePort=\"100.71.219.42:80/TCP\"\nI0623 20:41:38.320999 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-hhb55\" servicePort=\"100.64.125.21:80/TCP\"\nI0623 20:41:38.321034 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-kv8gl\" servicePort=\"100.67.91.51:80/TCP\"\nI0623 20:41:38.321045 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-h7qwf\" servicePort=\"100.68.158.28:80/TCP\"\nI0623 20:41:38.321057 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-rzgsp\" servicePort=\"100.69.69.92:80/TCP\"\nI0623 20:41:38.321069 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-229xf\" servicePort=\"100.71.33.239:80/TCP\"\nI0623 20:41:38.321082 10 
service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-q8bcw\" servicePort=\"100.69.229.251:80/TCP\"\nI0623 20:41:38.321093 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-q2gmm\" servicePort=\"100.69.59.83:80/TCP\"\nI0623 20:41:38.321106 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-c6jfc\" servicePort=\"100.68.175.149:80/TCP\"\nI0623 20:41:38.321117 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-btt9z\" servicePort=\"100.69.197.223:80/TCP\"\nI0623 20:41:38.321127 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-br84t\" servicePort=\"100.71.230.51:80/TCP\"\nI0623 20:41:38.321138 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-9p68w\" servicePort=\"100.66.47.111:80/TCP\"\nI0623 20:41:38.321151 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-nx5js\" servicePort=\"100.68.253.4:80/TCP\"\nI0623 20:41:38.321162 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-5x85d\" servicePort=\"100.66.132.252:80/TCP\"\nI0623 20:41:38.321175 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-8xzjg\" servicePort=\"100.70.150.140:80/TCP\"\nI0623 20:41:38.321318 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-xsldq\" servicePort=\"100.65.223.177:80/TCP\"\nI0623 20:41:38.321338 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-p2q78\" servicePort=\"100.68.248.111:80/TCP\"\nI0623 20:41:38.321349 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-st8bv\" servicePort=\"100.66.173.32:80/TCP\"\nI0623 20:41:38.321360 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-9nmbz\" 
servicePort="100.70.75.249:80/TCP"
I0623 20:41:38.321666 10 proxier.go:827] "Syncing iptables rules"
I0623 20:41:38.333339 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-vsvrd" portCount=1
I0623 20:41:38.368364 10 proxier.go:794] "SyncProxyRules complete" elapsed="47.65422ms"
I0623 20:41:38.381889 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-vkxxx" portCount=1
I0623 20:41:38.432953 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-rkllg" portCount=1
I0623 20:41:38.487033 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-b22s4" portCount=1
I0623 20:41:38.530782 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-4v8ld" portCount=1
I0623 20:41:38.587283 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-v7w9j" portCount=1
I0623 20:41:38.631613 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-58rk5" portCount=1
I0623 20:41:38.684676 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-qhdm8" portCount=1
I0623 20:41:38.730811 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-c4dsp" portCount=1
I0623 20:41:38.781195 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-5g2ql" portCount=1
I0623 20:41:38.842196 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-fbphr" portCount=1
I0623 20:41:38.886549 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-q8qg6" portCount=1
I0623 20:41:38.932154 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-8kd85" portCount=1
I0623 20:41:38.981409 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-w2xwx" portCount=1
I0623 20:41:39.044725 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-l45zw" portCount=1
I0623 20:41:39.082977 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-k8prz" portCount=1
I0623 20:41:39.135553 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-gbd75" portCount=1
I0623 20:41:39.154725 10 service.go:304] "Service updated ports" service="webhook-5124/e2e-test-webhook" portCount=1
I0623 20:41:39.180806 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-fs7v9" portCount=1
I0623 20:41:39.230561 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-rgkxh" portCount=1
I0623 20:41:39.284478 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-fg4bp" portCount=1
I0623 20:41:39.322437 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-b22s4" servicePort="100.64.4.34:80/TCP"
I0623 20:41:39.322460 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-v7w9j" servicePort="100.68.196.30:80/TCP"
I0623 20:41:39.322472 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-58rk5" servicePort="100.70.128.84:80/TCP"
I0623 20:41:39.322485 10 service.go:419] "Adding new service port" portName="webhook-5124/e2e-test-webhook" servicePort="100.68.71.95:8443/TCP"
I0623 20:41:39.322498 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-fg4bp" servicePort="100.69.117.194:80/TCP"
I0623 20:41:39.322510 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-vkxxx" servicePort="100.66.73.30:80/TCP"
I0623 20:41:39.322523 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-5g2ql" servicePort="100.64.201.119:80/TCP"
I0623 20:41:39.322533 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-fbphr" servicePort="100.65.114.33:80/TCP"
I0623 20:41:39.322543 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-q8qg6" servicePort="100.64.18.92:80/TCP"
I0623 20:41:39.322570 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-l45zw" servicePort="100.67.138.4:80/TCP"
I0623 20:41:39.323968 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-fs7v9" servicePort="100.70.218.176:80/TCP"
I0623 20:41:39.330934 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-rkllg" servicePort="100.65.189.196:80/TCP"
I0623 20:41:39.331571 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-4v8ld" servicePort="100.68.172.178:80/TCP"
I0623 20:41:39.331594 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-qhdm8" servicePort="100.70.241.233:80/TCP"
I0623 20:41:39.331607 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-c4dsp" servicePort="100.67.172.75:80/TCP"
I0623 20:41:39.331617 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-8kd85" servicePort="100.71.111.210:80/TCP"
I0623 20:41:39.331629 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-vsvrd" servicePort="100.64.212.63:80/TCP"
I0623 20:41:39.331638 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-w2xwx" servicePort="100.68.221.231:80/TCP"
I0623 20:41:39.331648 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-k8prz" servicePort="100.69.182.161:80/TCP"
I0623 20:41:39.331692 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-gbd75" servicePort="100.70.87.44:80/TCP"
I0623 20:41:39.331705 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-rgkxh" servicePort="100.64.101.52:80/TCP"
I0623 20:41:39.331972 10 proxier.go:827] "Syncing iptables rules"
I0623 20:41:39.333350 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-kdkdt" portCount=1
I0623 20:41:39.382861 10 proxier.go:794] "SyncProxyRules complete" elapsed="60.441737ms"
I0623 20:41:39.385367 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-btxfr" portCount=1
I0623 20:41:39.434874 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-gnw2k" portCount=1
I0623 20:41:39.485577 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-jnlfc" portCount=1
I0623 20:41:39.532998 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-bkz27" portCount=1
I0623 20:41:39.580585 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-vttbt" portCount=1
I0623 20:41:39.630793 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-smgwg" portCount=1
I0623 20:41:39.681445 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-ltm5g" portCount=1
I0623 20:41:39.730127 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-pl4kr" portCount=1
I0623 20:41:39.779748 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-wc2kr" portCount=1
I0623 20:41:39.831038 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-s7xjg" portCount=1
I0623 20:41:39.882391 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-s58f7" portCount=1
I0623 20:41:39.981616 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-lh245" portCount=1
I0623 20:41:40.030289 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-zc6v4" portCount=1
I0623 20:41:40.089292 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-68r9h" portCount=1
I0623 20:41:40.132834 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-vwx9t" portCount=1
I0623 20:41:40.179658 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-mtkcd" portCount=1
I0623 20:41:40.237736 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-5mn6v" portCount=1
I0623 20:41:40.279898 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-6sm6q" portCount=1
I0623 20:41:40.325675 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-kdkdt" servicePort="100.68.112.56:80/TCP"
I0623 20:41:40.326349 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-jnlfc" servicePort="100.64.63.25:80/TCP"
I0623 20:41:40.326566 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-wc2kr" servicePort="100.66.211.68:80/TCP"
I0623 20:41:40.326686 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-lh245" servicePort="100.71.48.174:80/TCP"
I0623 20:41:40.326979 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-mtkcd" servicePort="100.66.159.221:80/TCP"
I0623 20:41:40.327076 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-6sm6q" servicePort="100.67.136.133:80/TCP"
I0623 20:41:40.327175 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-btxfr" servicePort="100.70.153.179:80/TCP"
I0623 20:41:40.328290 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-bkz27" servicePort="100.67.119.226:80/TCP"
I0623 20:41:40.328408 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-ltm5g" servicePort="100.65.30.129:80/TCP"
I0623 20:41:40.328504 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-pl4kr" servicePort="100.71.194.112:80/TCP"
I0623 20:41:40.328584 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-s7xjg" servicePort="100.67.151.178:80/TCP"
I0623 20:41:40.328665 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-zc6v4" servicePort="100.69.140.46:80/TCP"
I0623 20:41:40.328752 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-68r9h" servicePort="100.71.188.206:80/TCP"
I0623 20:41:40.328830 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-5mn6v" servicePort="100.70.67.0:80/TCP"
I0623 20:41:40.328918 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-gnw2k" servicePort="100.67.206.48:80/TCP"
I0623 20:41:40.329001 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-vttbt" servicePort="100.69.216.60:80/TCP"
I0623 20:41:40.329112 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-s58f7" servicePort="100.71.229.45:80/TCP"
I0623 20:41:40.329269 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-vwx9t" servicePort="100.68.121.199:80/TCP"
I0623 20:41:40.329477 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-smgwg" servicePort="100.65.59.47:80/TCP"
I0623 20:41:40.329832 10 proxier.go:827] "Syncing iptables rules"
I0623 20:41:40.333483 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-g6hdw" portCount=1
I0623 20:41:40.390296 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-nfbg4" portCount=1
I0623 20:41:40.393660 10 proxier.go:794] "SyncProxyRules complete" elapsed="68.004738ms"
I0623 20:41:40.430260 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-tjnm7" portCount=1
I0623 20:41:40.491167 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-zdhh9" portCount=1
I0623 20:41:40.532669 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-fd6fm" portCount=1
I0623 20:41:40.581324 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-qcxx7" portCount=1
I0623 20:41:40.635662 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-hsjz5" portCount=1
I0623 20:41:40.686667 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-cnbs9" portCount=1
I0623 20:41:40.733831 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-2rc5x" portCount=1
I0623 20:41:40.777077 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-v8lrt" portCount=1
I0623 20:41:40.832839 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-hlvhq" portCount=1
I0623 20:41:40.881612 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-8lvjt" portCount=1
I0623 20:41:40.928787 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-b55fw" portCount=1
I0623 20:41:40.980908 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-7cdw7" portCount=1
I0623 20:41:41.032183 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-7mg7p" portCount=1
I0623 20:41:41.080148 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-q6wqb" portCount=1
I0623 20:41:41.133039 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-lxlrl" portCount=1
I0623 20:41:41.180652 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-znv6s" portCount=1
I0623 20:41:41.233090 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-9tpqh" portCount=1
I0623 20:41:41.285338 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-xpmwj" portCount=1
I0623 20:41:41.320923 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-znv6s" servicePort="100.67.202.73:80/TCP"
I0623 20:41:41.321180 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-xpmwj" servicePort="100.67.85.188:80/TCP"
I0623 20:41:41.321258 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-g6hdw" servicePort="100.71.98.191:80/TCP"
I0623 20:41:41.321316 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-qcxx7" servicePort="100.64.233.197:80/TCP"
I0623 20:41:41.321401 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-hsjz5" servicePort="100.64.66.247:80/TCP"
I0623 20:41:41.321490 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-q6wqb" servicePort="100.71.158.38:80/TCP"
I0623 20:41:41.321567 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-cnbs9" servicePort="100.68.103.57:80/TCP"
I0623 20:41:41.321636 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-b55fw" servicePort="100.68.32.170:80/TCP"
I0623 20:41:41.321650 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-7cdw7" servicePort="100.70.62.29:80/TCP"
I0623 20:41:41.321743 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-lxlrl" servicePort="100.66.114.251:80/TCP"
I0623 20:41:41.321974 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-nfbg4" servicePort="100.64.113.85:80/TCP"
I0623 20:41:41.322132 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-tjnm7" servicePort="100.70.69.38:80/TCP"
I0623 20:41:41.322249 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-zdhh9" servicePort="100.70.130.7:80/TCP"
I0623 20:41:41.322360 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-fd6fm" servicePort="100.68.85.200:80/TCP"
I0623 20:41:41.322464 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-v8lrt" servicePort="100.67.183.203:80/TCP"
I0623 20:41:41.322564 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-hlvhq" servicePort="100.71.169.214:80/TCP"
I0623 20:41:41.322602 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-9tpqh" servicePort="100.68.243.226:80/TCP"
I0623 20:41:41.322741 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-2rc5x" servicePort="100.69.111.161:80/TCP"
I0623 20:41:41.322757 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-8lvjt" servicePort="100.64.28.145:80/TCP"
I0623 20:41:41.322770 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-7mg7p" servicePort="100.71.114.2:80/TCP"
I0623 20:41:41.323021 10 proxier.go:827] "Syncing iptables rules"
I0623 20:41:41.334425 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-k4qdm" portCount=1
I0623 20:41:41.385601 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-2j6xq" portCount=1
I0623 20:41:41.391286 10 proxier.go:794] "SyncProxyRules complete" elapsed="70.394689ms"
I0623 20:41:41.444919 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-l7t7s" portCount=1
I0623 20:41:41.485096 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-7lt5m" portCount=1
I0623 20:41:41.538257 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-kkdrp" portCount=1
I0623 20:41:41.580426 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-pgrl2" portCount=1
I0623 20:41:41.634296 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-nqsxc" portCount=1
I0623 20:41:41.685237 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-9h6cb" portCount=1
I0623 20:41:41.732261 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-v9lxk" portCount=1
I0623 20:41:41.781720 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-vk4x6" portCount=1
I0623 20:41:41.810243 10 service.go:304] "Service updated ports" service="webhook-5124/e2e-test-webhook" portCount=0
I0623 20:41:41.830388 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-g4g2q" portCount=1
I0623 20:41:41.879195 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-fbpwj" portCount=1
I0623 20:41:41.932949 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-rnk5p" portCount=1
I0623 20:41:41.980243 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-p2fbc" portCount=1
I0623 20:41:42.030202 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-7j6c7" portCount=1
I0623 20:41:42.140051 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-25q4b" portCount=1
I0623 20:41:42.164178 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-hc45x" portCount=1
I0623 20:41:42.206953 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-slhls" portCount=1
I0623 20:41:42.249624 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-cd8jw" portCount=1
I0623 20:41:42.309706 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-v9ttm" portCount=1
I0623 20:41:42.311047 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-kkdrp" servicePort="100.71.61.47:80/TCP"
I0623 20:41:42.311428 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-nqsxc" servicePort="100.68.220.136:80/TCP"
I0623 20:41:42.311624 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-hc45x" servicePort="100.66.65.190:80/TCP"
I0623 20:41:42.311772 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-v9ttm" servicePort="100.67.25.74:80/TCP"
I0623 20:41:42.311965 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-2j6xq" servicePort="100.71.142.113:80/TCP"
I0623 20:41:42.312317 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-g4g2q" servicePort="100.71.177.165:80/TCP"
I0623 20:41:42.312767 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-7j6c7" servicePort="100.70.163.201:80/TCP"
I0623 20:41:42.312925 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-slhls" servicePort="100.70.42.39:80/TCP"
I0623 20:41:42.313065 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-l7t7s" servicePort="100.65.34.73:80/TCP"
I0623 20:41:42.313212 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-pgrl2" servicePort="100.68.230.103:80/TCP"
I0623 20:41:42.313578 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-v9lxk" servicePort="100.65.223.78:80/TCP"
I0623 20:41:42.313740 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-fbpwj" servicePort="100.65.113.242:80/TCP"
I0623 20:41:42.313888 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-cd8jw" servicePort="100.69.194.136:80/TCP"
I0623 20:41:42.314429 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-7lt5m" servicePort="100.67.86.224:80/TCP"
I0623 20:41:42.314604 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-9h6cb" servicePort="100.71.53.73:80/TCP"
I0623 20:41:42.316687 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-vk4x6" servicePort="100.69.173.174:80/TCP"
I0623 20:41:42.316834 10 service.go:444] "Removing service port" portName="webhook-5124/e2e-test-webhook"
I0623 20:41:42.316975 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-rnk5p" servicePort="100.69.163.89:80/TCP"
I0623 20:41:42.317106 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-p2fbc" servicePort="100.71.193.150:80/TCP"
I0623 20:41:42.317237 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-25q4b" servicePort="100.67.151.145:80/TCP"
I0623 20:41:42.317359 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-k4qdm" servicePort="100.66.251.143:80/TCP"
I0623 20:41:42.318292 10 proxier.go:827] "Syncing iptables rules"
I0623 20:41:42.386758 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-d2cd9" portCount=1
I0623 20:41:42.404250 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-724w9" portCount=1
I0623 20:41:42.445975 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-llwt4" portCount=1
I0623 20:41:42.453901 10 proxier.go:794] "SyncProxyRules complete" elapsed="142.862419ms"
I0623 20:41:42.496214 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-ncx49" portCount=1
I0623 20:41:42.533660 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-jfm47" portCount=1
I0623 20:41:42.628312 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-gwvjg" portCount=1
I0623 20:41:42.680442 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-8sjjn" portCount=1
I0623 20:41:42.730375 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-t2k24" portCount=1
I0623 20:41:42.781661 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-5cxjw" portCount=1
I0623 20:41:42.828706 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-d7nrf" portCount=1
I0623 20:41:42.892168 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-655rm" portCount=1
I0623 20:41:42.931633 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-2n5r9" portCount=1
I0623 20:41:42.985147 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-sdjlv" portCount=1
I0623 20:41:43.036183 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-n8l76" portCount=1
I0623 20:41:43.081926 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-dxxjb" portCount=1
I0623 20:41:43.128922 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-prqx5" portCount=1
I0623 20:41:43.181843 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-bkjq8" portCount=1
I0623 20:41:43.236610 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-d4g7l" portCount=1
I0623 20:41:43.283678 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-5wnff" portCount=1
I0623 20:41:43.320439 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-d2cd9" servicePort="100.70.167.155:80/TCP"
I0623 20:41:43.320467 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-t2k24" servicePort="100.67.24.225:80/TCP"
I0623 20:41:43.320481 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-2n5r9" servicePort="100.66.228.173:80/TCP"
I0623 20:41:43.320495 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-sdjlv" servicePort="100.66.40.230:80/TCP"
I0623 20:41:43.320510 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-bkjq8" servicePort="100.71.30.73:80/TCP"
I0623 20:41:43.320524 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-724w9" servicePort="100.68.218.229:80/TCP"
I0623 20:41:43.320535 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-5cxjw" servicePort="100.66.156.130:80/TCP"
I0623 20:41:43.320546 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-655rm" servicePort="100.69.109.159:80/TCP"
I0623 20:41:43.320557 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-n8l76" servicePort="100.70.58.231:80/TCP"
I0623 20:41:43.320569 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-prqx5" servicePort="100.65.28.183:80/TCP"
I0623 20:41:43.320582 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-d4g7l" servicePort="100.69.167.249:80/TCP"
I0623 20:41:43.320595 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-5wnff" servicePort="100.67.21.125:80/TCP"
I0623 20:41:43.320607 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-jfm47" servicePort="100.67.228.170:80/TCP"
I0623 20:41:43.320619 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-gwvjg" servicePort="100.64.37.164:80/TCP"
I0623 20:41:43.320632 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-8sjjn" servicePort="100.64.143.33:80/TCP"
I0623 20:41:43.320645 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-dxxjb" servicePort="100.64.222.88:80/TCP"
I0623 20:41:43.320658 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-llwt4" servicePort="100.67.164.212:80/TCP"
I0623 20:41:43.320671 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-ncx49" servicePort="100.68.227.253:80/TCP"
I0623 20:41:43.320683 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-d7nrf" servicePort="100.64.63.205:80/TCP"
I0623 20:41:43.321626 10 proxier.go:827] "Syncing iptables rules"
I0623 20:41:43.335874 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-v99dj" portCount=1
I0623 20:41:43.394535 10 proxier.go:794] "SyncProxyRules complete" elapsed="74.109703ms"
I0623 20:41:43.395239 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-4mdhk" portCount=1
I0623 20:41:43.437010 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-9z8cc" portCount=1
I0623 20:41:43.483538 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-zx52k" portCount=1
I0623 20:41:43.531038 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-n2fqw" portCount=1
I0623 20:41:43.585144 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-mfw7n" portCount=1
I0623 20:41:43.640145 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-7q4bt" portCount=1
I0623 20:41:43.691252 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-bg7pv" portCount=1
I0623 20:41:43.737015 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-5f7mm" portCount=1
I0623 20:41:43.783467 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-fqccx" portCount=1
I0623 20:41:43.854289 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-7jpfh" portCount=1
I0623 20:41:43.882874 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-fg9ct" portCount=1
I0623 20:41:43.929926 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-pbx8t" portCount=1
I0623 20:41:43.981723 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-hz94q" portCount=1
I0623 20:41:44.032853 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-9gmk5" portCount=1
I0623 20:41:44.081783 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-9t4r8" portCount=1
I0623 20:41:44.131769 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-9b7g9" portCount=1
I0623 20:41:44.187090 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-vxv66" portCount=1
I0623 20:41:44.241739 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-4dzcl" portCount=1
I0623 20:41:44.285673 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-v7gjl" portCount=1
I0623 20:41:44.321213 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-5f7mm" servicePort="100.67.65.187:80/TCP"
I0623 20:41:44.321243 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-7jpfh" servicePort="100.65.70.115:80/TCP"
I0623 20:41:44.321288 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-fg9ct" servicePort="100.66.8.96:80/TCP"
I0623 20:41:44.321308 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-v7gjl" servicePort="100.65.179.59:80/TCP"
I0623 20:41:44.321320 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-4mdhk" servicePort="100.64.79.74:80/TCP"
I0623 20:41:44.321331 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-7q4bt" servicePort="100.66.207.136:80/TCP"
I0623 20:41:44.321345 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-bg7pv" servicePort="100.67.76.75:80/TCP"
I0623 20:41:44.321356 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-hz94q" servicePort="100.66.90.13:80/TCP"
I0623 20:41:44.321366 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-9t4r8" servicePort="100.64.117.48:80/TCP"
I0623 20:41:44.321381 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-zx52k" servicePort="100.67.190.154:80/TCP"
I0623 20:41:44.321399 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-9gmk5" servicePort="100.65.222.26:80/TCP"
I0623 20:41:44.321411 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-vxv66" servicePort="100.70.211.52:80/TCP"
I0623 20:41:44.321422 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-v99dj" servicePort="100.69.183.226:80/TCP"
I0623 20:41:44.321432 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-9z8cc" servicePort="100.68.27.235:80/TCP"
I0623 20:41:44.321443 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-n2fqw" servicePort="100.64.168.23:80/TCP"
I0623 20:41:44.321454 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-mfw7n" servicePort="100.67.19.209:80/TCP"
I0623 20:41:44.321464 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-fqccx" servicePort="100.64.188.52:80/TCP"
I0623 20:41:44.321490 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-pbx8t" servicePort="100.71.162.124:80/TCP"
I0623 20:41:44.321501 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-9b7g9" servicePort="100.67.213.82:80/TCP"
I0623 20:41:44.321512 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-4dzcl" servicePort="100.64.134.194:80/TCP"
I0623 20:41:44.321833 10 proxier.go:827] "Syncing iptables rules"
I0623 20:41:44.335283 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-lmxll" portCount=1
I0623 20:41:44.389334 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-76w87" portCount=1
I0623 20:41:44.417873 10 proxier.go:794] "SyncProxyRules complete" elapsed="96.67496ms"
I0623 20:41:44.432717 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-449g4" portCount=1
I0623 20:41:44.496432 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-p8v6k" portCount=1
I0623 20:41:44.531494 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-sm75n" portCount=1
I0623 20:41:44.592585 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-fjc46" portCount=1
I0623 20:41:44.634272 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-dpsr2" portCount=1
I0623 20:41:44.681728 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-7xbhb" portCount=1
I0623 20:41:44.738584 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-w6ttc" portCount=1
I0623 20:41:44.781324 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-qk74w" portCount=1
I0623 20:41:44.845611 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-dq7ln" portCount=1
I0623 20:41:45.333491 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-449g4" servicePort="100.66.193.153:80/TCP"
I0623 20:41:45.333517 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-p8v6k" servicePort="100.69.9.154:80/TCP"
I0623 20:41:45.333528 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-sm75n" servicePort="100.70.95.241:80/TCP"
I0623 20:41:45.333544 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-lmxll" servicePort="100.67.203.70:80/TCP"
I0623 20:41:45.333555 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-76w87" servicePort="100.64.62.15:80/TCP"
I0623 20:41:45.333603 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-7xbhb" servicePort="100.68.57.114:80/TCP"
I0623 20:41:45.333622 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-w6ttc" servicePort="100.66.16.8:80/TCP"
I0623 20:41:45.333645 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-qk74w" servicePort="100.68.230.232:80/TCP"
I0623 20:41:45.333660 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-dq7ln" servicePort="100.65.198.189:80/TCP"
I0623 20:41:45.333681
10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-fjc46\" servicePort=\"100.65.220.102:80/TCP\"\nI0623 20:41:45.333697 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-dpsr2\" servicePort=\"100.67.116.69:80/TCP\"\nI0623 20:41:45.334160 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:41:45.399524 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"66.044644ms\"\nI0623 20:41:46.400602 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:41:46.468388 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"67.991968ms\"\nI0623 20:41:51.272416 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-229xf\" portCount=0\nI0623 20:41:51.272453 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-229xf\"\nI0623 20:41:51.272717 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:41:51.301683 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-25q4b\" portCount=0\nI0623 20:41:51.320213 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-2j6xq\" portCount=0\nI0623 20:41:51.335920 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-2n5r9\" portCount=0\nI0623 20:41:51.354873 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-2nxjp\" portCount=0\nI0623 20:41:51.365207 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-2r47p\" portCount=0\nI0623 20:41:51.382831 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-2rc5x\" portCount=0\nI0623 20:41:51.394189 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"121.732267ms\"\nI0623 20:41:51.394221 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-2j6xq\"\nI0623 20:41:51.394246 10 service.go:444] \"Removing service port\" 
portName=\"svc-latency-7847/latency-svc-2n5r9\"\nI0623 20:41:51.394257 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-2nxjp\"\nI0623 20:41:51.394270 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-2r47p\"\nI0623 20:41:51.394279 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-2rc5x\"\nI0623 20:41:51.394290 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-25q4b\"\nI0623 20:41:51.394567 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:41:51.396098 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-2x8xk\" portCount=0\nI0623 20:41:51.411176 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-449g4\" portCount=0\nI0623 20:41:51.432209 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-4dzcl\" portCount=0\nI0623 20:41:51.446632 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-4k6ch\" portCount=0\nI0623 20:41:51.462452 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-4mdhk\" portCount=0\nI0623 20:41:51.496543 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-4v8ld\" portCount=0\nI0623 20:41:51.504752 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-58rk5\" portCount=0\nI0623 20:41:51.510215 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"115.992315ms\"\nI0623 20:41:51.519080 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-5cxjw\" portCount=0\nI0623 20:41:51.534165 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-5f7mm\" portCount=0\nI0623 20:41:51.547859 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-5g2ql\" portCount=0\nI0623 20:41:51.560071 10 service.go:304] \"Service 
updated ports\" service=\"svc-latency-7847/latency-svc-5mn6v\" portCount=0\nI0623 20:41:51.575061 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-5wgst\" portCount=0\nI0623 20:41:51.585140 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-5wnff\" portCount=0\nI0623 20:41:51.596910 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-5x85d\" portCount=0\nI0623 20:41:51.616474 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-655rm\" portCount=0\nI0623 20:41:51.631897 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-68r9h\" portCount=0\nI0623 20:41:51.668727 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-6dl9n\" portCount=0\nI0623 20:41:51.692971 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-6hs86\" portCount=0\nI0623 20:41:51.701518 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-6n5dd\" portCount=0\nI0623 20:41:51.712950 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-6nldp\" portCount=0\nI0623 20:41:51.722366 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-6sm6q\" portCount=0\nI0623 20:41:51.729135 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-724w9\" portCount=0\nI0623 20:41:51.738678 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-76w87\" portCount=0\nI0623 20:41:51.747099 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-7cdw7\" portCount=0\nI0623 20:41:51.757793 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-7j6c7\" portCount=0\nI0623 20:41:51.766644 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-7jpfh\" portCount=0\nI0623 
20:41:51.774488 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-7lt5m\" portCount=0\nI0623 20:41:51.786689 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-7mg7p\" portCount=0\nI0623 20:41:51.796334 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-7q4bt\" portCount=0\nI0623 20:41:51.809257 10 service.go:304] \"Service updated ports\" service=\"services-9027/externalname-service\" portCount=0\nI0623 20:41:51.812749 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-7xbhb\" portCount=0\nI0623 20:41:51.827881 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-7z96f\" portCount=0\nI0623 20:41:51.844068 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-86nmm\" portCount=0\nI0623 20:41:51.852380 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-8kd85\" portCount=0\nI0623 20:41:51.866716 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-8lvjt\" portCount=0\nI0623 20:41:51.876682 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-8sjjn\" portCount=0\nI0623 20:41:51.889246 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-8xzjg\" portCount=0\nI0623 20:41:51.899280 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-9b7g9\" portCount=0\nI0623 20:41:51.916331 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-9gmk5\" portCount=0\nI0623 20:41:51.929959 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-9h6cb\" portCount=0\nI0623 20:41:51.941105 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-9nmbz\" portCount=0\nI0623 20:41:51.951767 10 service.go:304] \"Service updated ports\" 
service=\"svc-latency-7847/latency-svc-9p68w\" portCount=0\nI0623 20:41:51.976342 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-9t4r8\" portCount=0\nI0623 20:41:51.989155 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-9tpqh\" portCount=0\nI0623 20:41:52.002971 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-9z8cc\" portCount=0\nI0623 20:41:52.013315 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-b22s4\" portCount=0\nI0623 20:41:52.026951 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-b55fw\" portCount=0\nI0623 20:41:52.034849 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-bg5xf\" portCount=0\nI0623 20:41:52.045010 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-bg7pv\" portCount=0\nI0623 20:41:52.052380 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-bhhr2\" portCount=0\nI0623 20:41:52.066222 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-bjxxw\" portCount=0\nI0623 20:41:52.082874 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-bkjq8\" portCount=0\nI0623 20:41:52.093002 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-bkz27\" portCount=0\nI0623 20:41:52.104297 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-br84t\" portCount=0\nI0623 20:41:52.115698 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-btt9z\" portCount=0\nI0623 20:41:52.123731 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-btxfr\" portCount=0\nI0623 20:41:52.134984 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-c4dsp\" portCount=0\nI0623 20:41:52.143675 10 
service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-c4nd4\" portCount=0\nI0623 20:41:52.150592 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-c6jfc\" portCount=0\nI0623 20:41:52.160501 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-cd8jw\" portCount=0\nI0623 20:41:52.171755 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-cnbs9\" portCount=0\nI0623 20:41:52.180475 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-d2cd9\" portCount=0\nI0623 20:41:52.202264 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-d4g7l\" portCount=0\nI0623 20:41:52.224038 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-d57tn\" portCount=0\nI0623 20:41:52.239100 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-d7nrf\" portCount=0\nI0623 20:41:52.246928 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-dckgs\" portCount=0\nI0623 20:41:52.256306 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-dlbmc\" portCount=0\nI0623 20:41:52.267314 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-dpsr2\" portCount=0\nI0623 20:41:52.300484 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-dq7ln\" portCount=0\nI0623 20:41:52.300826 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-6nldp\"\nI0623 20:41:52.301002 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-7mg7p\"\nI0623 20:41:52.301106 10 service.go:444] \"Removing service port\" portName=\"services-9027/externalname-service:http\"\nI0623 20:41:52.301578 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-8xzjg\"\nI0623 20:41:52.302075 10 
service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-cd8jw\"\nI0623 20:41:52.302485 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-cnbs9\"\nI0623 20:41:52.302625 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-4v8ld\"\nI0623 20:41:52.303071 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-dlbmc\"\nI0623 20:41:52.303251 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-bg7pv\"\nI0623 20:41:52.303413 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-bhhr2\"\nI0623 20:41:52.303472 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-4k6ch\"\nI0623 20:41:52.305824 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-6hs86\"\nI0623 20:41:52.305998 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-76w87\"\nI0623 20:41:52.306162 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-86nmm\"\nI0623 20:41:52.306304 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-8sjjn\"\nI0623 20:41:52.306439 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-bg5xf\"\nI0623 20:41:52.306567 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-bkjq8\"\nI0623 20:41:52.306697 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-9p68w\"\nI0623 20:41:52.306828 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-b22s4\"\nI0623 20:41:52.306969 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-bjxxw\"\nI0623 20:41:52.307124 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-dq7ln\"\nI0623 20:41:52.307266 10 service.go:444] \"Removing 
service port\" portName=\"svc-latency-7847/latency-svc-724w9\"\nI0623 20:41:52.307400 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-9h6cb\"\nI0623 20:41:52.307555 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-9z8cc\"\nI0623 20:41:52.307699 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-4mdhk\"\nI0623 20:41:52.307852 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-7j6c7\"\nI0623 20:41:52.308003 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-c4nd4\"\nI0623 20:41:52.308154 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-d4g7l\"\nI0623 20:41:52.308304 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-5mn6v\"\nI0623 20:41:52.309134 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-c4dsp\"\nI0623 20:41:52.309307 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-d7nrf\"\nI0623 20:41:52.309454 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-dckgs\"\nI0623 20:41:52.309580 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-58rk5\"\nI0623 20:41:52.309596 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-5wgst\"\nI0623 20:41:52.309631 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-6sm6q\"\nI0623 20:41:52.309641 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-7q4bt\"\nI0623 20:41:52.309650 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-btxfr\"\nI0623 20:41:52.309658 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-d57tn\"\nI0623 20:41:52.309667 10 service.go:444] \"Removing service port\" 
portName=\"svc-latency-7847/latency-svc-655rm\"\nI0623 20:41:52.309704 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-7lt5m\"\nI0623 20:41:52.309714 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-9b7g9\"\nI0623 20:41:52.309735 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-9tpqh\"\nI0623 20:41:52.309745 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-2x8xk\"\nI0623 20:41:52.309754 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-4dzcl\"\nI0623 20:41:52.309866 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-6n5dd\"\nI0623 20:41:52.309885 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-7jpfh\"\nI0623 20:41:52.313819 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-8kd85\"\nI0623 20:41:52.313846 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-b55fw\"\nI0623 20:41:52.313856 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-5wnff\"\nI0623 20:41:52.313865 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-5x85d\"\nI0623 20:41:52.313875 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-6dl9n\"\nI0623 20:41:52.313882 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-7z96f\"\nI0623 20:41:52.313903 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-9t4r8\"\nI0623 20:41:52.313913 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-bkz27\"\nI0623 20:41:52.313922 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-5cxjw\"\nI0623 20:41:52.313932 10 service.go:444] \"Removing service port\" 
portName=\"svc-latency-7847/latency-svc-7cdw7\"\nI0623 20:41:52.313940 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-7xbhb\"\nI0623 20:41:52.313950 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-9gmk5\"\nI0623 20:41:52.313958 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-c6jfc\"\nI0623 20:41:52.313983 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-dpsr2\"\nI0623 20:41:52.313993 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-5g2ql\"\nI0623 20:41:52.314002 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-68r9h\"\nI0623 20:41:52.314011 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-8lvjt\"\nI0623 20:41:52.314019 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-br84t\"\nI0623 20:41:52.314045 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-d2cd9\"\nI0623 20:41:52.314054 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-449g4\"\nI0623 20:41:52.314062 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-5f7mm\"\nI0623 20:41:52.314071 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-9nmbz\"\nI0623 20:41:52.314085 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-btt9z\"\nI0623 20:41:52.314443 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:41:52.319711 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-dxxjb\" portCount=0\nI0623 20:41:52.330197 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-fbphr\" portCount=0\nI0623 20:41:52.339720 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-fbpwj\" 
portCount=0\nI0623 20:41:52.352510 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-fd6fm\" portCount=0\nI0623 20:41:52.369952 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-ffmsr\" portCount=0\nI0623 20:41:52.379836 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-fg4bp\" portCount=0\nI0623 20:41:52.392376 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-fg9ct\" portCount=0\nI0623 20:41:52.413524 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-fjc46\" portCount=0\nI0623 20:41:52.419649 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"118.823641ms\"\nI0623 20:41:52.431806 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-flp5b\" portCount=0\nI0623 20:41:52.440909 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-fp6zr\" portCount=0\nI0623 20:41:52.448325 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-fqccx\" portCount=0\nI0623 20:41:52.457256 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-fs7v9\" portCount=0\nI0623 20:41:52.466468 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-fwx9m\" portCount=0\nI0623 20:41:52.474338 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-g4g2q\" portCount=0\nI0623 20:41:52.483030 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-g6hdw\" portCount=0\nI0623 20:41:52.492229 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-gbd75\" portCount=0\nI0623 20:41:52.501263 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-gfwdg\" portCount=0\nI0623 20:41:52.506991 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-gl82d\" 
portCount=0\nI0623 20:41:52.522061 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-gnw2k\" portCount=0\nI0623 20:41:52.529972 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-gwvjg\" portCount=0\nI0623 20:41:52.540997 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-h7qwf\" portCount=0\nI0623 20:41:52.549806 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-h8blk\" portCount=0\nI0623 20:41:52.558704 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-h9bxw\" portCount=0\nI0623 20:41:52.589181 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-hc45x\" portCount=0\nI0623 20:41:52.605471 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-hhb55\" portCount=0\nI0623 20:41:52.626461 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-hlvhq\" portCount=0\nI0623 20:41:52.633270 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-hs667\" portCount=0\nI0623 20:41:52.651443 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-hsjz5\" portCount=0\nI0623 20:41:52.663892 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-hz94q\" portCount=0\nI0623 20:41:52.681211 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-jfm47\" portCount=0\nI0623 20:41:52.701006 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-jnlfc\" portCount=0\nI0623 20:41:52.718659 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-jq6fx\" portCount=0\nI0623 20:41:52.733168 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-k4qdm\" portCount=0\nI0623 20:41:52.753070 10 service.go:304] \"Service updated ports\" 
service=\"svc-latency-7847/latency-svc-k8prz\" portCount=0\nI0623 20:41:52.765393 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-kb265\" portCount=0\nI0623 20:41:52.777343 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-kdkdt\" portCount=0\nI0623 20:41:52.784592 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-kkdrp\" portCount=0\nI0623 20:41:52.792329 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-kt757\" portCount=0\nI0623 20:41:52.800816 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-kv8gl\" portCount=0\nI0623 20:41:52.807825 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-l45zw\" portCount=0\nI0623 20:41:52.818820 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-l7t7s\" portCount=0\nI0623 20:41:52.827149 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-l9jpp\" portCount=0\nI0623 20:41:52.840990 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-lh245\" portCount=0\nI0623 20:41:52.852340 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-llwt4\" portCount=0\nI0623 20:41:52.861870 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-lmxll\" portCount=0\nI0623 20:41:52.872619 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-ltm5g\" portCount=0\nI0623 20:41:52.882163 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-lxlrl\" portCount=0\nI0623 20:41:52.891591 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-mfb49\" portCount=0\nI0623 20:41:52.899160 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-mfw7n\" portCount=0\nI0623 20:41:52.905867 10 
service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-mtkcd\" portCount=0\nI0623 20:41:52.914163 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-n2fqw\" portCount=0\nI0623 20:41:52.930729 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-n8l76\" portCount=0\nI0623 20:41:52.949719 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-ncx49\" portCount=0\nI0623 20:41:52.957171 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-nfbg4\" portCount=0\nI0623 20:41:52.967053 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-nkr6p\" portCount=0\nI0623 20:41:52.978804 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-nqsxc\" portCount=0\nI0623 20:41:52.988862 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-nvpl7\" portCount=0\nI0623 20:41:52.996583 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-nx5js\" portCount=0\nI0623 20:41:53.007619 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-nztnh\" portCount=0\nI0623 20:41:53.021180 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-p2fbc\" portCount=0\nI0623 20:41:53.033459 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-p2q78\" portCount=0\nI0623 20:41:53.043061 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-p8v6k\" portCount=0\nI0623 20:41:53.058260 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-pbx8t\" portCount=0\nI0623 20:41:53.071229 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-pbzq9\" portCount=0\nI0623 20:41:53.085680 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-pfdnh\" 
portCount=0\nI0623 20:41:53.122916 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-pgrl2\" portCount=0\nI0623 20:41:53.138657 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-pl4kr\" portCount=0\nI0623 20:41:53.173600 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-prqx5\" portCount=0\nI0623 20:41:53.236234 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-q2gmm\" portCount=0\nI0623 20:41:53.278385 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-q6wqb\" portCount=0\nI0623 20:41:53.278601 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-jnlfc\"\nI0623 20:41:53.278712 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-kt757\"\nI0623 20:41:53.278817 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-l45zw\"\nI0623 20:41:53.278913 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-mtkcd\"\nI0623 20:41:53.279015 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-nx5js\"\nI0623 20:41:53.279852 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-hhb55\"\nI0623 20:41:53.280001 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-lxlrl\"\nI0623 20:41:53.280100 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-n8l76\"\nI0623 20:41:53.280203 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-nqsxc\"\nI0623 20:41:53.280323 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-p8v6k\"\nI0623 20:41:53.280429 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-gl82d\"\nI0623 20:41:53.280522 10 service.go:444] \"Removing service port\" 
portName=\"svc-latency-7847/latency-svc-k8prz\"\nI0623 20:41:53.280703 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-hsjz5\"\nI0623 20:41:53.280822 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-kkdrp\"\nI0623 20:41:53.280924 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-ltm5g\"\nI0623 20:41:53.281255 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-prqx5\"\nI0623 20:41:53.281382 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-q6wqb\"\nI0623 20:41:53.281480 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-fd6fm\"\nI0623 20:41:53.281587 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-gbd75\"\nI0623 20:41:53.281679 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-gwvjg\"\nI0623 20:41:53.281773 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-h8blk\"\nI0623 20:41:53.281880 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-k4qdm\"\nI0623 20:41:53.281978 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-kv8gl\"\nI0623 20:41:53.282133 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-p2q78\"\nI0623 20:41:53.282264 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-pfdnh\"\nI0623 20:41:53.282369 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-q2gmm\"\nI0623 20:41:53.282487 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-dxxjb\"\nI0623 20:41:53.282598 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-fg4bp\"\nI0623 20:41:53.282713 10 service.go:444] \"Removing service port\" 
portName=\"svc-latency-7847/latency-svc-gfwdg\"\nI0623 20:41:53.282828 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-nkr6p\"\nI0623 20:41:53.283135 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-pbx8t\"\nI0623 20:41:53.283260 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-fg9ct\"\nI0623 20:41:53.283365 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-fwx9m\"\nI0623 20:41:53.283462 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-kb265\"\nI0623 20:41:53.283564 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-l7t7s\"\nI0623 20:41:53.283660 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-mfb49\"\nI0623 20:41:53.283761 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-mfw7n\"\nI0623 20:41:53.283872 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-p2fbc\"\nI0623 20:41:53.283978 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-n2fqw\"\nI0623 20:41:53.284085 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-fbphr\"\nI0623 20:41:53.284209 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-fjc46\"\nI0623 20:41:53.284318 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-g6hdw\"\nI0623 20:41:53.284434 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-hlvhq\"\nI0623 20:41:53.284529 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-hs667\"\nI0623 20:41:53.284633 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-hz94q\"\nI0623 20:41:53.284732 10 service.go:444] \"Removing service port\" 
portName=\"svc-latency-7847/latency-svc-jfm47\"\nI0623 20:41:53.284833 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-llwt4\"\nI0623 20:41:53.284925 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-g4g2q\"\nI0623 20:41:53.285033 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-hc45x\"\nI0623 20:41:53.285115 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-l9jpp\"\nI0623 20:41:53.285193 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-ncx49\"\nI0623 20:41:53.285296 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-nvpl7\"\nI0623 20:41:53.285385 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-pgrl2\"\nI0623 20:41:53.285464 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-fbpwj\"\nI0623 20:41:53.285475 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-fqccx\"\nI0623 20:41:53.285484 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-fs7v9\"\nI0623 20:41:53.285494 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-pl4kr\"\nI0623 20:41:53.285503 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-fp6zr\"\nI0623 20:41:53.285511 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-pbzq9\"\nI0623 20:41:53.285518 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-ffmsr\"\nI0623 20:41:53.285526 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-h9bxw\"\nI0623 20:41:53.285533 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-lh245\"\nI0623 20:41:53.285540 10 service.go:444] \"Removing service port\" 
portName=\"svc-latency-7847/latency-svc-nfbg4\"\nI0623 20:41:53.285549 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-gnw2k\"\nI0623 20:41:53.285556 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-h7qwf\"\nI0623 20:41:53.285563 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-jq6fx\"\nI0623 20:41:53.285572 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-kdkdt\"\nI0623 20:41:53.285579 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-flp5b\"\nI0623 20:41:53.285594 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-lmxll\"\nI0623 20:41:53.285603 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-nztnh\"\nI0623 20:41:53.286326 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:41:53.320392 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-q8bcw\" portCount=0\nI0623 20:41:53.350428 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-q8qg6\" portCount=0\nI0623 20:41:53.369323 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-qcxx7\" portCount=0\nI0623 20:41:53.403177 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-qhdm8\" portCount=0\nI0623 20:41:53.436005 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"157.402295ms\"\nI0623 20:41:53.443713 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-qk74w\" portCount=0\nI0623 20:41:53.484922 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-qk7cp\" portCount=0\nI0623 20:41:53.524062 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-r79hp\" portCount=0\nI0623 20:41:53.550541 10 service.go:304] \"Service updated ports\" 
service=\"svc-latency-7847/latency-svc-rgkxh\" portCount=0\nI0623 20:41:53.574350 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-rkllg\" portCount=0\nI0623 20:41:53.587309 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-rkv78\" portCount=0\nI0623 20:41:53.598230 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-rmv65\" portCount=0\nI0623 20:41:53.613889 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-rnk5p\" portCount=0\nI0623 20:41:53.633959 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-rzgsp\" portCount=0\nI0623 20:41:53.687526 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-s2qn6\" portCount=0\nI0623 20:41:53.733836 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-s58f7\" portCount=0\nI0623 20:41:53.770137 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-s7xjg\" portCount=0\nI0623 20:41:53.793089 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-s829t\" portCount=0\nI0623 20:41:53.804536 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-sdjlv\" portCount=0\nI0623 20:41:53.813727 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-sf9qg\" portCount=0\nI0623 20:41:53.826546 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-slhls\" portCount=0\nI0623 20:41:53.836668 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-sm75n\" portCount=0\nI0623 20:41:53.847667 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-smgwg\" portCount=0\nI0623 20:41:53.857044 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-st8bv\" portCount=0\nI0623 20:41:53.865054 10 
service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-t2k24\" portCount=0\nI0623 20:41:53.876028 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-t2s25\" portCount=0\nI0623 20:41:53.887460 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-tjnm7\" portCount=0\nI0623 20:41:53.898806 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-v7gjl\" portCount=0\nI0623 20:41:53.917464 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-v7w9j\" portCount=0\nI0623 20:41:53.958920 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-v8lrt\" portCount=0\nI0623 20:41:53.970816 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-v99dj\" portCount=0\nI0623 20:41:53.990465 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-v9lxk\" portCount=0\nI0623 20:41:53.997541 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-v9ttm\" portCount=0\nI0623 20:41:54.008546 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-vgk7t\" portCount=0\nI0623 20:41:54.021392 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-vjwc8\" portCount=0\nI0623 20:41:54.031135 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-vk4x6\" portCount=0\nI0623 20:41:54.047627 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-vkxxx\" portCount=0\nI0623 20:41:54.060660 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-vs4f5\" portCount=0\nI0623 20:41:54.073436 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-vsvrd\" portCount=0\nI0623 20:41:54.082847 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-vttbt\" 
portCount=0\nI0623 20:41:54.092815 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-vwx9t\" portCount=0\nI0623 20:41:54.130264 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-vxv66\" portCount=0\nI0623 20:41:54.158100 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-vzz8n\" portCount=0\nI0623 20:41:54.177545 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-w2xwx\" portCount=0\nI0623 20:41:54.189030 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-w4wjs\" portCount=0\nI0623 20:41:54.200790 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-w5x7l\" portCount=0\nI0623 20:41:54.216273 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-w6ttc\" portCount=0\nI0623 20:41:54.250089 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-w744x\" portCount=0\nI0623 20:41:54.266782 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-wc2kr\" portCount=0\nI0623 20:41:54.286311 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-xpmwj\" portCount=0\nI0623 20:41:54.286615 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-vxv66\"\nI0623 20:41:54.286769 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-w2xwx\"\nI0623 20:41:54.286896 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-w5x7l\"\nI0623 20:41:54.287011 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-rnk5p\"\nI0623 20:41:54.287138 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-s7xjg\"\nI0623 20:41:54.287246 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-st8bv\"\nI0623 
20:41:54.287376 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-qk7cp\"\nI0623 20:41:54.287488 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-rmv65\"\nI0623 20:41:54.287620 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-s2qn6\"\nI0623 20:41:54.287731 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-vk4x6\"\nI0623 20:41:54.287881 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-w4wjs\"\nI0623 20:41:54.288003 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-q8qg6\"\nI0623 20:41:54.288155 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-rzgsp\"\nI0623 20:41:54.288272 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-s829t\"\nI0623 20:41:54.288420 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-t2s25\"\nI0623 20:41:54.288544 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-v8lrt\"\nI0623 20:41:54.288672 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-vkxxx\"\nI0623 20:41:54.288787 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-qhdm8\"\nI0623 20:41:54.288897 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-s58f7\"\nI0623 20:41:54.289036 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-slhls\"\nI0623 20:41:54.289172 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-smgwg\"\nI0623 20:41:54.289373 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-tjnm7\"\nI0623 20:41:54.289516 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-v7gjl\"\nI0623 20:41:54.289664 10 
service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-vgk7t\"\nI0623 20:41:54.289778 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-vwx9t\"\nI0623 20:41:54.289901 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-r79hp\"\nI0623 20:41:54.290008 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-w6ttc\"\nI0623 20:41:54.290132 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-wc2kr\"\nI0623 20:41:54.290245 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-rkv78\"\nI0623 20:41:54.290363 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-qk74w\"\nI0623 20:41:54.290471 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-rgkxh\"\nI0623 20:41:54.290583 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-rkllg\"\nI0623 20:41:54.291394 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-sf9qg\"\nI0623 20:41:54.291559 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-sm75n\"\nI0623 20:41:54.291717 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-t2k24\"\nI0623 20:41:54.291864 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-v99dj\"\nI0623 20:41:54.292001 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-q8bcw\"\nI0623 20:41:54.292126 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-vs4f5\"\nI0623 20:41:54.292240 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-vsvrd\"\nI0623 20:41:54.292366 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-v9lxk\"\nI0623 20:41:54.292484 10 service.go:444] \"Removing 
service port\" portName=\"svc-latency-7847/latency-svc-sdjlv\"\nI0623 20:41:54.292599 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-v7w9j\"\nI0623 20:41:54.292817 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-v9ttm\"\nI0623 20:41:54.292938 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-vjwc8\"\nI0623 20:41:54.293035 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-vttbt\"\nI0623 20:41:54.293188 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-vzz8n\"\nI0623 20:41:54.293307 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-w744x\"\nI0623 20:41:54.293417 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-qcxx7\"\nI0623 20:41:54.293521 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-xpmwj\"\nI0623 20:41:54.293929 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:41:54.298179 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-xsldq\" portCount=0\nI0623 20:41:54.316387 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-zc6v4\" portCount=0\nI0623 20:41:54.332936 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-zdhh9\" portCount=0\nI0623 20:41:54.358387 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-znv6s\" portCount=0\nI0623 20:41:54.378154 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-zx52k\" portCount=0\nI0623 20:41:54.401714 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-zxzjz\" portCount=0\nI0623 20:41:54.407438 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-zzc6j\" portCount=0\nI0623 20:41:54.418670 10 proxier.go:794] 
\"SyncProxyRules complete\" elapsed=\"131.987002ms\"\nI0623 20:41:55.419559 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-zx52k\"\nI0623 20:41:55.419770 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-zxzjz\"\nI0623 20:41:55.419867 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-zzc6j\"\nI0623 20:41:55.419945 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-xsldq\"\nI0623 20:41:55.420023 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-zc6v4\"\nI0623 20:41:55.420096 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-zdhh9\"\nI0623 20:41:55.420176 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-znv6s\"\nI0623 20:41:55.421950 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:41:55.465372 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"45.835234ms\"\nI0623 20:42:04.984719 10 service.go:304] \"Service updated ports\" service=\"dns-4608/dns-test-service-3\" portCount=1\nI0623 20:42:04.984805 10 service.go:419] \"Adding new service port\" portName=\"dns-4608/dns-test-service-3:http\" servicePort=\"100.68.194.253:80/TCP\"\nI0623 20:42:04.984903 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:42:05.012127 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"27.368235ms\"\nI0623 20:42:09.953148 10 service.go:304] \"Service updated ports\" service=\"dns-4608/dns-test-service-3\" portCount=0\nI0623 20:42:09.953187 10 service.go:444] \"Removing service port\" portName=\"dns-4608/dns-test-service-3:http\"\nI0623 20:42:09.953308 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:42:09.983445 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"30.252712ms\"\nI0623 20:42:15.112462 10 service.go:304] \"Service updated ports\" service=\"webhook-8412/e2e-test-webhook\" portCount=1\nI0623 
20:42:15.112501 10 service.go:419] \"Adding new service port\" portName=\"webhook-8412/e2e-test-webhook\" servicePort=\"100.64.166.205:8443/TCP\"\nI0623 20:42:15.112592 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:42:15.154771 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"42.272577ms\"\nI0623 20:42:15.154972 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:42:15.180919 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"26.117122ms\"\nI0623 20:42:19.579458 10 service.go:304] \"Service updated ports\" service=\"webhook-8412/e2e-test-webhook\" portCount=0\nI0623 20:42:19.579495 10 service.go:444] \"Removing service port\" portName=\"webhook-8412/e2e-test-webhook\"\nI0623 20:42:19.579593 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:42:19.706162 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"126.66035ms\"\nI0623 20:42:19.706288 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:42:19.791613 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"85.414559ms\"\nI0623 20:43:02.102965 10 service.go:304] \"Service updated ports\" service=\"services-7998/nodeport-reuse\" portCount=1\nI0623 20:43:02.103018 10 service.go:419] \"Adding new service port\" portName=\"services-7998/nodeport-reuse\" servicePort=\"100.70.26.102:80/TCP\"\nI0623 20:43:02.103111 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:43:02.126648 10 proxier.go:1604] \"Opened local port\" port={Description:nodePort for services-7998/nodeport-reuse IP: IPFamily:4 Port:30284 Protocol:TCP}\nE0623 20:43:02.126736 10 proxier.go:1600] \"can't open port, skipping it\" err=\"listen tcp4 :30284: bind: address already in use\" port={Description:nodePort for services-7998/nodeport-reuse IP: IPFamily:4 Port:30284 Protocol:TCP}\nI0623 20:43:02.136375 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"33.364905ms\"\nI0623 20:43:02.136485 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:43:02.175486 10 proxier.go:794] \"SyncProxyRules 
complete\" elapsed=\"39.075907ms\"\nI0623 20:43:02.205855 10 service.go:304] \"Service updated ports\" service=\"services-7998/nodeport-reuse\" portCount=0\nI0623 20:43:03.175629 10 service.go:444] \"Removing service port\" portName=\"services-7998/nodeport-reuse\"\nI0623 20:43:03.175752 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:43:03.202407 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"26.794572ms\"\nI0623 20:43:05.836536 10 service.go:304] \"Service updated ports\" service=\"services-7998/nodeport-reuse\" portCount=1\nI0623 20:43:05.836598 10 service.go:419] \"Adding new service port\" portName=\"services-7998/nodeport-reuse\" servicePort=\"100.69.187.52:80/TCP\"\nI0623 20:43:05.836694 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:43:05.886505 10 proxier.go:1604] \"Opened local port\" port={Description:nodePort for services-7998/nodeport-reuse IP: IPFamily:4 Port:30284 Protocol:TCP}\nE0623 20:43:05.886646 10 proxier.go:1600] \"can't open port, skipping it\" err=\"listen tcp4 :30284: bind: address already in use\" port={Description:nodePort for services-7998/nodeport-reuse IP: IPFamily:4 Port:30284 Protocol:TCP}\nI0623 20:43:05.901591 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"65.002773ms\"\nI0623 20:43:05.901913 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:43:05.950005 10 service.go:304] \"Service updated ports\" service=\"services-7998/nodeport-reuse\" portCount=0\nI0623 20:43:05.958407 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"56.773201ms\"\nI0623 20:43:06.320248 10 service.go:304] \"Service updated ports\" service=\"webhook-2076/e2e-test-webhook\" portCount=1\nI0623 20:43:06.959168 10 service.go:444] \"Removing service port\" portName=\"services-7998/nodeport-reuse\"\nI0623 20:43:06.959210 10 service.go:419] \"Adding new service port\" portName=\"webhook-2076/e2e-test-webhook\" servicePort=\"100.65.14.108:8443/TCP\"\nI0623 20:43:06.959402 10 proxier.go:827] \"Syncing iptables 
rules\"\nI0623 20:43:06.994649 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"35.485405ms\"\nI0623 20:43:08.185254 10 service.go:304] \"Service updated ports\" service=\"webhook-2076/e2e-test-webhook\" portCount=0\nI0623 20:43:08.185293 10 service.go:444] \"Removing service port\" portName=\"webhook-2076/e2e-test-webhook\"\nI0623 20:43:08.185386 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:43:08.224420 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"39.111336ms\"\nI0623 20:43:09.224627 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:43:09.257375 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"32.820314ms\"\nI0623 20:43:29.286966 10 service.go:304] \"Service updated ports\" service=\"kubectl-6650/agnhost-primary\" portCount=1\nI0623 20:43:29.287009 10 service.go:419] \"Adding new service port\" portName=\"kubectl-6650/agnhost-primary\" servicePort=\"100.66.11.72:6379/TCP\"\nI0623 20:43:29.287392 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:43:29.336260 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"49.252057ms\"\nI0623 20:43:29.336364 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:43:29.365034 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"28.744665ms\"\nI0623 20:43:31.608700 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:43:31.640965 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"32.391006ms\"\nI0623 20:43:41.667552 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:43:41.712659 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"45.198638ms\"\nI0623 20:43:41.715076 10 service.go:304] \"Service updated ports\" service=\"kubectl-6650/agnhost-primary\" portCount=0\nI0623 20:43:41.715104 10 service.go:444] \"Removing service port\" portName=\"kubectl-6650/agnhost-primary\"\nI0623 20:43:41.715188 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:43:41.763574 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"48.465154ms\"\nI0623 
20:43:42.764187 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:43:42.807333 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"43.26726ms\"\nI0623 20:43:51.809191 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:43:51.920265 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"111.175462ms\"\nI0623 20:43:51.920403 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:43:51.974754 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"54.45747ms\"\nI0623 20:43:54.208113 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:43:54.341354 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"133.343869ms\"\nI0623 20:43:54.341494 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:43:54.386714 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"45.326142ms\"\nI0623 20:43:56.378780 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:43:56.444125 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"65.443956ms\"\nI0623 20:43:56.444281 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:43:56.495623 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"51.451981ms\"\nI0623 20:43:57.495870 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:43:57.523233 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"27.521737ms\"\nI0623 20:44:01.575724 10 service.go:304] \"Service updated ports\" service=\"webhook-6283/e2e-test-webhook\" portCount=1\nI0623 20:44:01.575778 10 service.go:419] \"Adding new service port\" portName=\"webhook-6283/e2e-test-webhook\" servicePort=\"100.68.214.105:8443/TCP\"\nI0623 20:44:01.575873 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:44:01.610491 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"34.727448ms\"\nI0623 20:44:01.610645 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:44:01.646442 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"35.911802ms\"\nI0623 20:44:04.200684 10 service.go:304] \"Service updated ports\" 
service=\"webhook-6283/e2e-test-webhook\" portCount=0\nI0623 20:44:04.200721 10 service.go:444] \"Removing service port\" portName=\"webhook-6283/e2e-test-webhook\"\nI0623 20:44:04.200814 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:44:04.262889 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"62.161356ms\"\nI0623 20:44:04.263002 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:44:04.324741 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"61.817576ms\"\nI0623 20:44:05.967336 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:44:06.042277 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"75.048079ms\"\nI0623 20:44:07.042483 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:44:07.069459 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"27.061701ms\"\nI0623 20:44:10.874561 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:44:10.974191 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"99.693517ms\"\nI0623 20:44:10.974329 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:44:11.023656 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"49.432469ms\"\nI0623 20:44:13.520392 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:44:13.659284 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"138.981245ms\"\nI0623 20:44:13.788439 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:44:13.908701 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"120.359642ms\"\nI0623 20:44:14.909221 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:44:14.947542 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"38.417053ms\"\nI0623 20:44:15.038716 10 service.go:304] \"Service updated ports\" service=\"kubectl-1666/agnhost-primary\" portCount=1\nI0623 20:44:15.949615 10 service.go:419] \"Adding new service port\" portName=\"kubectl-1666/agnhost-primary\" servicePort=\"100.66.109.188:6379/TCP\"\nI0623 20:44:15.949726 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:44:16.122240 
10 proxier.go:794] "SyncProxyRules complete" elapsed="172.652045ms"
I0623 20:44:22.267159 10 service.go:304] "Service updated ports" service="kubectl-1666/agnhost-primary" portCount=0
I0623 20:44:22.267215 10 service.go:444] "Removing service port" portName="kubectl-1666/agnhost-primary"
I0623 20:44:22.267302 10 proxier.go:827] "Syncing iptables rules"
I0623 20:44:22.416864 10 proxier.go:794] "SyncProxyRules complete" elapsed="149.63769ms"
I0623 20:44:22.416984 10 proxier.go:827] "Syncing iptables rules"
I0623 20:44:22.479002 10 proxier.go:794] "SyncProxyRules complete" elapsed="62.094749ms"
I0623 20:44:29.181700 10 proxier.go:827] "Syncing iptables rules"
I0623 20:44:29.313014 10 proxier.go:794] "SyncProxyRules complete" elapsed="131.422207ms"
I0623 20:44:29.313162 10 proxier.go:827] "Syncing iptables rules"
I0623 20:44:29.410206 10 proxier.go:794] "SyncProxyRules complete" elapsed="97.152093ms"
I0623 20:44:30.411246 10 proxier.go:827] "Syncing iptables rules"
I0623 20:44:30.436954 10 proxier.go:794] "SyncProxyRules complete" elapsed="25.794014ms"
I0623 20:44:36.741975 10 proxier.go:827] "Syncing iptables rules"
I0623 20:44:36.835657 10 proxier.go:794] "SyncProxyRules complete" elapsed="93.799941ms"
I0623 20:44:36.835749 10 proxier.go:827] "Syncing iptables rules"
I0623 20:44:36.876873 10 proxier.go:794] "SyncProxyRules complete" elapsed="41.186973ms"
I0623 20:44:43.263082 10 proxier.go:827] "Syncing iptables rules"
I0623 20:44:43.372554 10 proxier.go:794] "SyncProxyRules complete" elapsed="109.641193ms"
I0623 20:44:43.372735 10 proxier.go:827] "Syncing iptables rules"
I0623 20:44:43.408833 10 proxier.go:794] "SyncProxyRules complete" elapsed="36.243095ms"
I0623 20:45:08.814519 10 service.go:304] "Service updated ports" service="endpointslice-9645/example-empty-selector" portCount=1
I0623 20:45:08.814564 10 service.go:419] "Adding new service port" portName="endpointslice-9645/example-empty-selector:example" servicePort="100.68.222.150:80/TCP"
I0623 20:45:08.815598 10 proxier.go:827] "Syncing iptables rules"
I0623 20:45:08.870174 10 proxier.go:794] "SyncProxyRules complete" elapsed="55.609283ms"
I0623 20:45:08.870273 10 proxier.go:827] "Syncing iptables rules"
I0623 20:45:08.914521 10 proxier.go:794] "SyncProxyRules complete" elapsed="44.312911ms"
I0623 20:45:09.139350 10 service.go:304] "Service updated ports" service="endpointslice-9645/example-empty-selector" portCount=0
I0623 20:45:09.915322 10 service.go:444] "Removing service port" portName="endpointslice-9645/example-empty-selector:example"
I0623 20:45:09.915437 10 proxier.go:827] "Syncing iptables rules"
I0623 20:45:09.947647 10 proxier.go:794] "SyncProxyRules complete" elapsed="32.359211ms"
I0623 20:45:18.830255 10 service.go:304] "Service updated ports" service="webhook-5436/e2e-test-webhook" portCount=1
I0623 20:45:18.831507 10 service.go:419] "Adding new service port" portName="webhook-5436/e2e-test-webhook" servicePort="100.71.239.170:8443/TCP"
I0623 20:45:18.831621 10 proxier.go:827] "Syncing iptables rules"
I0623 20:45:18.887071 10 proxier.go:794] "SyncProxyRules complete" elapsed="56.744768ms"
I0623 20:45:18.887231 10 proxier.go:827] "Syncing iptables rules"
I0623 20:45:18.925794 10 proxier.go:794] "SyncProxyRules complete" elapsed="38.681989ms"
I0623 20:45:21.717674 10 service.go:304] "Service updated ports" service="webhook-5436/e2e-test-webhook" portCount=0
I0623 20:45:21.717706 10 service.go:444] "Removing service port" portName="webhook-5436/e2e-test-webhook"
I0623 20:45:21.717800 10 proxier.go:827] "Syncing iptables rules"
I0623 20:45:21.767127 10 proxier.go:794] "SyncProxyRules complete" elapsed="49.417213ms"
I0623 20:45:21.767294 10 proxier.go:827] "Syncing iptables rules"
I0623 20:45:21.793132 10 proxier.go:794] "SyncProxyRules complete" elapsed="25.917582ms"
I0623 20:45:22.904680 10 service.go:304] "Service updated ports" service="endpointslice-9225/example-int-port" portCount=1
I0623 20:45:22.904726 10 service.go:419] "Adding new service port" portName="endpointslice-9225/example-int-port:example" servicePort="100.71.162.227:80/TCP"
I0623 20:45:22.906130 10 proxier.go:827] "Syncing iptables rules"
I0623 20:45:22.962844 10 proxier.go:794] "SyncProxyRules complete" elapsed="58.120054ms"
I0623 20:45:23.014982 10 service.go:304] "Service updated ports" service="endpointslice-9225/example-named-port" portCount=1
I0623 20:45:23.125787 10 service.go:304] "Service updated ports" service="endpointslice-9225/example-no-match" portCount=1
I0623 20:45:23.963211 10 service.go:419] "Adding new service port" portName="endpointslice-9225/example-named-port:http" servicePort="100.65.238.44:80/TCP"
I0623 20:45:23.963238 10 service.go:419] "Adding new service port" portName="endpointslice-9225/example-no-match:example-no-match" servicePort="100.70.155.91:80/TCP"
I0623 20:45:23.963342 10 proxier.go:827] "Syncing iptables rules"
I0623 20:45:24.007390 10 proxier.go:794] "SyncProxyRules complete" elapsed="44.222263ms"
I0623 20:45:32.371574 10 proxier.go:827] "Syncing iptables rules"
I0623 20:45:32.415645 10 proxier.go:794] "SyncProxyRules complete" elapsed="44.170244ms"
I0623 20:45:33.777121 10 proxier.go:827] "Syncing iptables rules"
I0623 20:45:33.844242 10 proxier.go:794] "SyncProxyRules complete" elapsed="67.204786ms"
I0623 20:45:33.844372 10 proxier.go:827] "Syncing iptables rules"
I0623 20:45:33.916608 10 proxier.go:794] "SyncProxyRules complete" elapsed="72.329453ms"
I0623 20:45:49.713136 10 proxier.go:827] "Syncing iptables rules"
I0623 20:45:49.758734 10 proxier.go:794] "SyncProxyRules complete" elapsed="45.723328ms"
I0623 20:45:52.830207 10 service.go:304] "Service updated ports" service="kubectl-7653/agnhost-replica" portCount=1
I0623 20:45:52.830255 10 service.go:419] "Adding new service port" portName="kubectl-7653/agnhost-replica" servicePort="100.71.236.214:6379/TCP"
I0623 20:45:52.830538 10 proxier.go:827] "Syncing iptables rules"
I0623 20:45:52.886185 10 proxier.go:794] "SyncProxyRules complete" elapsed="55.931916ms"
I0623 20:45:52.886302 10 proxier.go:827] "Syncing iptables rules"
I0623 20:45:52.936934 10 proxier.go:794] "SyncProxyRules complete" elapsed="50.715038ms"
I0623 20:45:53.472289 10 service.go:304] "Service updated ports" service="kubectl-7653/agnhost-primary" portCount=1
I0623 20:45:53.937472 10 service.go:419] "Adding new service port" portName="kubectl-7653/agnhost-primary" servicePort="100.69.44.249:6379/TCP"
I0623 20:45:53.937563 10 proxier.go:827] "Syncing iptables rules"
I0623 20:45:53.977030 10 proxier.go:794] "SyncProxyRules complete" elapsed="39.581985ms"
I0623 20:45:54.122572 10 service.go:304] "Service updated ports" service="kubectl-7653/frontend" portCount=1
I0623 20:45:54.977923 10 service.go:419] "Adding new service port" portName="kubectl-7653/frontend" servicePort="100.67.26.185:80/TCP"
I0623 20:45:54.978086 10 proxier.go:827] "Syncing iptables rules"
I0623 20:45:55.030230 10 proxier.go:794] "SyncProxyRules complete" elapsed="52.336889ms"
I0623 20:45:56.031817 10 proxier.go:827] "Syncing iptables rules"
I0623 20:45:56.078141 10 proxier.go:794] "SyncProxyRules complete" elapsed="46.445761ms"
I0623 20:45:58.428702 10 proxier.go:827] "Syncing iptables rules"
I0623 20:45:58.479205 10 proxier.go:794] "SyncProxyRules complete" elapsed="50.593447ms"
I0623 20:45:59.332502 10 proxier.go:827] "Syncing iptables rules"
I0623 20:45:59.372781 10 proxier.go:794] "SyncProxyRules complete" elapsed="40.389075ms"
I0623 20:46:02.076922 10 proxier.go:827] "Syncing iptables rules"
I0623 20:46:02.114801 10 proxier.go:794] "SyncProxyRules complete" elapsed="37.951067ms"
I0623 20:46:02.564104 10 proxier.go:827] "Syncing iptables rules"
I0623 20:46:02.615262 10 proxier.go:794] "SyncProxyRules complete" elapsed="51.272863ms"
I0623 20:46:04.365981 10 proxier.go:827] "Syncing iptables rules"
I0623 20:46:04.400267 10 proxier.go:794] "SyncProxyRules complete" elapsed="34.383185ms"
I0623 20:46:05.162696 10 proxier.go:827] "Syncing iptables rules"
I0623 20:46:05.196809 10 proxier.go:794] "SyncProxyRules complete" elapsed="34.222735ms"
I0623 20:46:06.991535 10 service.go:304] "Service updated ports" service="kubectl-7653/agnhost-replica" portCount=0
I0623 20:46:06.991574 10 service.go:444] "Removing service port" portName="kubectl-7653/agnhost-replica"
I0623 20:46:06.991672 10 proxier.go:827] "Syncing iptables rules"
I0623 20:46:07.039205 10 proxier.go:794] "SyncProxyRules complete" elapsed="47.611462ms"
I0623 20:46:07.039431 10 proxier.go:827] "Syncing iptables rules"
I0623 20:46:07.065209 10 proxier.go:794] "SyncProxyRules complete" elapsed="25.973338ms"
I0623 20:46:07.539029 10 service.go:304] "Service updated ports" service="kubectl-7653/agnhost-primary" portCount=0
I0623 20:46:08.059229 10 service.go:304] "Service updated ports" service="kubectl-7653/frontend" portCount=0
I0623 20:46:08.059269 10 service.go:444] "Removing service port" portName="kubectl-7653/agnhost-primary"
I0623 20:46:08.059279 10 service.go:444] "Removing service port" portName="kubectl-7653/frontend"
I0623 20:46:08.059385 10 proxier.go:827] "Syncing iptables rules"
I0623 20:46:08.105348 10 proxier.go:794] "SyncProxyRules complete" elapsed="46.073372ms"
I0623 20:46:09.105568 10 proxier.go:827] "Syncing iptables rules"
I0623 20:46:09.223255 10 proxier.go:794] "SyncProxyRules complete" elapsed="117.808804ms"
I0623 20:46:09.938993 10 service.go:304] "Service updated ports" service="endpointslice-9225/example-int-port" portCount=0
I0623 20:46:09.949837 10 service.go:304] "Service updated ports" service="endpointslice-9225/example-named-port" portCount=0
I0623 20:46:09.967818 10 service.go:304] "Service updated ports" service="endpointslice-9225/example-no-match" portCount=0
I0623 20:46:10.223533 10 service.go:444] "Removing service port" portName="endpointslice-9225/example-int-port:example"
I0623 20:46:10.223559 10 service.go:444] "Removing service port" portName="endpointslice-9225/example-named-port:http"
I0623 20:46:10.223568 10 service.go:444] "Removing service port" portName="endpointslice-9225/example-no-match:example-no-match"
I0623 20:46:10.223693 10 proxier.go:827] "Syncing iptables rules"
I0623 20:46:10.269950 10 proxier.go:794] "SyncProxyRules complete" elapsed="46.443768ms"
I0623 20:46:42.339708 10 service.go:304] "Service updated ports" service="deployment-7705/test-rolling-update-with-lb" portCount=0
I0623 20:46:42.339742 10 service.go:444] "Removing service port" portName="deployment-7705/test-rolling-update-with-lb"
I0623 20:46:42.339843 10 proxier.go:827] "Syncing iptables rules"
I0623 20:46:42.387572 10 service_health.go:107] "Closing healthcheck" service="deployment-7705/test-rolling-update-with-lb" port=32659
I0623 20:46:42.387766 10 proxier.go:794] "SyncProxyRules complete" elapsed="48.022489ms"
E0623 20:46:42.389669 10 service_health.go:187] "Healthcheck closed" err="accept tcp [::]:32659: use of closed network connection" service="deployment-7705/test-rolling-update-with-lb"
I0623 20:46:53.827362 10 service.go:304] "Service updated ports" service="services-5554/service-proxy-toggled" portCount=1
I0623 20:46:53.827402 10 service.go:419] "Adding new service port" portName="services-5554/service-proxy-toggled" servicePort="100.66.99.114:80/TCP"
I0623 20:46:53.827493 10 proxier.go:827] "Syncing iptables rules"
I0623 20:46:53.861469 10 proxier.go:794] "SyncProxyRules complete" elapsed="34.069239ms"
I0623 20:46:53.861561 10 proxier.go:827] "Syncing iptables rules"
I0623 20:46:53.885472 10 proxier.go:794] "SyncProxyRules complete" elapsed="23.971356ms"
I0623 20:46:55.787925 10 proxier.go:827] "Syncing iptables rules"
I0623 20:46:55.850808 10 proxier.go:794] "SyncProxyRules complete" elapsed="62.974944ms"
I0623 20:46:56.286737 10 proxier.go:827] "Syncing iptables rules"
I0623 20:46:56.333275 10 proxier.go:794] "SyncProxyRules complete" elapsed="46.640472ms"
I0623 20:46:58.965072 10 proxier.go:827] "Syncing iptables rules"
I0623 20:46:58.995540 10 proxier.go:794] "SyncProxyRules complete" elapsed="30.576948ms"
I0623 20:47:17.988140 10 service.go:304] "Service updated ports" service="services-1379/sourceip-test" portCount=1
I0623 20:47:17.988186 10 service.go:419] "Adding new service port" portName="services-1379/sourceip-test" servicePort="100.65.24.199:8080/TCP"
I0623 20:47:17.988275 10 proxier.go:827] "Syncing iptables rules"
I0623 20:47:18.018972 10 proxier.go:794] "SyncProxyRules complete" elapsed="30.791008ms"
I0623 20:47:18.019189 10 proxier.go:827] "Syncing iptables rules"
I0623 20:47:18.044516 10 proxier.go:794] "SyncProxyRules complete" elapsed="25.514786ms"
I0623 20:47:22.100824 10 service.go:304] "Service updated ports" service="services-5554/service-proxy-toggled" portCount=0
I0623 20:47:22.100859 10 service.go:444] "Removing service port" portName="services-5554/service-proxy-toggled"
I0623 20:47:22.100954 10 proxier.go:827] "Syncing iptables rules"
I0623 20:47:22.190668 10 proxier.go:794] "SyncProxyRules complete" elapsed="89.802288ms"
I0623 20:47:22.190796 10 proxier.go:827] "Syncing iptables rules"
I0623 20:47:22.229810 10 proxier.go:794] "SyncProxyRules complete" elapsed="39.107339ms"
I0623 20:47:23.230510 10 proxier.go:827] "Syncing iptables rules"
I0623 20:47:23.263215 10 proxier.go:794] "SyncProxyRules complete" elapsed="32.803354ms"
I0623 20:47:24.131803 10 service.go:304] "Service updated ports" service="dns-9470/test-service-2" portCount=1
I0623 20:47:24.131850 10 service.go:419] "Adding new service port" portName="dns-9470/test-service-2:http" servicePort="100.68.154.148:80/TCP"
I0623 20:47:24.131940 10 proxier.go:827] "Syncing iptables rules"
I0623 20:47:24.166166 10 proxier.go:794] "SyncProxyRules complete" elapsed="34.320854ms"
I0623 20:47:25.166624 10 proxier.go:827] "Syncing iptables rules"
I0623 20:47:25.202612 10 proxier.go:794] "SyncProxyRules complete" elapsed="36.056415ms"
I0623 20:47:27.113004 10 service.go:304] "Service updated ports" service="services-7270/e2e-svc-a-7lqxb" portCount=1
I0623 20:47:27.113506 10 service.go:419] "Adding new service port" portName="services-7270/e2e-svc-a-7lqxb:http" servicePort="100.65.92.21:8001/TCP"
I0623 20:47:27.113803 10 proxier.go:827] "Syncing iptables rules"
I0623 20:47:27.148785 10 proxier.go:794] "SyncProxyRules complete" elapsed="35.292735ms"
I0623 20:47:27.234524 10 service.go:304] "Service updated ports" service="services-7270/e2e-svc-b-mdgln" portCount=1
I0623 20:47:27.234563 10 service.go:419] "Adding new service port" portName="services-7270/e2e-svc-b-mdgln:http" servicePort="100.68.217.207:8002/TCP"
I0623 20:47:27.234622 10 proxier.go:827] "Syncing iptables rules"
I0623 20:47:27.310698 10 proxier.go:794] "SyncProxyRules complete" elapsed="76.131362ms"
I0623 20:47:27.341724 10 service.go:304] "Service updated ports" service="services-7270/e2e-svc-c-q426s" portCount=1
I0623 20:47:27.558631 10 service.go:304] "Service updated ports" service="services-7270/e2e-svc-a-7lqxb" portCount=0
I0623 20:47:27.572507 10 service.go:304] "Service updated ports" service="services-7270/e2e-svc-b-mdgln" portCount=0
I0623 20:47:28.155139 10 service.go:444] "Removing service port" portName="services-7270/e2e-svc-b-mdgln:http"
I0623 20:47:28.155173 10 service.go:419] "Adding new service port" portName="services-7270/e2e-svc-c-q426s:http" servicePort="100.71.168.217:8003/TCP"
I0623 20:47:28.155184 10 service.go:444] "Removing service port" portName="services-7270/e2e-svc-a-7lqxb:http"
I0623 20:47:28.155298 10 proxier.go:827] "Syncing iptables rules"
I0623 20:47:28.263388 10 proxier.go:794] "SyncProxyRules complete" elapsed="108.258595ms"
I0623 20:47:30.253589 10 service.go:304] "Service updated ports" service="services-5554/service-proxy-toggled" portCount=1
I0623 20:47:30.253630 10 service.go:419] "Adding new service port" portName="services-5554/service-proxy-toggled" servicePort="100.66.99.114:80/TCP"
I0623 20:47:30.253724 10 proxier.go:827] "Syncing iptables rules"
I0623 20:47:30.288449 10 proxier.go:794] "SyncProxyRules complete" elapsed="34.820235ms"
I0623 20:47:30.288537 10 proxier.go:827] "Syncing iptables rules"
I0623 20:47:30.319265 10 proxier.go:794] "SyncProxyRules complete" elapsed="30.7866ms"
I0623 20:47:33.016409 10 service.go:304] "Service updated ports" service="services-7270/e2e-svc-c-q426s" portCount=0
I0623 20:47:33.016448 10 service.go:444] "Removing service port" portName="services-7270/e2e-svc-c-q426s:http"
I0623 20:47:33.016548 10 proxier.go:827] "Syncing iptables rules"
I0623 20:47:33.074411 10 proxier.go:794] "SyncProxyRules complete" elapsed="57.952378ms"
I0623 20:47:36.026568 10 proxier.go:827] "Syncing iptables rules"
I0623 20:47:36.069494 10 proxier.go:794] "SyncProxyRules complete" elapsed="43.023958ms"
I0623 20:47:36.069615 10 proxier.go:827] "Syncing iptables rules"
I0623 20:47:36.111198 10 proxier.go:794] "SyncProxyRules complete" elapsed="41.668078ms"
I0623 20:47:36.135255 10 service.go:304] "Service updated ports" service="services-1379/sourceip-test" portCount=0
I0623 20:47:37.111775 10 service.go:444] "Removing service port" portName="services-1379/sourceip-test"
I0623 20:47:37.111898 10 proxier.go:827] "Syncing iptables rules"
I0623 20:47:37.150706 10 proxier.go:794] "SyncProxyRules complete" elapsed="38.945968ms"
I0623 20:47:56.086999 10 service.go:304] "Service updated ports" service="services-3166/externalname-service" portCount=1
I0623 20:47:56.087050 10 service.go:419] "Adding new service port" portName="services-3166/externalname-service:http" servicePort="100.67.45.245:80/TCP"
I0623 20:47:56.087754 10 proxier.go:827] "Syncing iptables rules"
I0623 20:47:56.142488 10 proxier.go:794] "SyncProxyRules complete" elapsed="55.4433ms"
I0623 20:47:56.142608 10 proxier.go:827] "Syncing iptables rules"
I0623 20:47:56.250330 10 proxier.go:794] "SyncProxyRules complete" elapsed="107.801944ms"
I0623 20:47:58.263787 10 proxier.go:827] "Syncing iptables rules"
I0623 20:47:58.292666 10 proxier.go:794] "SyncProxyRules complete" elapsed="29.038343ms"
I0623 20:48:00.181392 10 proxier.go:827] "Syncing iptables rules"
I0623 20:48:00.230552 10 proxier.go:794] "SyncProxyRules complete" elapsed="49.266815ms"
I0623 20:48:00.554761 10 proxier.go:827] "Syncing iptables rules"
I0623 20:48:00.606153 10 proxier.go:794] "SyncProxyRules complete" elapsed="51.499837ms"
I0623 20:48:00.619366 10 service.go:304] "Service updated ports" service="services-5554/service-proxy-toggled" portCount=0
I0623 20:48:01.606375 10 service.go:444] "Removing service port" portName="services-5554/service-proxy-toggled"
I0623 20:48:01.606523 10 proxier.go:827] "Syncing iptables rules"
I0623 20:48:01.648110 10 proxier.go:794] "SyncProxyRules complete" elapsed="41.748105ms"
I0623 20:48:02.400027 10 proxier.go:827] "Syncing iptables rules"
I0623 20:48:02.478863 10 proxier.go:794] "SyncProxyRules complete" elapsed="78.961716ms"
I0623 20:48:02.499998 10 service.go:304] "Service updated ports" service="dns-9470/test-service-2" portCount=0
I0623 20:48:03.480024 10 service.go:444] "Removing service port" portName="dns-9470/test-service-2:http"
I0623 20:48:03.480140 10 proxier.go:827] "Syncing iptables rules"
I0623 20:48:03.517960 10 proxier.go:794] "SyncProxyRules complete" elapsed="37.963622ms"
I0623 20:48:10.687513 10 service.go:304] "Service updated ports" service="services-3166/externalname-service" portCount=0
I0623 20:48:10.687560 10 service.go:444] "Removing service port" portName="services-3166/externalname-service:http"
I0623 20:48:10.687651 10 proxier.go:827] "Syncing iptables rules"
I0623 20:48:10.720551 10 proxier.go:794] "SyncProxyRules complete" elapsed="32.995111ms"
I0623 20:48:10.720772 10 proxier.go:827] "Syncing iptables rules"
I0623 20:48:10.746490 10 proxier.go:794] "SyncProxyRules complete" elapsed="25.910961ms"
I0623 20:48:14.177605 10 service.go:304] "Service updated ports" service="services-4421/hairpin-test" portCount=1
I0623 20:48:14.177643 10 service.go:419] "Adding new service port" portName="services-4421/hairpin-test" servicePort="100.65.193.127:8080/TCP"
I0623 20:48:14.177732 10 proxier.go:827] "Syncing iptables rules"
I0623 20:48:14.206905 10 proxier.go:794] "SyncProxyRules complete" elapsed="29.267161ms"
I0623 20:48:14.207091 10 proxier.go:827] "Syncing iptables rules"
I0623 20:48:14.231201 10 proxier.go:794] "SyncProxyRules complete" elapsed="24.267839ms"
I0623 20:48:15.040326 10 service.go:304] "Service updated ports" service="crd-webhook-1988/e2e-test-crd-conversion-webhook" portCount=1
I0623 20:48:15.231351 10 service.go:419] "Adding new service port" portName="crd-webhook-1988/e2e-test-crd-conversion-webhook" servicePort="100.69.123.237:9443/TCP"
I0623 20:48:15.231455 10 proxier.go:827] "Syncing iptables rules"
I0623 20:48:15.259581 10 proxier.go:794] "SyncProxyRules complete" elapsed="28.261988ms"
I0623 20:48:20.051300 10 service.go:304] "Service updated ports" service="crd-webhook-1988/e2e-test-crd-conversion-webhook" portCount=0
I0623 20:48:20.051335 10 service.go:444] "Removing service port" portName="crd-webhook-1988/e2e-test-crd-conversion-webhook"
I0623 20:48:20.051426 10 proxier.go:827] "Syncing iptables rules"
I0623 20:48:20.142741 10 proxier.go:794] "SyncProxyRules complete" elapsed="91.39676ms"
I0623 20:48:20.142870 10 proxier.go:827] "Syncing iptables rules"
I0623 20:48:20.174622 10 proxier.go:794] "SyncProxyRules complete" elapsed="31.842171ms"
I0623 20:48:21.174836 10 proxier.go:827] "Syncing iptables rules"
I0623 20:48:21.230604 10 proxier.go:794] "SyncProxyRules complete" elapsed="55.85541ms"
I0623 20:48:32.234609 10 service.go:304] "Service updated ports" service="services-4421/hairpin-test" portCount=0
I0623 20:48:32.234644 10 service.go:444] "Removing service port" portName="services-4421/hairpin-test"
I0623 20:48:32.234735 10 proxier.go:827] "Syncing iptables rules"
I0623 20:48:32.282631 10 proxier.go:794] "SyncProxyRules complete" elapsed="47.977384ms"
I0623 20:48:32.282759 10 proxier.go:827] "Syncing iptables rules"
I0623 20:48:32.325564 10 proxier.go:794] "SyncProxyRules complete" elapsed="42.893779ms"
I0623 20:49:16.234289 10 service.go:304] "Service updated ports" service="services-3970/affinity-clusterip-transition" portCount=1
I0623 20:49:16.234334 10 service.go:419] "Adding new service port" portName="services-3970/affinity-clusterip-transition" servicePort="100.71.253.32:80/TCP"
I0623 20:49:16.234426 10 proxier.go:827] "Syncing iptables rules"
I0623 20:49:16.287495 10 proxier.go:794] "SyncProxyRules complete" elapsed="53.160671ms"
I0623 20:49:16.287610 10 proxier.go:827] "Syncing iptables rules"
I0623 20:49:16.325994 10 proxier.go:794] "SyncProxyRules complete" elapsed="38.46693ms"
I0623 20:49:21.180653 10 proxier.go:827] "Syncing iptables rules"
I0623 20:49:21.221591 10 proxier.go:794] "SyncProxyRules complete" elapsed="41.052133ms"
I0623 20:49:23.864184 10 service.go:304] "Service updated ports" service="webhook-1103/e2e-test-webhook" portCount=1
I0623 20:49:23.864552 10 service.go:419] "Adding new service port" portName="webhook-1103/e2e-test-webhook" servicePort="100.64.122.216:8443/TCP"
I0623 20:49:23.864660 10 proxier.go:827] "Syncing iptables rules"
I0623 20:49:23.901437 10 proxier.go:794] "SyncProxyRules complete" elapsed="36.896272ms"
I0623 20:49:23.901555 10 proxier.go:827] "Syncing iptables rules"
I0623 20:49:23.947430 10 proxier.go:794] "SyncProxyRules complete" elapsed="45.962379ms"
I0623 20:49:25.952944 10 proxier.go:827] "Syncing iptables rules"
I0623 20:49:26.004600 10 proxier.go:794] "SyncProxyRules complete" elapsed="51.761872ms"
I0623 20:49:27.562319 10 service.go:304] "Service updated ports" service="webhook-1103/e2e-test-webhook" portCount=0
I0623 20:49:27.562351 10 service.go:444] "Removing service port" portName="webhook-1103/e2e-test-webhook"
I0623 20:49:27.562414 10 proxier.go:827] "Syncing iptables rules"
I0623 20:49:27.624358 10 proxier.go:794] "SyncProxyRules complete" elapsed="61.995754ms"
I0623 20:49:27.624481 10 proxier.go:827] "Syncing iptables rules"
I0623 20:49:27.653330 10 proxier.go:794] "SyncProxyRules complete" elapsed="28.940591ms"
I0623 20:49:37.941595 10 proxier.go:827] "Syncing iptables rules"
I0623 20:49:37.998553 10 proxier.go:794] "SyncProxyRules complete" elapsed="57.067138ms"
I0623 20:49:54.500060 10 service.go:304] "Service updated ports" service="services-3970/affinity-clusterip-transition" portCount=1
I0623 20:49:54.500104 10 service.go:421] "Updating existing service port" portName="services-3970/affinity-clusterip-transition" servicePort="100.71.253.32:80/TCP"
I0623 20:49:54.500197 10 proxier.go:827] "Syncing iptables rules"
I0623 20:49:54.551946 10 proxier.go:794] "SyncProxyRules complete" elapsed="51.841508ms"
I0623 20:49:54.604462 10 service.go:304] "Service updated ports" service="webhook-4362/e2e-test-webhook" portCount=1
I0623 20:49:54.604503 10 service.go:419] "Adding new service port" portName="webhook-4362/e2e-test-webhook" servicePort="100.65.181.95:8443/TCP"
I0623 20:49:54.604595 10 proxier.go:827] "Syncing iptables rules"
I0623 20:49:54.654249 10 proxier.go:794] "SyncProxyRules complete" elapsed="49.748601ms"
I0623 20:49:55.655340 10 proxier.go:827] "Syncing iptables rules"
I0623 20:49:55.722538 10 proxier.go:794] "SyncProxyRules complete" elapsed="67.310741ms"
I0623 20:49:56.471947 10 service.go:304] "Service updated ports" service="services-3970/affinity-clusterip-transition" portCount=1
I0623 20:49:56.723191 10 service.go:421] "Updating existing service port" portName="services-3970/affinity-clusterip-transition" servicePort="100.71.253.32:80/TCP"
I0623 20:49:56.723393 10 proxier.go:827] "Syncing iptables rules"
I0623 20:49:56.752853 10 proxier.go:794] "SyncProxyRules complete" elapsed="29.689013ms"
I0623 20:49:57.086203 10 service.go:304] "Service updated ports" service="services-8531/test-service-xxk9r" portCount=1
I0623 20:49:57.406965 10 service.go:304] "Service updated ports" service="services-8531/test-service-xxk9r" portCount=1
I0623 20:49:57.734773 10 service.go:304] "Service updated ports" service="services-8531/test-service-xxk9r" portCount=1
I0623 20:49:57.734832 10 service.go:419] "Adding new service port" portName="services-8531/test-service-xxk9r:http" servicePort="100.67.61.137:80/TCP"
I0623 20:49:57.735016 10 proxier.go:827] "Syncing iptables rules"
I0623 20:49:57.786197 10 proxier.go:794] "SyncProxyRules complete" elapsed="51.373307ms"
I0623 20:49:57.952614 10 service.go:304] "Service updated ports" service="services-8531/test-service-xxk9r" portCount=1
I0623 20:49:58.018852 10 service.go:304] "Service updated ports" service="webhook-4362/e2e-test-webhook" portCount=0
I0623 20:49:58.167132 10 service.go:304] "Service updated ports" service="services-8531/test-service-xxk9r" portCount=0
I0623 20:49:58.786613 10 service.go:444] "Removing service port" portName="services-8531/test-service-xxk9r:http"
I0623 20:49:58.786635 10 service.go:444] "Removing service port" portName="webhook-4362/e2e-test-webhook"
I0623 20:49:58.786801 10 proxier.go:827] "Syncing iptables rules"
I0623 20:49:58.835110 10 proxier.go:794] "SyncProxyRules complete" elapsed="48.5253ms"
I0623 20:50:00.955762 10 service.go:304] "Service updated ports" service="webhook-4966/e2e-test-webhook" portCount=1
I0623 20:50:00.955802 10 service.go:419] "Adding new service port" portName="webhook-4966/e2e-test-webhook" servicePort="100.69.197.22:8443/TCP"
I0623 20:50:00.955901 10 proxier.go:827] "Syncing iptables rules"
I0623 20:50:00.996910 10 proxier.go:794] "SyncProxyRules complete" elapsed="41.094649ms"
I0623 20:50:00.997038 10 proxier.go:827] "Syncing iptables rules"
I0623 20:50:01.032419 10 proxier.go:794] "SyncProxyRules complete" elapsed="35.465992ms"
I0623 20:50:14.083528 10 service.go:304] "Service updated ports" service="webhook-4966/e2e-test-webhook" portCount=0
I0623 20:50:14.083562 10 service.go:444] "Removing service port" portName="webhook-4966/e2e-test-webhook"
I0623 20:50:14.083657 10 proxier.go:827] "Syncing iptables rules"
I0623 20:50:14.122168 10 proxier.go:794] "SyncProxyRules complete" elapsed="38.602662ms"
I0623 20:50:14.122364 10 proxier.go:827] "Syncing iptables rules"
I0623 20:50:14.151171 10 proxier.go:794] "SyncProxyRules complete" elapsed="28.976366ms"
I0623 20:50:26.695604 10 service.go:304] "Service updated ports" service="services-2567/nodeport-test" portCount=1
I0623 20:50:26.695645 10 service.go:419] "Adding new service port" portName="services-2567/nodeport-test:http" servicePort="100.70.90.16:80/TCP"
I0623 20:50:26.695739 10 proxier.go:827] "Syncing iptables rules"
I0623 20:50:26.731130 10 proxier.go:1604] "Opened local port" port={Description:nodePort for services-2567/nodeport-test:http IP: IPFamily:4 Port:31661 Protocol:TCP}
E0623 20:50:26.731189 10 proxier.go:1600] "can't open port, skipping it" err="listen tcp4 :31661: bind: address already in use" port={Description:nodePort for services-2567/nodeport-test:http IP: IPFamily:4 Port:31661 Protocol:TCP}
I0623 20:50:26.736922 10 proxier.go:794] "SyncProxyRules complete" elapsed="41.282617ms"
I0623 20:50:26.737020 10 proxier.go:827] "Syncing iptables rules"
I0623 20:50:26.768367 10 proxier.go:794] "SyncProxyRules complete" elapsed="31.419676ms"
I0623 20:50:27.134878 10 service.go:304] "Service updated ports" service="services-8188/nodeport-service" portCount=1
I0623 20:50:27.247773 10 service.go:304] "Service updated ports" service="services-8188/externalsvc" portCount=1
I0623 20:50:27.768761 10 service.go:419] "Adding new service port" portName="services-8188/nodeport-service" servicePort="100.71.105.87:80/TCP"
I0623 20:50:27.768790 10 service.go:419] "Adding new service port" portName="services-8188/externalsvc" servicePort="100.70.186.196:80/TCP"
I0623 20:50:27.768960 10 proxier.go:827] "Syncing iptables rules"
I0623 20:50:27.829048 10 proxier.go:1604] "Opened local port" port={Description:nodePort for services-8188/nodeport-service IP: IPFamily:4 Port:31297 Protocol:TCP}
E0623 20:50:27.829113 10 proxier.go:1600] "can't open port, skipping it" err="listen tcp4 :31297: bind: address already in use" port={Description:nodePort for services-8188/nodeport-service IP: IPFamily:4 Port:31297 Protocol:TCP}
I0623 20:50:27.842142 10 proxier.go:794] "SyncProxyRules complete" elapsed="73.447075ms"
I0623 20:50:29.430064 10 proxier.go:827] "Syncing iptables rules"
I0623 20:50:29.459445 10 proxier.go:794] "SyncProxyRules complete" elapsed="29.509518ms"
I0623 20:50:30.441283 10 proxier.go:827] "Syncing iptables rules"
I0623 20:50:30.470994 10 proxier.go:794] "SyncProxyRules complete" elapsed="29.799034ms"
I0623 20:50:31.471684 10 proxier.go:827] "Syncing iptables rules"
I0623 20:50:31.508533 10 proxier.go:794] "SyncProxyRules complete" elapsed="36.998131ms"
I0623 20:50:33.941882 10 service.go:304] "Service updated ports" service="services-8188/nodeport-service" portCount=0
I0623 20:50:33.941918 10 service.go:444] "Removing service port" portName="services-8188/nodeport-service"
I0623 20:50:33.942014 10 proxier.go:827] "Syncing iptables rules"
I0623 20:50:34.078933 10 proxier.go:794] "SyncProxyRules complete" elapsed="137.004058ms"
I0623 20:50:34.079048 10 proxier.go:827] "Syncing iptables rules"
I0623 20:50:34.177327 10 proxier.go:794] "SyncProxyRules complete" elapsed="98.359553ms"
I0623 20:50:36.374563 10 service.go:304] "Service updated ports" service="webhook-9460/e2e-test-webhook" portCount=1
I0623 20:50:36.374609 10 service.go:419] "Adding new service port" portName="webhook-9460/e2e-test-webhook" servicePort="100.64.104.148:8443/TCP"
I0623 20:50:36.374743 10 proxier.go:827] "Syncing iptables rules"
I0623 20:50:36.419844 10 proxier.go:794] "SyncProxyRules complete" elapsed="45.232599ms"
I0623 20:50:36.420019 10 proxier.go:827] "Syncing iptables rules"
I0623 20:50:36.470535 10 proxier.go:794] "SyncProxyRules complete" elapsed="50.631909ms"
I0623 20:50:39.804998 10 proxier.go:827] "Syncing iptables rules"
I0623 20:50:39.841370 10 proxier.go:794] "SyncProxyRules complete" elapsed="36.498848ms"
I0623 20:50:41.504290 10 proxier.go:827] "Syncing iptables rules"
I0623 20:50:41.578829 10 proxier.go:794] "SyncProxyRules complete" elapsed="74.671332ms"
I0623 20:50:41.578949 10 proxier.go:827] "Syncing iptables rules"
I0623 20:50:41.628021 10 proxier.go:794] "SyncProxyRules complete" elapsed="49.154867ms"
I0623 20:50:41.653890 10 service.go:304] "Service updated ports" service="webhook-9460/e2e-test-webhook" portCount=0
I0623 20:50:41.910492 10 service.go:304] "Service updated ports" service="services-8188/externalsvc" portCount=0
I0623 20:50:42.628642 10 service.go:444] "Removing service port" portName="webhook-9460/e2e-test-webhook"
I0623 20:50:42.628677 10 service.go:444] "Removing service port" portName="services-8188/externalsvc"
I0623 20:50:42.628825 10 proxier.go:827] "Syncing iptables rules"
I0623 20:50:42.733034 10 proxier.go:794] "SyncProxyRules complete" elapsed="104.368044ms"
I0623 20:50:48.452150 10 service.go:304] "Service updated ports" service="services-2567/nodeport-test" portCount=0
I0623 20:50:48.452184 10 service.go:444] "Removing service port" portName="services-2567/nodeport-test:http"
I0623 20:50:48.452308 10 proxier.go:827] "Syncing iptables rules"
I0623 20:50:48.506400 10 proxier.go:794] "SyncProxyRules complete" elapsed="54.208047ms"
I0623 20:50:48.506525 10 proxier.go:827] "Syncing iptables rules"
I0623 20:50:48.555348 10 proxier.go:794] "SyncProxyRules complete" elapsed="48.911313ms"
I0623 20:51:27.416257 10 service.go:304] "Service updated ports" service="aggregator-3684/sample-api" portCount=1
I0623 20:51:27.416303 10 service.go:419] "Adding new service port" portName="aggregator-3684/sample-api" servicePort="100.64.60.129:7443/TCP"
I0623 20:51:27.416881 10 proxier.go:827] "Syncing iptables rules"
I0623 20:51:27.449676 10 proxier.go:794] "SyncProxyRules complete" elapsed="33.376ms"
I0623 20:51:27.449849 10 proxier.go:827] "Syncing iptables rules"
I0623 20:51:27.484696 10 proxier.go:794] "SyncProxyRules complete" elapsed="34.987974ms"
I0623 20:51:41.514085 10 proxier.go:827] "Syncing iptables rules"
I0623 20:51:41.570527 10 proxier.go:794] "SyncProxyRules complete" elapsed="56.579271ms"
I0623 20:51:43.342403 10 service.go:304] "Service updated ports" service="dns-1013/test-service-2" portCount=1
I0623 20:51:43.342462 10 service.go:419] "Adding new service port" portName="dns-1013/test-service-2:http" servicePort="100.69.186.173:80/TCP"
I0623 20:51:43.342570 10 proxier.go:827] "Syncing iptables rules"
I0623 20:51:43.419288 10 proxier.go:794] "SyncProxyRules complete" elapsed="76.830379ms"
I0623 20:51:43.419410 10 proxier.go:827] "Syncing iptables rules"
I0623 20:51:43.463111 10 proxier.go:794] "SyncProxyRules complete" elapsed="43.78421ms"
I0623 20:51:44.526410 10 proxier.go:827] "Syncing iptables rules"
I0623 20:51:44.608678 10 proxier.go:794] "SyncProxyRules complete" elapsed="82.3976ms"
I0623 20:51:44.719256 10 service.go:304] "Service updated ports" service="aggregator-3684/sample-api" portCount=0
I0623 20:51:45.608981 10 service.go:444] "Removing service port" portName="aggregator-3684/sample-api"
I0623 20:51:45.609117 10 proxier.go:827] "Syncing iptables rules"
I0623 20:51:45.645178 10 proxier.go:794] "SyncProxyRules complete" elapsed="36.210877ms"
I0623 20:51:46.646085 10 proxier.go:827] "Syncing iptables rules"
I0623 20:51:46.684800 10 proxier.go:794] "SyncProxyRules complete" elapsed="38.826319ms"
I0623 20:51:47.685304 10 proxier.go:827] "Syncing iptables rules"
I0623 20:51:47.745255 10 proxier.go:794] "SyncProxyRules complete" elapsed="60.071248ms"
I0623 20:51:49.032360 10 proxier.go:827] "Syncing iptables rules"
I0623 20:51:49.072268 10 proxier.go:794] "SyncProxyRules complete" elapsed="40.010384ms"
I0623 20:51:50.176273 10 proxier.go:827] "Syncing iptables rules"
I0623 20:51:50.206473 10 proxier.go:794] "SyncProxyRules complete" elapsed="30.295296ms"
I0623 20:51:56.492501 10 proxier.go:827] "Syncing iptables rules"
I0623 20:51:56.530622 10 proxier.go:794] "SyncProxyRules complete" elapsed="38.264441ms"
I0623 20:51:59.137266 10 service.go:304] "Service updated ports" service="services-2298/service-headless-toggled" portCount=1
I0623 20:51:59.137329 10 service.go:419] "Adding new service port" portName="services-2298/service-headless-toggled" servicePort="100.68.99.227:80/TCP"
I0623 20:51:59.137444 10 proxier.go:827] "Syncing iptables rules"
I0623 20:51:59.196997 10 proxier.go:794] "SyncProxyRules complete" elapsed="59.66693ms"
I0623 20:51:59.197091 10 proxier.go:827] "Syncing iptables rules"
I0623 20:51:59.224724 10 proxier.go:794] "SyncProxyRules complete" elapsed="27.696104ms"
I0623 20:52:01.458739 10 proxier.go:827] "Syncing iptables rules"
I0623 20:52:01.506248 10 proxier.go:794] "SyncProxyRules complete" elapsed="47.703775ms"
I0623 20:52:02.193463 10 proxier.go:827] "Syncing iptables rules"
I0623 20:52:02.225549 10 proxier.go:794] "SyncProxyRules complete" elapsed="32.192083ms"
I0623 20:52:02.460917 10 proxier.go:827] "Syncing iptables rules"
I0623 20:52:02.493896 10 proxier.go:794] "SyncProxyRules complete" elapsed="33.083546ms"
I0623 20:52:03.494542 10 proxier.go:827] "Syncing iptables rules"
I0623 20:52:03.535056 10 proxier.go:794] "SyncProxyRules complete" elapsed="40.668598ms"
I0623 20:52:04.535588 10 proxier.go:827] "Syncing iptables rules"
I0623 20:52:04.571288 10 proxier.go:794] "SyncProxyRules complete" elapsed="35.798731ms"
I0623 20:52:05.689755 10 proxier.go:827] "Syncing iptables rules"
I0623 20:52:05.719342 10 proxier.go:794] "SyncProxyRules complete" elapsed="29.698462ms"
I0623 20:52:06.235326 10 service.go:304] "Service updated ports" service="services-3970/affinity-clusterip-transition" portCount=0
I0623 20:52:06.719849 10 service.go:444] "Removing service port" portName="services-3970/affinity-clusterip-transition"
I0623 20:52:06.720071 10 proxier.go:827] "Syncing iptables rules"
I0623 20:52:06.750205 10 proxier.go:794] "SyncProxyRules complete" elapsed="30.37207ms"
I0623 20:52:07.752051 10 proxier.go:827] "Syncing iptables rules"
I0623 20:52:07.820575 10 proxier.go:794] "SyncProxyRules complete" elapsed="68.668928ms"
I0623 20:52:12.345896 10 service.go:304] "Service updated ports" service="services-690/endpoint-test2" portCount=1
I0623 20:52:12.345942 10 service.go:419] "Adding new service port" portName="services-690/endpoint-test2" servicePort="100.71.229.33:80/TCP"
I0623 20:52:12.346076 10 proxier.go:827] "Syncing iptables rules"
I0623 20:52:12.383674 10 proxier.go:794] "SyncProxyRules complete" elapsed="37.736834ms"
I0623 20:52:12.383901 10 proxier.go:827] "Syncing iptables rules"
I0623 20:52:12.417667 10 proxier.go:794] "SyncProxyRules complete" elapsed="33.955551ms"
I0623 20:52:15.104440 10 proxier.go:827] "Syncing iptables rules"
I0623 20:52:15.145870 10 proxier.go:794] "SyncProxyRules complete" elapsed="41.578926ms"
I0623 20:52:18.113363 10 service.go:304] "Service updated ports" service="sctp-8452/sctp-endpoint-test" portCount=1
I0623 20:52:18.113410 10 service.go:419] "Adding new service port" portName="sctp-8452/sctp-endpoint-test" servicePort="100.70.62.41:5060/SCTP"
I0623 20:52:18.113509 10 proxier.go:827] "Syncing iptables rules"
I0623 20:52:18.168383 10 proxier.go:794] "SyncProxyRules complete" elapsed="54.972682ms"
I0623 20:52:18.168558 10 proxier.go:827] "Syncing iptables rules"
I0623 20:52:18.195942 10 proxier.go:794] "SyncProxyRules complete" elapsed="27.523733ms"
I0623 20:52:21.601370 10 service.go:304] "Service updated ports" service="services-477/clusterip-service" portCount=1
I0623 20:52:21.601413 10 service.go:419] "Adding new service port" portName="services-477/clusterip-service" servicePort="100.71.170.12:80/TCP"
I0623 20:52:21.601510 10 proxier.go:827] "Syncing iptables rules"
I0623 20:52:21.637686 10 proxier.go:794] "SyncProxyRules complete" elapsed="36.271787ms"
I0623 20:52:21.637815 10 proxier.go:827] "Syncing iptables rules"
I0623 20:52:21.678860 10 proxier.go:794] "SyncProxyRules complete" elapsed="41.140008ms"
I0623 20:52:21.713585 10 service.go:304] "Service
updated ports\" service=\"services-477/externalsvc\" portCount=1\nI0623 20:52:22.679281 10 service.go:419] \"Adding new service port\" portName=\"services-477/externalsvc\" servicePort=\"100.66.82.124:80/TCP\"\nI0623 20:52:22.679399 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:52:22.806852 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"127.605877ms\"\nI0623 20:52:23.221948 10 service.go:304] \"Service updated ports\" service=\"dns-1013/test-service-2\" portCount=0\nI0623 20:52:23.807133 10 service.go:444] \"Removing service port\" portName=\"dns-1013/test-service-2:http\"\nI0623 20:52:23.807383 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:52:23.833308 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"26.191047ms\"\nI0623 20:52:24.834219 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:52:24.861463 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"27.371819ms\"\nI0623 20:52:25.344691 10 service.go:304] \"Service updated ports\" service=\"services-2298/service-headless-toggled\" portCount=0\nI0623 20:52:25.861560 10 service.go:444] \"Removing service port\" portName=\"services-2298/service-headless-toggled\"\nI0623 20:52:25.861719 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:52:25.888193 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"26.653825ms\"\nI0623 20:52:26.888423 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:52:26.918667 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"30.358469ms\"\nI0623 20:52:28.296583 10 service.go:304] \"Service updated ports\" service=\"services-477/clusterip-service\" portCount=0\nI0623 20:52:28.296625 10 service.go:444] \"Removing service port\" portName=\"services-477/clusterip-service\"\nI0623 20:52:28.296838 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:52:28.330665 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"34.037764ms\"\nI0623 20:52:29.330852 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:52:29.375479 10 
proxier.go:794] \"SyncProxyRules complete\" elapsed=\"44.694928ms\"\nI0623 20:52:30.482886 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:52:30.515201 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"32.424991ms\"\nI0623 20:52:31.134402 10 service.go:304] \"Service updated ports\" service=\"services-2298/service-headless-toggled\" portCount=1\nI0623 20:52:31.134464 10 service.go:419] \"Adding new service port\" portName=\"services-2298/service-headless-toggled\" servicePort=\"100.68.99.227:80/TCP\"\nI0623 20:52:31.134571 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:52:31.190548 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"56.103057ms\"\nI0623 20:52:35.543080 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:52:35.614989 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"72.048293ms\"\nI0623 20:52:35.615114 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:52:35.661541 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"46.517935ms\"\nI0623 20:52:36.662716 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:52:36.713059 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"50.49081ms\"\nI0623 20:52:37.646865 10 service.go:304] \"Service updated ports\" service=\"sctp-8452/sctp-endpoint-test\" portCount=0\nI0623 20:52:37.646899 10 service.go:444] \"Removing service port\" portName=\"sctp-8452/sctp-endpoint-test\"\nI0623 20:52:37.647029 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:52:37.696912 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"50.007827ms\"\nI0623 20:52:38.697756 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:52:38.723526 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"25.862245ms\"\nI0623 20:52:39.686955 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:52:39.793183 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"105.834815ms\"\nI0623 20:52:39.859131 10 service.go:304] \"Service updated ports\" 
service=\"services-690/endpoint-test2\" portCount=0\nI0623 20:52:40.023492 10 service.go:304] \"Service updated ports\" service=\"services-477/externalsvc\" portCount=0\nI0623 20:52:40.793321 10 service.go:444] \"Removing service port\" portName=\"services-477/externalsvc\"\nI0623 20:52:40.793344 10 service.go:444] \"Removing service port\" portName=\"services-690/endpoint-test2\"\nI0623 20:52:40.793596 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:52:40.837580 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"44.272166ms\"\nI0623 20:52:51.056288 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:52:51.148848 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"92.666906ms\"\nI0623 20:52:51.149683 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:52:51.208656 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"59.770739ms\"\nI0623 20:52:51.268466 10 service.go:304] \"Service updated ports\" service=\"services-2298/service-headless-toggled\" portCount=0\nI0623 20:52:52.208834 10 service.go:444] \"Removing service port\" portName=\"services-2298/service-headless-toggled\"\nI0623 20:52:52.208993 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:52:52.262173 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"53.375043ms\"\nI0623 20:52:59.621722 10 service.go:304] \"Service updated ports\" service=\"webhook-9845/e2e-test-webhook\" portCount=1\nI0623 20:52:59.621759 10 service.go:419] \"Adding new service port\" portName=\"webhook-9845/e2e-test-webhook\" servicePort=\"100.70.93.105:8443/TCP\"\nI0623 20:52:59.621854 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:52:59.656119 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"34.357759ms\"\nI0623 20:52:59.656237 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:52:59.696944 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"40.794322ms\"\nI0623 20:53:00.554307 10 service.go:304] \"Service updated ports\" service=\"services-5186/affinity-nodeport\" 
portCount=1\nI0623 20:53:00.698528 10 service.go:419] \"Adding new service port\" portName=\"services-5186/affinity-nodeport\" servicePort=\"100.70.188.65:80/TCP\"\nI0623 20:53:00.698652 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:53:00.748959 10 proxier.go:1604] \"Opened local port\" port={Description:nodePort for services-5186/affinity-nodeport IP: IPFamily:4 Port:30665 Protocol:TCP}\nE0623 20:53:00.749028 10 proxier.go:1600] \"can't open port, skipping it\" err=\"listen tcp4 :30665: bind: address already in use\" port={Description:nodePort for services-5186/affinity-nodeport IP: IPFamily:4 Port:30665 Protocol:TCP}\nI0623 20:53:00.758341 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"59.843993ms\"\nI0623 20:53:02.561050 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:53:02.590812 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"29.865844ms\"\nI0623 20:53:05.960847 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:53:05.990122 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"29.392298ms\"\nI0623 20:53:11.283131 10 service.go:304] \"Service updated ports\" service=\"webhook-1829/e2e-test-webhook\" portCount=1\nI0623 20:53:11.283185 10 service.go:419] \"Adding new service port\" portName=\"webhook-1829/e2e-test-webhook\" servicePort=\"100.64.142.12:8443/TCP\"\nI0623 20:53:11.283288 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:53:11.316672 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"33.494681ms\"\nI0623 20:53:11.316886 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:53:11.343125 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"26.423116ms\"\nI0623 20:53:12.892593 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:53:12.926796 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"34.332401ms\"\nI0623 20:53:13.269084 10 service.go:304] \"Service updated ports\" service=\"webhook-1829/e2e-test-webhook\" portCount=0\nI0623 20:53:13.383038 10 service.go:444] \"Removing 
service port\" portName=\"webhook-1829/e2e-test-webhook\"\nI0623 20:53:13.383166 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:53:13.502314 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"119.280188ms\"\nI0623 20:53:15.281043 10 service.go:304] \"Service updated ports\" service=\"webhook-9845/e2e-test-webhook\" portCount=0\nI0623 20:53:15.281080 10 service.go:444] \"Removing service port\" portName=\"webhook-9845/e2e-test-webhook\"\nI0623 20:53:15.281180 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:53:15.314984 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"33.901318ms\"\nI0623 20:53:15.315193 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:53:15.344268 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"29.252474ms\"\nI0623 20:53:17.308422 10 service.go:304] \"Service updated ports\" service=\"services-1408/nodeport-range-test\" portCount=1\nI0623 20:53:17.308469 10 service.go:419] \"Adding new service port\" portName=\"services-1408/nodeport-range-test\" servicePort=\"100.71.217.131:80/TCP\"\nI0623 20:53:17.308589 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:53:17.352457 10 proxier.go:1604] \"Opened local port\" port={Description:nodePort for services-1408/nodeport-range-test IP: IPFamily:4 Port:31072 Protocol:TCP}\nE0623 20:53:17.352521 10 proxier.go:1600] \"can't open port, skipping it\" err=\"listen tcp4 :31072: bind: address already in use\" port={Description:nodePort for services-1408/nodeport-range-test IP: IPFamily:4 Port:31072 Protocol:TCP}\nI0623 20:53:17.358944 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"50.480245ms\"\nI0623 20:53:17.359060 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:53:17.392327 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"33.351259ms\"\nI0623 20:53:17.634524 10 service.go:304] \"Service updated ports\" service=\"services-1408/nodeport-range-test\" portCount=0\nI0623 20:53:18.392929 10 service.go:444] \"Removing service port\" 
portName=\"services-1408/nodeport-range-test\"\nI0623 20:53:18.393065 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:53:18.491086 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"98.173733ms\"\nI0623 20:53:28.065862 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:53:28.119064 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"53.337612ms\"\nI0623 20:53:28.119200 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:53:28.144663 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"25.569388ms\"\nI0623 20:53:29.600145 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:53:29.645198 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"45.162879ms\"\nI0623 20:53:30.645384 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:53:30.685162 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"39.878752ms\"\nI0623 20:53:31.686407 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:53:31.688575 10 service.go:304] \"Service updated ports\" service=\"conntrack-4166/svc-udp\" portCount=1\nI0623 20:53:31.785277 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"99.019954ms\"\nI0623 20:53:32.786160 10 service.go:419] \"Adding new service port\" portName=\"conntrack-4166/svc-udp:udp\" servicePort=\"100.71.10.59:80/UDP\"\nI0623 20:53:32.786364 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:53:32.858777 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"72.630478ms\"\nI0623 20:53:34.762458 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:53:34.830175 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"67.818584ms\"\nI0623 20:53:34.965188 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:53:35.014594 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"49.50136ms\"\nI0623 20:53:35.170243 10 service.go:304] \"Service updated ports\" service=\"services-5186/affinity-nodeport\" portCount=0\nI0623 20:53:36.014956 10 service.go:444] \"Removing service port\" 
portName=\"services-5186/affinity-nodeport\"\nI0623 20:53:36.015202 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:53:36.042774 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"27.823443ms\"\nI0623 20:53:36.124251 10 service.go:304] \"Service updated ports\" service=\"services-6940/affinity-clusterip\" portCount=1\nI0623 20:53:37.043782 10 service.go:419] \"Adding new service port\" portName=\"services-6940/affinity-clusterip\" servicePort=\"100.69.45.173:80/TCP\"\nI0623 20:53:37.044006 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:53:37.069847 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"26.092631ms\"\nI0623 20:53:39.301277 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:53:39.353700 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"52.529828ms\"\nI0623 20:53:39.600691 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:53:39.600986 10 service.go:304] \"Service updated ports\" service=\"proxy-804/test-service\" portCount=1\nI0623 20:53:39.649842 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"49.254632ms\"\nI0623 20:53:40.650686 10 service.go:419] \"Adding new service port\" portName=\"proxy-804/test-service\" servicePort=\"100.68.242.234:80/TCP\"\nI0623 20:53:40.650947 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:53:40.679165 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"28.501607ms\"\nI0623 20:53:41.803091 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:53:41.838880 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"35.912084ms\"\nI0623 20:53:43.313318 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:53:43.316556 10 service.go:304] \"Service updated ports\" service=\"services-8134/affinity-nodeport-transition\" portCount=1\nI0623 20:53:43.366633 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"53.408537ms\"\nI0623 20:53:43.366685 10 service.go:419] \"Adding new service port\" portName=\"services-8134/affinity-nodeport-transition\" 
servicePort=\"100.68.194.50:80/TCP\"\nI0623 20:53:43.366783 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:53:43.396233 10 proxier.go:1604] \"Opened local port\" port={Description:nodePort for services-8134/affinity-nodeport-transition IP: IPFamily:4 Port:30929 Protocol:TCP}\nE0623 20:53:43.396309 10 proxier.go:1600] \"can't open port, skipping it\" err=\"listen tcp4 :30929: bind: address already in use\" port={Description:nodePort for services-8134/affinity-nodeport-transition IP: IPFamily:4 Port:30929 Protocol:TCP}\nI0623 20:53:43.408659 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"41.993174ms\"\nI0623 20:53:46.104384 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:53:46.168785 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"64.514079ms\"\nI0623 20:53:46.545645 10 service.go:304] \"Service updated ports\" service=\"proxy-804/test-service\" portCount=0\nI0623 20:53:46.545682 10 service.go:444] \"Removing service port\" portName=\"proxy-804/test-service\"\nI0623 20:53:46.545783 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:53:46.596732 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"51.042269ms\"\nI0623 20:53:47.598230 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:53:47.715579 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"117.47519ms\"\nI0623 20:53:48.402548 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:53:48.487471 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"85.03977ms\"\nI0623 20:53:50.374703 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:53:50.419974 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"45.387772ms\"\nI0623 20:54:01.148210 10 service.go:304] \"Service updated ports\" service=\"services-8134/affinity-nodeport-transition\" portCount=1\nI0623 20:54:01.148256 10 service.go:421] \"Updating existing service port\" portName=\"services-8134/affinity-nodeport-transition\" servicePort=\"100.68.194.50:80/TCP\"\nI0623 20:54:01.148892 10 
proxier.go:827] \"Syncing iptables rules\"\nI0623 20:54:01.178336 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"30.086297ms\"\nI0623 20:54:02.697694 10 service.go:304] \"Service updated ports\" service=\"services-8134/affinity-nodeport-transition\" portCount=1\nI0623 20:54:02.697734 10 service.go:421] \"Updating existing service port\" portName=\"services-8134/affinity-nodeport-transition\" servicePort=\"100.68.194.50:80/TCP\"\nI0623 20:54:02.698112 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:54:02.749193 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"51.462577ms\"\nI0623 20:54:04.575422 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:54:04.628020 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"52.723999ms\"\nI0623 20:54:04.628247 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:54:04.656215 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"28.165731ms\"\nI0623 20:54:05.764835 10 service.go:304] \"Service updated ports\" service=\"sctp-1659/sctp-clusterip\" portCount=1\nI0623 20:54:05.764883 10 service.go:419] \"Adding new service port\" portName=\"sctp-1659/sctp-clusterip\" servicePort=\"100.68.238.153:5060/SCTP\"\nI0623 20:54:05.765005 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:54:05.814155 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"49.264312ms\"\nI0623 20:54:06.815009 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:54:06.846314 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"31.424059ms\"\nI0623 20:54:07.847382 10 proxier.go:811] \"Stale service\" protocol=\"udp\" servicePortName=\"conntrack-4166/svc-udp:udp\" clusterIP=\"100.71.10.59\"\nI0623 20:54:07.847401 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:54:07.925507 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"78.313391ms\"\nI0623 20:54:08.585066 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:54:08.640432 10 proxier.go:794] \"SyncProxyRules complete\" 
elapsed=\"55.478858ms\"\nI0623 20:54:09.641823 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:54:09.683935 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"42.224801ms\"\nI0623 20:54:12.202591 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:54:12.232159 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"29.691306ms\"\nI0623 20:54:12.609310 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:54:12.652316 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"43.104931ms\"\nI0623 20:54:12.758269 10 service.go:304] \"Service updated ports\" service=\"services-8134/affinity-nodeport-transition\" portCount=0\nI0623 20:54:13.511622 10 service.go:444] \"Removing service port\" portName=\"services-8134/affinity-nodeport-transition\"\nI0623 20:54:13.511735 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:54:13.560516 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"48.898488ms\"\n==== END logs for container kube-proxy of pod kube-system/kube-proxy-ip-172-20-0-144.eu-west-1.compute.internal ====\n==== START logs for container kube-proxy of pod kube-system/kube-proxy-ip-172-20-0-238.eu-west-1.compute.internal ====\n2022/06/23 20:32:47 Running command:\nCommand env: (log-file=/var/log/kube-proxy.log, also-stdout=true, redirect-stderr=true)\nRun from directory: \nExecutable path: /usr/local/bin/kube-proxy\nArgs (comma-delimited): /usr/local/bin/kube-proxy,--cluster-cidr=100.96.0.0/11,--conntrack-max-per-core=131072,--hostname-override=ip-172-20-0-238.eu-west-1.compute.internal,--kubeconfig=/var/lib/kube-proxy/kubeconfig,--master=https://api.internal.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io,--oom-score-adj=-998,--v=2\n2022/06/23 20:32:47 Now listening for interrupts\nI0623 20:32:47.666943 10 flags.go:64] FLAG: --add-dir-header=\"false\"\nI0623 20:32:47.667138 10 flags.go:64] FLAG: --alsologtostderr=\"false\"\nI0623 20:32:47.667220 10 flags.go:64] FLAG: --bind-address=\"0.0.0.0\"\nI0623 20:32:47.667260 10 flags.go:64] FLAG: 
--bind-address-hard-fail=\"false\"\nI0623 20:32:47.667632 10 flags.go:64] FLAG: --boot-id-file=\"/proc/sys/kernel/random/boot_id\"\nI0623 20:32:47.667644 10 flags.go:64] FLAG: --cleanup=\"false\"\nI0623 20:32:47.667651 10 flags.go:64] FLAG: --cluster-cidr=\"100.96.0.0/11\"\nI0623 20:32:47.667671 10 flags.go:64] FLAG: --config=\"\"\nI0623 20:32:47.667675 10 flags.go:64] FLAG: --config-sync-period=\"15m0s\"\nI0623 20:32:47.667682 10 flags.go:64] FLAG: --conntrack-max-per-core=\"131072\"\nI0623 20:32:47.667688 10 flags.go:64] FLAG: --conntrack-min=\"131072\"\nI0623 20:32:47.667692 10 flags.go:64] FLAG: --conntrack-tcp-timeout-close-wait=\"1h0m0s\"\nI0623 20:32:47.667696 10 flags.go:64] FLAG: --conntrack-tcp-timeout-established=\"24h0m0s\"\nI0623 20:32:47.667700 10 flags.go:64] FLAG: --detect-local-mode=\"\"\nI0623 20:32:47.667706 10 flags.go:64] FLAG: --feature-gates=\"\"\nI0623 20:32:47.667712 10 flags.go:64] FLAG: --healthz-bind-address=\"0.0.0.0:10256\"\nI0623 20:32:47.667752 10 flags.go:64] FLAG: --healthz-port=\"10256\"\nI0623 20:32:47.667757 10 flags.go:64] FLAG: --help=\"false\"\nI0623 20:32:47.667762 10 flags.go:64] FLAG: --hostname-override=\"ip-172-20-0-238.eu-west-1.compute.internal\"\nI0623 20:32:47.667769 10 flags.go:64] FLAG: --iptables-masquerade-bit=\"14\"\nI0623 20:32:47.667773 10 flags.go:64] FLAG: --iptables-min-sync-period=\"1s\"\nI0623 20:32:47.667778 10 flags.go:64] FLAG: --iptables-sync-period=\"30s\"\nI0623 20:32:47.667783 10 flags.go:64] FLAG: --ipvs-exclude-cidrs=\"[]\"\nI0623 20:32:47.667792 10 flags.go:64] FLAG: --ipvs-min-sync-period=\"0s\"\nI0623 20:32:47.667796 10 flags.go:64] FLAG: --ipvs-scheduler=\"\"\nI0623 20:32:47.667801 10 flags.go:64] FLAG: --ipvs-strict-arp=\"false\"\nI0623 20:32:47.667806 10 flags.go:64] FLAG: --ipvs-sync-period=\"30s\"\nI0623 20:32:47.667809 10 flags.go:64] FLAG: --ipvs-tcp-timeout=\"0s\"\nI0623 20:32:47.667814 10 flags.go:64] FLAG: --ipvs-tcpfin-timeout=\"0s\"\nI0623 20:32:47.667820 10 flags.go:64] FLAG: 
--ipvs-udp-timeout=\"0s\"\nI0623 20:32:47.667824 10 flags.go:64] FLAG: --kube-api-burst=\"10\"\nI0623 20:32:47.667828 10 flags.go:64] FLAG: --kube-api-content-type=\"application/vnd.kubernetes.protobuf\"\nI0623 20:32:47.667833 10 flags.go:64] FLAG: --kube-api-qps=\"5\"\nI0623 20:32:47.667842 10 flags.go:64] FLAG: --kubeconfig=\"/var/lib/kube-proxy/kubeconfig\"\nI0623 20:32:47.667846 10 flags.go:64] FLAG: --log-backtrace-at=\":0\"\nI0623 20:32:47.667856 10 flags.go:64] FLAG: --log-dir=\"\"\nI0623 20:32:47.667861 10 flags.go:64] FLAG: --log-file=\"\"\nI0623 20:32:47.667866 10 flags.go:64] FLAG: --log-file-max-size=\"1800\"\nI0623 20:32:47.667872 10 flags.go:64] FLAG: --log-flush-frequency=\"5s\"\nI0623 20:32:47.667876 10 flags.go:64] FLAG: --logtostderr=\"true\"\nI0623 20:32:47.667881 10 flags.go:64] FLAG: --machine-id-file=\"/etc/machine-id,/var/lib/dbus/machine-id\"\nI0623 20:32:47.667889 10 flags.go:64] FLAG: --masquerade-all=\"false\"\nI0623 20:32:47.667893 10 flags.go:64] FLAG: --master=\"https://api.internal.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io\"\nI0623 20:32:47.667898 10 flags.go:64] FLAG: --metrics-bind-address=\"127.0.0.1:10249\"\nI0623 20:32:47.667903 10 flags.go:64] FLAG: --metrics-port=\"10249\"\nI0623 20:32:47.667908 10 flags.go:64] FLAG: --nodeport-addresses=\"[]\"\nI0623 20:32:47.667926 10 flags.go:64] FLAG: --one-output=\"false\"\nI0623 20:32:47.667931 10 flags.go:64] FLAG: --oom-score-adj=\"-998\"\nI0623 20:32:47.667936 10 flags.go:64] FLAG: --profiling=\"false\"\nI0623 20:32:47.667942 10 flags.go:64] FLAG: --proxy-mode=\"\"\nI0623 20:32:47.667955 10 flags.go:64] FLAG: --proxy-port-range=\"\"\nI0623 20:32:47.667963 10 flags.go:64] FLAG: --show-hidden-metrics-for-version=\"\"\nI0623 20:32:47.667967 10 flags.go:64] FLAG: --skip-headers=\"false\"\nI0623 20:32:47.667972 10 flags.go:64] FLAG: --skip-log-headers=\"false\"\nI0623 20:32:47.667978 10 flags.go:64] FLAG: --stderrthreshold=\"2\"\nI0623 20:32:47.667983 10 flags.go:64] FLAG: 
--udp-timeout=\"250ms\"\nI0623 20:32:47.667987 10 flags.go:64] FLAG: --v=\"2\"\nI0623 20:32:47.667995 10 flags.go:64] FLAG: --version=\"false\"\nI0623 20:32:47.668002 10 flags.go:64] FLAG: --vmodule=\"\"\nI0623 20:32:47.668008 10 flags.go:64] FLAG: --write-config-to=\"\"\nI0623 20:32:47.668026 10 server.go:225] \"Warning, all flags other than --config, --write-config-to, and --cleanup are deprecated, please begin using a config file ASAP\"\nI0623 20:32:47.668111 10 feature_gate.go:245] feature gates: &{map[]}\nI0623 20:32:47.668212 10 feature_gate.go:245] feature gates: &{map[]}\nI0623 20:32:47.716374 10 node.go:163] Successfully retrieved node IP: 172.20.0.238\nI0623 20:32:47.716571 10 server_others.go:138] \"Detected node IP\" address=\"172.20.0.238\"\nI0623 20:32:47.716708 10 server_others.go:561] \"Unknown proxy mode, assuming iptables proxy\" proxyMode=\"\"\nI0623 20:32:47.716874 10 server_others.go:175] \"DetectLocalMode\" LocalMode=\"ClusterCIDR\"\nI0623 20:32:47.750574 10 server_others.go:206] \"Using iptables Proxier\"\nI0623 20:32:47.750601 10 server_others.go:213] \"kube-proxy running in dual-stack mode\" ipFamily=IPv4\nI0623 20:32:47.750607 10 server_others.go:214] \"Creating dualStackProxier for iptables\"\nI0623 20:32:47.750619 10 server_others.go:491] \"Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6\"\nI0623 20:32:47.750898 10 utils.go:400] \"Changed sysctl\" name=\"net/ipv4/conf/all/route_localnet\" before=0 after=1\nI0623 20:32:47.750946 10 proxier.go:282] \"Using iptables mark for masquerade\" ipFamily=IPv4 mark=\"0x00004000\"\nI0623 20:32:47.751164 10 proxier.go:328] \"Iptables sync params\" ipFamily=IPv4 minSyncPeriod=\"1s\" syncPeriod=\"30s\" burstSyncs=2\nI0623 20:32:47.751215 10 proxier.go:338] \"Iptables supports --random-fully\" ipFamily=IPv4\nI0623 20:32:47.751381 10 proxier.go:282] \"Using iptables mark for masquerade\" ipFamily=IPv6 mark=\"0x00004000\"\nI0623 
20:32:47.751417 10 proxier.go:328] \"Iptables sync params\" ipFamily=IPv6 minSyncPeriod=\"1s\" syncPeriod=\"30s\" burstSyncs=2\nI0623 20:32:47.751435 10 proxier.go:338] \"Iptables supports --random-fully\" ipFamily=IPv6\nI0623 20:32:47.751676 10 server.go:656] \"Version info\" version=\"v1.23.1\"\nI0623 20:32:47.754134 10 conntrack.go:100] \"Set sysctl\" entry=\"net/netfilter/nf_conntrack_max\" value=262144\nI0623 20:32:47.754169 10 conntrack.go:52] \"Setting nf_conntrack_max\" nf_conntrack_max=262144\nI0623 20:32:47.754367 10 mount_linux.go:208] Detected OS without systemd\nI0623 20:32:47.754779 10 conntrack.go:83] \"Setting conntrack hashsize\" conntrack hashsize=65536\nI0623 20:32:47.774860 10 conntrack.go:100] \"Set sysctl\" entry=\"net/netfilter/nf_conntrack_tcp_timeout_close_wait\" value=3600\nI0623 20:32:47.775243 10 config.go:317] \"Starting service config controller\"\nI0623 20:32:47.775347 10 shared_informer.go:240] Waiting for caches to sync for service config\nI0623 20:32:47.775489 10 config.go:226] \"Starting endpoint slice config controller\"\nI0623 20:32:47.775558 10 shared_informer.go:240] Waiting for caches to sync for endpoint slice config\nI0623 20:32:47.778420 10 service.go:304] \"Service updated ports\" service=\"kube-system/aws-load-balancer-webhook-service\" portCount=1\nI0623 20:32:47.778460 10 service.go:304] \"Service updated ports\" service=\"kube-system/hubble-relay\" portCount=1\nI0623 20:32:47.778598 10 service.go:304] \"Service updated ports\" service=\"kube-system/kube-dns-upstream\" portCount=2\nI0623 20:32:47.778729 10 service.go:304] \"Service updated ports\" service=\"kube-system/metrics-server\" portCount=1\nI0623 20:32:47.778827 10 service.go:304] \"Service updated ports\" service=\"default/kubernetes\" portCount=1\nI0623 20:32:47.778904 10 service.go:304] \"Service updated ports\" service=\"kube-system/cert-manager\" portCount=1\nI0623 20:32:47.778981 10 service.go:304] \"Service updated ports\" 
service="kube-system/cert-manager-webhook" portCount=1
I0623 20:32:47.779013 10 service.go:304] "Service updated ports" service="kube-system/cluster-autoscaler" portCount=1
I0623 20:32:47.779103 10 service.go:304] "Service updated ports" service="kube-system/kube-dns" portCount=3
I0623 20:32:47.876396 10 shared_informer.go:247] Caches are synced for endpoint slice config 
I0623 20:32:47.876634 10 proxier.go:786] "Not syncing iptables until Services and Endpoints have been received from master"
I0623 20:32:47.876665 10 proxier.go:786] "Not syncing iptables until Services and Endpoints have been received from master"
I0623 20:32:47.876464 10 shared_informer.go:247] Caches are synced for service config 
I0623 20:32:47.876744 10 service.go:419] "Adding new service port" portName="kube-system/aws-load-balancer-webhook-service" servicePort="100.65.122.219:443/TCP"
I0623 20:32:47.876773 10 service.go:419] "Adding new service port" portName="kube-system/metrics-server:https" servicePort="100.71.178.109:443/TCP"
I0623 20:32:47.876789 10 service.go:419] "Adding new service port" portName="default/kubernetes:https" servicePort="100.64.0.1:443/TCP"
I0623 20:32:47.876805 10 service.go:419] "Adding new service port" portName="kube-system/cert-manager:tcp-prometheus-servicemonitor" servicePort="100.71.134.198:9402/TCP"
I0623 20:32:47.876825 10 service.go:419] "Adding new service port" portName="kube-system/cert-manager-webhook:https" servicePort="100.67.123.196:443/TCP"
I0623 20:32:47.876844 10 service.go:419] "Adding new service port" portName="kube-system/kube-dns:dns" servicePort="100.64.0.10:53/UDP"
I0623 20:32:47.876862 10 service.go:419] "Adding new service port" portName="kube-system/kube-dns:dns-tcp" servicePort="100.64.0.10:53/TCP"
I0623 20:32:47.876888 10 service.go:419] "Adding new service port" portName="kube-system/kube-dns:metrics" servicePort="100.64.0.10:9153/TCP"
I0623 20:32:47.876920 10 service.go:419] "Adding new service port" portName="kube-system/hubble-relay" servicePort="100.71.215.29:80/TCP"
I0623 20:32:47.876944 10 service.go:419] "Adding new service port" portName="kube-system/kube-dns-upstream:dns" servicePort="100.71.8.82:53/UDP"
I0623 20:32:47.876978 10 service.go:419] "Adding new service port" portName="kube-system/kube-dns-upstream:dns-tcp" servicePort="100.71.8.82:53/TCP"
I0623 20:32:47.876993 10 service.go:419] "Adding new service port" portName="kube-system/cluster-autoscaler:http" servicePort="100.65.46.33:8085/TCP"
I0623 20:32:47.877169 10 proxier.go:811] "Stale service" protocol="udp" servicePortName="kube-system/kube-dns-upstream:dns" clusterIP="100.71.8.82"
I0623 20:32:47.877261 10 proxier.go:811] "Stale service" protocol="udp" servicePortName="kube-system/kube-dns:dns" clusterIP="100.64.0.10"
I0623 20:32:47.877297 10 proxier.go:827] "Syncing iptables rules"
I0623 20:32:47.937952 10 proxier.go:794] "SyncProxyRules complete" elapsed="61.256125ms"
I0623 20:32:47.937982 10 proxier.go:827] "Syncing iptables rules"
I0623 20:32:47.976500 10 proxier.go:794] "SyncProxyRules complete" elapsed="38.518043ms"
I0623 20:33:58.430820 10 proxier.go:827] "Syncing iptables rules"
I0623 20:33:58.462868 10 proxier.go:794] "SyncProxyRules complete" elapsed="32.092294ms"
I0623 20:33:58.462929 10 proxier.go:827] "Syncing iptables rules"
I0623 20:33:58.492144 10 proxier.go:794] "SyncProxyRules complete" elapsed="29.244953ms"
I0623 20:33:59.492744 10 proxier.go:827] "Syncing iptables rules"
I0623 20:33:59.552961 10 proxier.go:794] "SyncProxyRules complete" elapsed="60.337884ms"
I0623 20:34:00.458614 10 proxier.go:827] "Syncing iptables rules"
I0623 20:34:00.489389 10 proxier.go:794] "SyncProxyRules complete" elapsed="30.797685ms"
I0623 20:34:01.489991 10 proxier.go:827] "Syncing iptables rules"
I0623 20:34:01.537429 10 proxier.go:794] "SyncProxyRules complete" elapsed="47.809783ms"
I0623 20:34:02.537795 10 proxier.go:827] "Syncing iptables rules"
I0623 20:34:02.566382 10 proxier.go:794] "SyncProxyRules complete" elapsed="28.68944ms"
I0623 20:34:03.566681 10 proxier.go:827] "Syncing iptables rules"
I0623 20:34:03.653413 10 proxier.go:794] "SyncProxyRules complete" elapsed="86.813106ms"
I0623 20:34:04.653721 10 proxier.go:827] "Syncing iptables rules"
I0623 20:34:04.696336 10 proxier.go:794] "SyncProxyRules complete" elapsed="42.722144ms"
I0623 20:34:08.032628 10 proxier.go:827] "Syncing iptables rules"
I0623 20:34:08.065334 10 proxier.go:794] "SyncProxyRules complete" elapsed="32.780397ms"
I0623 20:34:08.065429 10 proxier.go:827] "Syncing iptables rules"
I0623 20:34:08.100125 10 proxier.go:794] "SyncProxyRules complete" elapsed="34.757569ms"
I0623 20:34:09.286437 10 proxier.go:827] "Syncing iptables rules"
I0623 20:34:09.320828 10 proxier.go:794] "SyncProxyRules complete" elapsed="34.466522ms"
I0623 20:34:10.146414 10 proxier.go:827] "Syncing iptables rules"
I0623 20:34:10.183360 10 proxier.go:794] "SyncProxyRules complete" elapsed="37.065143ms"
I0623 20:34:11.184050 10 proxier.go:827] "Syncing iptables rules"
I0623 20:34:11.217678 10 proxier.go:794] "SyncProxyRules complete" elapsed="33.696962ms"
I0623 20:34:38.543409 10 proxier.go:827] "Syncing iptables rules"
I0623 20:34:38.590675 10 proxier.go:794] "SyncProxyRules complete" elapsed="47.175573ms"
I0623 20:34:39.587749 10 proxier.go:827] "Syncing iptables rules"
I0623 20:34:39.633375 10 proxier.go:794] "SyncProxyRules complete" elapsed="45.67008ms"
I0623 20:34:41.359292 10 proxier.go:827] "Syncing iptables rules"
I0623 20:34:41.387845 10 proxier.go:794] "SyncProxyRules complete" elapsed="28.600261ms"
I0623 20:34:41.399129 10 proxier.go:827] "Syncing iptables rules"
I0623 20:34:41.430079 10 proxier.go:794] "SyncProxyRules complete" elapsed="31.017466ms"
I0623 20:34:42.944581 10 proxier.go:827] "Syncing iptables rules"
I0623 20:34:42.982719 10 proxier.go:794] "SyncProxyRules complete" elapsed="38.183335ms"
I0623 20:35:19.642398 10 proxier.go:827] "Syncing iptables rules"
I0623 20:35:19.678471 10 proxier.go:794] "SyncProxyRules complete" elapsed="36.119586ms"
I0623 20:36:03.010606 10 proxier.go:827] "Syncing iptables rules"
I0623 20:36:03.099678 10 proxier.go:794] "SyncProxyRules complete" elapsed="89.11421ms"
I0623 20:36:08.200005 10 proxier.go:827] "Syncing iptables rules"
I0623 20:36:08.270698 10 proxier.go:794] "SyncProxyRules complete" elapsed="70.730793ms"
I0623 20:36:09.205504 10 proxier.go:827] "Syncing iptables rules"
I0623 20:36:09.279660 10 proxier.go:794] "SyncProxyRules complete" elapsed="74.195695ms"
I0623 20:37:56.634998 10 service.go:304] "Service updated ports" service="services-420/multi-endpoint-test" portCount=2
I0623 20:37:56.635054 10 service.go:419] "Adding new service port" portName="services-420/multi-endpoint-test:portname2" servicePort="100.69.164.220:81/TCP"
I0623 20:37:56.635072 10 service.go:419] "Adding new service port" portName="services-420/multi-endpoint-test:portname1" servicePort="100.69.164.220:80/TCP"
I0623 20:37:56.635254 10 proxier.go:827] "Syncing iptables rules"
I0623 20:37:56.676119 10 proxier.go:794] "SyncProxyRules complete" elapsed="41.071353ms"
I0623 20:37:56.676179 10 proxier.go:827] "Syncing iptables rules"
I0623 20:37:56.713777 10 proxier.go:794] "SyncProxyRules complete" elapsed="37.621167ms"
I0623 20:38:01.562446 10 proxier.go:827] "Syncing iptables rules"
I0623 20:38:01.592843 10 proxier.go:794] "SyncProxyRules complete" elapsed="30.460213ms"
I0623 20:38:02.079665 10 service.go:304] "Service updated ports" service="services-1829/up-down-1" portCount=1
I0623 20:38:02.079713 10 service.go:419] "Adding new service port" portName="services-1829/up-down-1" servicePort="100.69.26.183:80/TCP"
I0623 20:38:02.079748 10 proxier.go:827] "Syncing iptables rules"
I0623 20:38:02.125700 10 proxier.go:794] "SyncProxyRules complete" elapsed="45.97023ms"
I0623 20:38:03.126694 10 proxier.go:827] "Syncing iptables rules"
I0623 20:38:03.362717 10 proxier.go:794] "SyncProxyRules complete" elapsed="236.073613ms"
I0623 20:38:09.830132 10 service.go:304] "Service updated ports" service="endpointslicemirroring-3428/example-custom-endpoints" portCount=1
I0623 20:38:09.830183 10 service.go:419] "Adding new service port" portName="endpointslicemirroring-3428/example-custom-endpoints:example" servicePort="100.64.224.103:80/TCP"
I0623 20:38:09.830218 10 proxier.go:827] "Syncing iptables rules"
I0623 20:38:10.043316 10 proxier.go:794] "SyncProxyRules complete" elapsed="213.130348ms"
I0623 20:38:10.043409 10 proxier.go:827] "Syncing iptables rules"
I0623 20:38:10.208753 10 proxier.go:794] "SyncProxyRules complete" elapsed="165.391058ms"
I0623 20:38:11.210803 10 proxier.go:827] "Syncing iptables rules"
I0623 20:38:11.347107 10 proxier.go:794] "SyncProxyRules complete" elapsed="136.403547ms"
I0623 20:38:15.368479 10 proxier.go:827] "Syncing iptables rules"
I0623 20:38:15.400395 10 proxier.go:794] "SyncProxyRules complete" elapsed="31.984242ms"
I0623 20:38:15.953076 10 service.go:304] "Service updated ports" service="endpointslicemirroring-3428/example-custom-endpoints" portCount=0
I0623 20:38:15.953124 10 service.go:444] "Removing service port" portName="endpointslicemirroring-3428/example-custom-endpoints:example"
I0623 20:38:15.953164 10 proxier.go:827] "Syncing iptables rules"
I0623 20:38:15.991460 10 proxier.go:794] "SyncProxyRules complete" elapsed="38.332922ms"
I0623 20:38:16.991797 10 proxier.go:827] "Syncing iptables rules"
I0623 20:38:17.046688 10 proxier.go:794] "SyncProxyRules complete" elapsed="54.99451ms"
I0623 20:38:17.642755 10 service.go:304] "Service updated ports" service="services-1829/up-down-2" portCount=1
I0623 20:38:17.642789 10 service.go:419] "Adding new service port" portName="services-1829/up-down-2" servicePort="100.67.92.239:80/TCP"
I0623 20:38:17.642840 10 proxier.go:827] "Syncing iptables rules"
I0623 20:38:17.687651 10 proxier.go:794] "SyncProxyRules complete" elapsed="44.857031ms"
I0623 20:38:18.688523 10 proxier.go:827] "Syncing iptables rules"
I0623 20:38:18.720216 10 proxier.go:794] "SyncProxyRules complete" elapsed="31.73907ms"
I0623 20:38:22.600005 10 proxier.go:827] "Syncing iptables rules"
I0623 20:38:22.768640 10 proxier.go:794] "SyncProxyRules complete" elapsed="168.668086ms"
I0623 20:38:24.561780 10 proxier.go:827] "Syncing iptables rules"
I0623 20:38:24.595643 10 proxier.go:794] "SyncProxyRules complete" elapsed="33.906088ms"
I0623 20:38:32.706342 10 proxier.go:827] "Syncing iptables rules"
I0623 20:38:32.747309 10 proxier.go:794] "SyncProxyRules complete" elapsed="41.012064ms"
I0623 20:38:37.490864 10 proxier.go:827] "Syncing iptables rules"
I0623 20:38:37.671363 10 proxier.go:794] "SyncProxyRules complete" elapsed="180.56818ms"
I0623 20:38:39.374927 10 service.go:304] "Service updated ports" service="webhook-4936/e2e-test-webhook" portCount=1
I0623 20:38:39.374976 10 service.go:419] "Adding new service port" portName="webhook-4936/e2e-test-webhook" servicePort="100.65.222.85:8443/TCP"
I0623 20:38:39.375017 10 proxier.go:827] "Syncing iptables rules"
I0623 20:38:39.424867 10 proxier.go:794] "SyncProxyRules complete" elapsed="49.890528ms"
I0623 20:38:39.424954 10 proxier.go:827] "Syncing iptables rules"
I0623 20:38:39.489437 10 proxier.go:794] "SyncProxyRules complete" elapsed="64.521494ms"
I0623 20:38:40.443092 10 proxier.go:827] "Syncing iptables rules"
I0623 20:38:40.474637 10 proxier.go:794] "SyncProxyRules complete" elapsed="31.577096ms"
I0623 20:38:41.362991 10 service.go:304] "Service updated ports" service="webhook-4936/e2e-test-webhook" portCount=0
I0623 20:38:41.434846 10 service.go:444] "Removing service port" portName="webhook-4936/e2e-test-webhook"
I0623 20:38:41.434927 10 proxier.go:827] "Syncing iptables rules"
I0623 20:38:41.447267 10 service.go:304] "Service updated ports" service="services-420/multi-endpoint-test" portCount=0
I0623 20:38:41.493035 10 proxier.go:794] "SyncProxyRules complete" elapsed="58.198837ms"
I0623 20:38:42.493198 10 service.go:444] "Removing service port" portName="services-420/multi-endpoint-test:portname2"
I0623 20:38:42.493233 10 service.go:444] "Removing service port" portName="services-420/multi-endpoint-test:portname1"
I0623 20:38:42.493284 10 proxier.go:827] "Syncing iptables rules"
I0623 20:38:42.542030 10 proxier.go:794] "SyncProxyRules complete" elapsed="48.851107ms"
I0623 20:38:43.058880 10 service.go:304] "Service updated ports" service="proxy-554/proxy-service-cqzbb" portCount=4
I0623 20:38:43.542718 10 service.go:419] "Adding new service port" portName="proxy-554/proxy-service-cqzbb:tlsportname1" servicePort="100.71.180.244:443/TCP"
I0623 20:38:43.542746 10 service.go:419] "Adding new service port" portName="proxy-554/proxy-service-cqzbb:tlsportname2" servicePort="100.71.180.244:444/TCP"
I0623 20:38:43.542760 10 service.go:419] "Adding new service port" portName="proxy-554/proxy-service-cqzbb:portname1" servicePort="100.71.180.244:80/TCP"
I0623 20:38:43.542773 10 service.go:419] "Adding new service port" portName="proxy-554/proxy-service-cqzbb:portname2" servicePort="100.71.180.244:81/TCP"
I0623 20:38:43.542827 10 proxier.go:827] "Syncing iptables rules"
I0623 20:38:43.663986 10 proxier.go:794] "SyncProxyRules complete" elapsed="121.305522ms"
I0623 20:38:49.877088 10 service.go:304] "Service updated ports" service="crd-webhook-170/e2e-test-crd-conversion-webhook" portCount=1
I0623 20:38:49.877147 10 service.go:419] "Adding new service port" portName="crd-webhook-170/e2e-test-crd-conversion-webhook" servicePort="100.67.15.125:9443/TCP"
I0623 20:38:49.877191 10 proxier.go:827] "Syncing iptables rules"
I0623 20:38:49.910223 10 proxier.go:794] "SyncProxyRules complete" elapsed="33.08139ms"
I0623 20:38:49.910289 10 proxier.go:827] "Syncing iptables rules"
I0623 20:38:49.944848 10 proxier.go:794] "SyncProxyRules complete" elapsed="34.591829ms"
I0623 20:38:52.498916 10 proxier.go:827] "Syncing iptables rules"
I0623 20:38:52.537115 10 proxier.go:794] "SyncProxyRules complete" elapsed="38.261869ms"
I0623 20:38:53.785926 10 service.go:304] "Service updated ports" service="webhook-9292/e2e-test-webhook" portCount=1
I0623 20:38:53.785973 10 service.go:419] "Adding new service port" portName="webhook-9292/e2e-test-webhook" servicePort="100.70.4.151:8443/TCP"
I0623 20:38:53.786017 10 proxier.go:827] "Syncing iptables rules"
I0623 20:38:53.824603 10 proxier.go:794] "SyncProxyRules complete" elapsed="38.632239ms"
I0623 20:38:53.824688 10 proxier.go:827] "Syncing iptables rules"
I0623 20:38:53.855585 10 proxier.go:794] "SyncProxyRules complete" elapsed="30.941841ms"
I0623 20:38:54.515304 10 service.go:304] "Service updated ports" service="crd-webhook-170/e2e-test-crd-conversion-webhook" portCount=0
I0623 20:38:54.857427 10 service.go:444] "Removing service port" portName="crd-webhook-170/e2e-test-crd-conversion-webhook"
I0623 20:38:54.857562 10 proxier.go:827] "Syncing iptables rules"
I0623 20:38:54.910065 10 proxier.go:794] "SyncProxyRules complete" elapsed="52.667826ms"
I0623 20:38:55.567635 10 service.go:304] "Service updated ports" service="pods-563/fooservice" portCount=1
I0623 20:38:55.911053 10 service.go:419] "Adding new service port" portName="pods-563/fooservice" servicePort="100.68.33.78:8765/TCP"
I0623 20:38:55.911129 10 proxier.go:827] "Syncing iptables rules"
I0623 20:38:55.941829 10 proxier.go:794] "SyncProxyRules complete" elapsed="30.799634ms"
I0623 20:38:56.806947 10 proxier.go:827] "Syncing iptables rules"
I0623 20:38:57.070282 10 proxier.go:794] "SyncProxyRules complete" elapsed="263.423571ms"
I0623 20:39:02.091557 10 service.go:304] "Service updated ports" service="services-3955/affinity-nodeport-timeout" portCount=1
I0623 20:39:02.091626 10 service.go:419] "Adding new service port" portName="services-3955/affinity-nodeport-timeout" servicePort="100.68.194.236:80/TCP"
I0623 20:39:02.091688 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:02.135198 10 proxier.go:1604] "Opened local port" port={Description:nodePort for services-3955/affinity-nodeport-timeout IP: IPFamily:4 Port:32444 Protocol:TCP}
E0623 20:39:02.135274 10 proxier.go:1600] "can't open port, skipping it" err="listen tcp4 :32444: bind: address already in use" port={Description:nodePort for services-3955/affinity-nodeport-timeout IP: IPFamily:4 Port:32444 Protocol:TCP}
I0623 20:39:02.146653 10 proxier.go:794] "SyncProxyRules complete" elapsed="55.045882ms"
I0623 20:39:02.146730 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:02.191000 10 proxier.go:794] "SyncProxyRules complete" elapsed="44.303934ms"
I0623 20:39:04.846029 10 service.go:304] "Service updated ports" service="webhook-9292/e2e-test-webhook" portCount=0
I0623 20:39:04.846067 10 service.go:444] "Removing service port" portName="webhook-9292/e2e-test-webhook"
I0623 20:39:04.846109 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:04.915693 10 proxier.go:794] "SyncProxyRules complete" elapsed="69.612772ms"
I0623 20:39:04.915778 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:05.029431 10 proxier.go:794] "SyncProxyRules complete" elapsed="113.683122ms"
I0623 20:39:05.949427 10 service.go:304] "Service updated ports" service="conntrack-6630/svc-udp" portCount=1
I0623 20:39:05.949475 10 service.go:419] "Adding new service port" portName="conntrack-6630/svc-udp:udp" servicePort="100.65.41.44:80/UDP"
I0623 20:39:05.949559 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:05.975942 10 proxier.go:1604] "Opened local port" port={Description:nodePort for conntrack-6630/svc-udp:udp IP: IPFamily:4 Port:31222 Protocol:UDP}
E0623 20:39:05.975985 10 proxier.go:1600] "can't open port, skipping it" err="listen udp4 :31222: bind: address already in use" port={Description:nodePort for conntrack-6630/svc-udp:udp IP: IPFamily:4 Port:31222 Protocol:UDP}
I0623 20:39:05.987144 10 proxier.go:794] "SyncProxyRules complete" elapsed="37.674931ms"
I0623 20:39:06.987743 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:07.020439 10 proxier.go:794] "SyncProxyRules complete" elapsed="32.736727ms"
I0623 20:39:07.957168 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:08.022436 10 proxier.go:794] "SyncProxyRules complete" elapsed="65.328979ms"
I0623 20:39:08.076701 10 service.go:304] "Service updated ports" service="pods-563/fooservice" portCount=0
I0623 20:39:09.023762 10 service.go:444] "Removing service port" portName="pods-563/fooservice"
I0623 20:39:09.024000 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:09.098892 10 proxier.go:794] "SyncProxyRules complete" elapsed="75.199656ms"
I0623 20:39:12.513992 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:12.559551 10 proxier.go:794] "SyncProxyRules complete" elapsed="45.624658ms"
I0623 20:39:12.559648 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:12.592035 10 proxier.go:794] "SyncProxyRules complete" elapsed="32.446347ms"
I0623 20:39:13.568426 10 proxier.go:811] "Stale service" protocol="udp" servicePortName="conntrack-6630/svc-udp:udp" clusterIP="100.65.41.44"
I0623 20:39:13.568469 10 proxier.go:821] "Stale service" protocol="udp" servicePortName="conntrack-6630/svc-udp:udp" nodePort=31222
I0623 20:39:13.568478 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:13.644046 10 proxier.go:794] "SyncProxyRules complete" elapsed="75.692971ms"
I0623 20:39:14.044711 10 service.go:304] "Service updated ports" service="proxy-554/proxy-service-cqzbb" portCount=0
I0623 20:39:14.645050 10 service.go:444] "Removing service port" portName="proxy-554/proxy-service-cqzbb:portname2"
I0623 20:39:14.645074 10 service.go:444] "Removing service port" portName="proxy-554/proxy-service-cqzbb:tlsportname1"
I0623 20:39:14.645080 10 service.go:444] "Removing service port" portName="proxy-554/proxy-service-cqzbb:tlsportname2"
I0623 20:39:14.645086 10 service.go:444] "Removing service port" portName="proxy-554/proxy-service-cqzbb:portname1"
I0623 20:39:14.645148 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:14.676531 10 proxier.go:794] "SyncProxyRules complete" elapsed="31.502227ms"
I0623 20:39:15.677504 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:15.712269 10 proxier.go:794] "SyncProxyRules complete" elapsed="34.855664ms"
I0623 20:39:16.714690 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:16.755423 10 proxier.go:794] "SyncProxyRules complete" elapsed="40.805048ms"
I0623 20:39:19.518454 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:19.555523 10 proxier.go:794] "SyncProxyRules complete" elapsed="37.132402ms"
I0623 20:39:19.693887 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:19.735037 10 proxier.go:794] "SyncProxyRules complete" elapsed="41.203792ms"
I0623 20:39:19.899937 10 service.go:304] "Service updated ports" service="services-1829/up-down-1" portCount=0
I0623 20:39:20.590624 10 service.go:304] "Service updated ports" service="kubectl-4553/rm2" portCount=1
I0623 20:39:20.590661 10 service.go:444] "Removing service port" portName="services-1829/up-down-1"
I0623 20:39:20.590683 10 service.go:419] "Adding new service port" portName="kubectl-4553/rm2" servicePort="100.71.108.133:1234/TCP"
I0623 20:39:20.590728 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:20.637786 10 proxier.go:794] "SyncProxyRules complete" elapsed="47.114801ms"
I0623 20:39:21.638849 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:21.693456 10 proxier.go:794] "SyncProxyRules complete" elapsed="54.676177ms"
I0623 20:39:23.473715 10 service.go:304] "Service updated ports" service="kubectl-4553/rm3" portCount=1
I0623 20:39:23.473757 10 service.go:419] "Adding new service port" portName="kubectl-4553/rm3" servicePort="100.64.145.251:2345/TCP"
I0623 20:39:23.473805 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:23.508898 10 proxier.go:794] "SyncProxyRules complete" elapsed="35.14463ms"
I0623 20:39:24.509089 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:24.545396 10 proxier.go:794] "SyncProxyRules complete" elapsed="36.362992ms"
I0623 20:39:26.252821 10 service.go:304] "Service updated ports" service="webhook-8265/e2e-test-webhook" portCount=1
I0623 20:39:26.252885 10 service.go:419] "Adding new service port" portName="webhook-8265/e2e-test-webhook" servicePort="100.67.115.44:8443/TCP"
I0623 20:39:26.252934 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:26.337780 10 proxier.go:794] "SyncProxyRules complete" elapsed="84.905102ms"
I0623 20:39:26.337874 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:26.570742 10 proxier.go:794] "SyncProxyRules complete" elapsed="232.916455ms"
I0623 20:39:27.139387 10 service.go:304] "Service updated ports" service="webhook-3058/e2e-test-webhook" portCount=1
I0623 20:39:27.571468 10 service.go:419] "Adding new service port" portName="webhook-3058/e2e-test-webhook" servicePort="100.65.60.72:8443/TCP"
I0623 20:39:27.571553 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:27.625516 10 proxier.go:794] "SyncProxyRules complete" elapsed="54.076441ms"
I0623 20:39:28.694562 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:28.728386 10 proxier.go:794] "SyncProxyRules complete" elapsed="33.905693ms"
I0623 20:39:29.197852 10 service.go:304] "Service updated ports" service="webhook-3058/e2e-test-webhook" portCount=0
I0623 20:39:29.728537 10 service.go:444] "Removing service port" portName="webhook-3058/e2e-test-webhook"
I0623 20:39:29.728621 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:29.776668 10 proxier.go:794] "SyncProxyRules complete" elapsed="48.148574ms"
I0623 20:39:30.776941 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:30.837783 10 proxier.go:794] "SyncProxyRules complete" elapsed="60.953976ms"
I0623 20:39:31.147339 10 service.go:304] "Service updated ports" service="kubectl-4553/rm2" portCount=0
I0623 20:39:31.168564 10 service.go:304] "Service updated ports" service="kubectl-4553/rm3" portCount=0
I0623 20:39:31.213593 10 service.go:304] "Service updated ports" service="webhook-8265/e2e-test-webhook" portCount=0
I0623 20:39:31.838609 10 service.go:444] "Removing service port" portName="webhook-8265/e2e-test-webhook"
I0623 20:39:31.838632 10 service.go:444] "Removing service port" portName="kubectl-4553/rm2"
I0623 20:39:31.838641 10 service.go:444] "Removing service port" portName="kubectl-4553/rm3"
I0623 20:39:31.838804 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:31.916793 10 proxier.go:794] "SyncProxyRules complete" elapsed="78.24479ms"
I0623 20:39:34.118213 10 service.go:304] "Service updated ports" service="conntrack-4900/svc-udp" portCount=1
I0623 20:39:34.118251 10 service.go:419] "Adding new service port" portName="conntrack-4900/svc-udp:udp" servicePort="100.65.56.191:80/UDP"
I0623 20:39:34.118280 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:34.153131 10 proxier.go:794] "SyncProxyRules complete" elapsed="34.598309ms"
I0623 20:39:34.153363 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:34.192028 10 proxier.go:794] "SyncProxyRules complete" elapsed="38.85084ms"
I0623 20:39:35.826077 10 service.go:304] "Service updated ports" service="services-1829/up-down-3" portCount=1
I0623 20:39:35.826129 10 service.go:419] "Adding new service port" portName="services-1829/up-down-3" servicePort="100.66.44.56:80/TCP"
I0623 20:39:35.826172 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:35.912058 10 proxier.go:794] "SyncProxyRules complete" elapsed="85.922846ms"
I0623 20:39:36.912931 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:36.982711 10 proxier.go:794] "SyncProxyRules complete" elapsed="69.830574ms"
I0623 20:39:38.407092 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:38.458792 10 proxier.go:794] "SyncProxyRules complete" elapsed="51.746232ms"
I0623 20:39:38.744475 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:38.788105 10 proxier.go:794] "SyncProxyRules complete" elapsed="43.69372ms"
I0623 20:39:42.749405 10 proxier.go:811] "Stale service" protocol="udp" servicePortName="conntrack-4900/svc-udp:udp" clusterIP="100.65.56.191"
I0623 20:39:42.749436 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:42.787070 10 proxier.go:794] "SyncProxyRules complete" elapsed="37.766262ms"
I0623 20:39:43.950403 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:44.009429 10 proxier.go:794] "SyncProxyRules complete" elapsed="59.073827ms"
I0623 20:39:47.573374 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:47.666408 10 proxier.go:794] "SyncProxyRules complete" elapsed="93.091934ms"
I0623 20:39:47.666496 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:47.715483 10 service.go:304] "Service updated ports" service="conntrack-6630/svc-udp" portCount=0
I0623 20:39:47.720181 10 proxier.go:794] "SyncProxyRules complete" elapsed="53.732181ms"
I0623 20:39:48.720540 10 service.go:444] "Removing service port" portName="conntrack-6630/svc-udp:udp"
I0623 20:39:48.720614 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:48.760445 10 proxier.go:794] "SyncProxyRules complete" elapsed="39.927045ms"
I0623 20:39:49.862820 10 service.go:304] "Service updated ports" service="webhook-493/e2e-test-webhook" portCount=1
I0623 20:39:49.862871 10 service.go:419] "Adding new service port" portName="webhook-493/e2e-test-webhook" servicePort="100.70.159.91:8443/TCP"
I0623 20:39:49.862934 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:50.015133 10 proxier.go:794] "SyncProxyRules complete" elapsed="152.255762ms"
I0623 20:39:51.018708 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:51.094865 10 proxier.go:794] "SyncProxyRules complete" elapsed="76.236948ms"
I0623 20:39:55.475918 10 service.go:304] "Service updated ports" service="webhook-493/e2e-test-webhook" portCount=0
I0623 20:39:55.475954 10 service.go:444] "Removing service port" portName="webhook-493/e2e-test-webhook"
I0623 20:39:55.476001 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:55.493987 10 service.go:304] "Service updated ports" service="conntrack-6602/boom-server" portCount=1
I0623 20:39:55.511403 10 proxier.go:794] "SyncProxyRules complete" elapsed="35.442679ms"
I0623 20:39:55.511470 10 service.go:419] "Adding new service port" portName="conntrack-6602/boom-server" servicePort="100.68.150.93:9000/TCP"
I0623 20:39:55.511709 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:55.547356 10 proxier.go:794] "SyncProxyRules complete" elapsed="35.915449ms"
I0623 20:39:56.556683 10 proxier.go:827] "Syncing iptables rules"
I0623 20:39:56.631565 10 proxier.go:794] "SyncProxyRules complete" elapsed="74.975967ms"
I0623 20:40:02.303266 10 proxier.go:827] "Syncing iptables rules"
I0623 20:40:02.431138 10 proxier.go:794] "SyncProxyRules complete" elapsed="127.987437ms"
I0623 20:40:02.431238 10 proxier.go:827] "Syncing iptables rules"
I0623 20:40:02.591638 10 proxier.go:794] "SyncProxyRules complete" elapsed="160.454609ms"
I0623 20:40:03.922025 10 proxier.go:827] "Syncing iptables rules"
I0623 20:40:03.966573 10 proxier.go:794] "SyncProxyRules complete" elapsed="44.56139ms"
I0623 20:40:04.600012 10 proxier.go:827] "Syncing iptables rules"
I0623 20:40:04.645169 10 proxier.go:794] "SyncProxyRules complete" elapsed="45.239223ms"
I0623 20:40:05.639465 10 proxier.go:827] "Syncing iptables rules"
I0623 20:40:05.674188 10 proxier.go:794] "SyncProxyRules complete" elapsed="34.745274ms"
I0623 20:40:05.988351 10 service.go:304] "Service updated ports" service="services-3955/affinity-nodeport-timeout" portCount=0
I0623 20:40:06.674612 10 service.go:444] "Removing service port" portName="services-3955/affinity-nodeport-timeout"
I0623 20:40:06.674684 10 proxier.go:827] "Syncing iptables rules"
I0623 20:40:06.706937 10 proxier.go:794] "SyncProxyRules complete" elapsed="32.337359ms"
I0623 20:40:07.385238 10 proxier.go:827] "Syncing iptables rules"
I0623 20:40:07.436260 10 proxier.go:794] "SyncProxyRules complete" elapsed="51.077039ms"
I0623 20:40:07.499251 10 service.go:304] "Service updated ports" service="conntrack-4900/svc-udp" portCount=0
I0623 20:40:08.436440 10 service.go:444] "Removing service port" portName="conntrack-4900/svc-udp:udp"
I0623 20:40:08.436491 10 proxier.go:827] "Syncing iptables rules"
I0623 20:40:08.471805 10 proxier.go:794] "SyncProxyRules complete" elapsed="35.40857ms"
I0623 20:40:19.526884 10 proxier.go:827] "Syncing iptables rules"
I0623 20:40:19.577768 10 proxier.go:794] "SyncProxyRules complete" elapsed="50.945516ms"
I0623 20:40:19.577844 10 proxier.go:827] "Syncing iptables rules"
I0623 20:40:19.615634 10 proxier.go:794] "SyncProxyRules complete" elapsed="37.825845ms"
I0623 20:40:19.875129 10 service.go:304] "Service updated ports" service="services-1829/up-down-2" portCount=0
I0623 20:40:19.911893 10 service.go:304] "Service updated ports" service="services-1829/up-down-3" portCount=0
I0623 20:40:20.615861 10 service.go:444] "Removing service port" portName="services-1829/up-down-2"
I0623 20:40:20.615888 10 service.go:444] "Removing service port" portName="services-1829/up-down-3"
I0623 20:40:20.615980 10 proxier.go:827] "Syncing iptables rules"
I0623 20:40:20.662279 10 proxier.go:794] "SyncProxyRules complete" elapsed="46.434611ms"
I0623 20:40:45.088686 10 service.go:304] "Service updated ports" service="services-3910/nodeport-collision-1" portCount=1
I0623 20:40:45.088736 10 service.go:419] "Adding new service port" portName="services-3910/nodeport-collision-1" servicePort="100.65.103.29:80/TCP"
I0623 20:40:45.088779 10 proxier.go:827] "Syncing iptables rules"
I0623 20:40:45.123407 10 proxier.go:1604] "Opened local port" port={Description:nodePort for services-3910/nodeport-collision-1 IP: IPFamily:4 Port:30429 Protocol:TCP}
E0623 20:40:45.123749 10 proxier.go:1600] "can't open port, skipping it" err="listen tcp4 :30429: bind: address already in use" port={Description:nodePort for services-3910/nodeport-collision-1 IP: IPFamily:4 Port:30429 Protocol:TCP}
I0623 20:40:45.128901 10 proxier.go:794] "SyncProxyRules complete" elapsed="40.167778ms"
I0623 20:40:45.129136 10 proxier.go:827] "Syncing iptables rules"
I0623 20:40:45.164920 10 proxier.go:794] "SyncProxyRules complete" elapsed="35.845236ms"
I0623 20:40:45.310327 10 service.go:304] "Service updated ports" service="services-3910/nodeport-collision-1" portCount=0
I0623 20:40:45.439712 10 service.go:304] "Service updated ports" service="services-3910/nodeport-collision-2" portCount=1
I0623 20:40:46.165540 10 service.go:444] "Removing service port" portName="services-3910/nodeport-collision-1"
I0623 20:40:46.165616 10 proxier.go:827] "Syncing iptables rules"
I0623 20:40:46.206988 10 proxier.go:794] "SyncProxyRules complete" elapsed="41.466715ms"
I0623 20:40:47.985595 10 service.go:304] "Service updated ports" service="webhook-6221/e2e-test-webhook" portCount=1
I0623 20:40:47.985643 10 service.go:419] "Adding new service port" portName="webhook-6221/e2e-test-webhook" servicePort="100.67.43.131:8443/TCP"
I0623 20:40:47.985684 10 proxier.go:827] "Syncing iptables rules"
I0623 20:40:48.026506 10 proxier.go:794] "SyncProxyRules complete" elapsed="40.867189ms"
I0623 20:40:49.027227 10 proxier.go:827] "Syncing iptables rules"
I0623 20:40:49.058440 10 proxier.go:794] "SyncProxyRules complete" elapsed="31.269295ms"
I0623 20:40:50.187267 10 service.go:304] "Service updated ports" service="webhook-6221/e2e-test-webhook" portCount=0
I0623 20:40:50.187296 10 service.go:444] "Removing service port" portName="webhook-6221/e2e-test-webhook"
I0623 20:40:50.187328 10 proxier.go:827] "Syncing iptables rules"
I0623 20:40:50.220017 10 proxier.go:794] "SyncProxyRules complete" elapsed="32.712817ms"
I0623 20:40:50.223840 10 proxier.go:827] "Syncing iptables rules"
I0623 20:40:50.259367 10 proxier.go:794] "SyncProxyRules complete" elapsed="35.55268ms"
I0623 20:40:50.799013 10 endpoints.go:276] "Error getting endpoint slice cache keys" err="no kubernetes.io/service-name label set on endpoint slice: e2e-example-ingqp5jc"
I0623 20:40:50.908992 10 endpoints.go:276] "Error getting endpoint slice cache keys" err="no kubernetes.io/service-name label set on endpoint slice: e2e-example-ingzjz9t"
I0623 20:40:51.012905 10 endpoints.go:276] "Error getting endpoint slice cache keys" err="no kubernetes.io/service-name label set on endpoint slice: e2e-example-ingq7j7f"
I0623 20:40:51.649631 10 endpoints.go:276] "Error getting endpoint slice cache keys" err="no kubernetes.io/service-name label set on endpoint slice: e2e-example-ingq7j7f"
I0623 20:40:51.862330 10 endpoints.go:276] "Error getting endpoint slice cache keys" err="no kubernetes.io/service-name label set on endpoint slice: e2e-example-ingq7j7f"
I0623 20:40:51.969729 10 endpoints.go:276] "Error getting endpoint slice cache keys" err="no kubernetes.io/service-name label set on endpoint slice: e2e-example-ingq7j7f"
I0623 20:40:52.291359 10 endpoints.go:276] "Error getting endpoint slice cache keys" err="no kubernetes.io/service-name label set on endpoint slice: e2e-example-ingqp5jc"
I0623 20:40:52.303388 10 endpoints.go:276] "Error getting endpoint slice cache keys" err="no kubernetes.io/service-name label set on endpoint slice: e2e-example-ingzjz9t"
I0623 20:40:58.870702 10 service.go:304] "Service updated ports" service="deployment-7705/test-rolling-update-with-lb" portCount=1
I0623 20:40:58.870836 10 service.go:419] "Adding new service port" portName="deployment-7705/test-rolling-update-with-lb" servicePort="100.65.65.99:80/TCP"
I0623 20:40:58.870889 10 proxier.go:827] "Syncing iptables rules"
I0623 20:40:58.905453 10 proxier.go:1604] "Opened local port" port={Description:nodePort for deployment-7705/test-rolling-update-with-lb IP: IPFamily:4 Port:32298 Protocol:TCP}
E0623 20:40:58.905526 10 proxier.go:1600] "can't open port, skipping it" err="listen tcp4 :32298: bind: address already in use" port={Description:nodePort for deployment-7705/test-rolling-update-with-lb IP: IPFamily:4 Port:32298 Protocol:TCP}
I0623 20:40:58.913220 10 service_health.go:124] "Opening healthcheck" service="deployment-7705/test-rolling-update-with-lb" port=32659
I0623 20:40:58.913304 10 proxier.go:794] "SyncProxyRules complete" elapsed="42.555289ms"
I0623 20:40:58.913389 10 proxier.go:827] "Syncing iptables rules"
I0623 20:40:58.959960 10 proxier.go:794] "SyncProxyRules complete" elapsed="46.625494ms"
I0623 20:41:03.645684 10 proxier.go:827] "Syncing iptables rules"
I0623 20:41:03.658916 10 service.go:304] "Service updated ports" service="conntrack-6602/boom-server" portCount=0
I0623 20:41:03.695121 10 proxier.go:794] "SyncProxyRules complete" elapsed="49.478096ms"
I0623 20:41:03.695167 10 service.go:444] "Removing service port" portName="conntrack-6602/boom-server"
I0623 20:41:03.695228 10 proxier.go:827] "Syncing iptables rules"
I0623 20:41:03.740155 10 proxier.go:794] "SyncProxyRules complete" elapsed="44.988563ms"
I0623 20:41:27.829400 10 service.go:304] "Service updated ports" service="services-9027/externalname-service" portCount=1
I0623 20:41:27.829450 10 service.go:419] "Adding new service port" portName="services-9027/externalname-service:http" servicePort="100.69.249.193:80/TCP"
I0623 20:41:27.829490 10 proxier.go:827] "Syncing iptables rules"
I0623 20:41:27.855077 10 proxier.go:1604] "Opened local port" port={Description:nodePort for services-9027/externalname-service:http IP: IPFamily:4 Port:31424 Protocol:TCP}
E0623 20:41:27.855470 10 proxier.go:1600] "can't open port, skipping it" err="listen tcp4 :31424: bind: address already in use" port={Description:nodePort for services-9027/externalname-service:http IP: IPFamily:4 Port:31424 Protocol:TCP}
I0623 20:41:27.859926 10 proxier.go:794] "SyncProxyRules complete" elapsed="30.485954ms"
I0623 20:41:27.860084 10 proxier.go:827] "Syncing iptables rules"
I0623 20:41:27.896696 10 proxier.go:794] "SyncProxyRules complete" elapsed="36.735074ms"
I0623 20:41:35.094885 10 proxier.go:827] "Syncing iptables rules"
I0623 20:41:35.145627 10 proxier.go:794] "SyncProxyRules complete" elapsed="50.715819ms"
I0623 20:41:36.301972 10 proxier.go:827] "Syncing iptables rules"
I0623 20:41:36.339268 10 proxier.go:794] "SyncProxyRules complete" elapsed="37.333506ms"
I0623 20:41:36.686209 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-zzc6j" portCount=1
I0623 20:41:36.686269 10 
service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-zzc6j\" servicePort=\"100.64.57.153:80/TCP\"\nI0623 20:41:36.686314 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:41:36.737647 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"51.375474ms\"\nI0623 20:41:36.864479 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-dckgs\" portCount=1\nI0623 20:41:36.872205 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-vzz8n\" portCount=1\nI0623 20:41:36.883803 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-vs4f5\" portCount=1\nI0623 20:41:36.901803 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-hs667\" portCount=1\nI0623 20:41:36.907359 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-h9bxw\" portCount=1\nI0623 20:41:36.959310 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-6nldp\" portCount=1\nI0623 20:41:36.968227 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-gfwdg\" portCount=1\nI0623 20:41:36.986772 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-7z96f\" portCount=1\nI0623 20:41:36.998229 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-sf9qg\" portCount=1\nI0623 20:41:37.006903 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-6hs86\" portCount=1\nI0623 20:41:37.019290 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-jq6fx\" portCount=1\nI0623 20:41:37.026461 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-bjxxw\" portCount=1\nI0623 20:41:37.040677 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-ffmsr\" portCount=1\nI0623 20:41:37.045426 10 service.go:304] \"Service 
updated ports\" service=\"svc-latency-7847/latency-svc-w4wjs\" portCount=1\nI0623 20:41:37.054524 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-fwx9m\" portCount=1\nI0623 20:41:37.066504 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-s2qn6\" portCount=1\nI0623 20:41:37.078195 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-fp6zr\" portCount=1\nI0623 20:41:37.083669 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-mfb49\" portCount=1\nI0623 20:41:37.094879 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-l9jpp\" portCount=1\nI0623 20:41:37.109472 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-86nmm\" portCount=1\nI0623 20:41:37.116858 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-r79hp\" portCount=1\nI0623 20:41:37.131482 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-2x8xk\" portCount=1\nI0623 20:41:37.139348 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-s829t\" portCount=1\nI0623 20:41:37.148053 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-vgk7t\" portCount=1\nI0623 20:41:37.160324 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-d57tn\" portCount=1\nI0623 20:41:37.170738 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-zxzjz\" portCount=1\nI0623 20:41:37.185225 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-dlbmc\" portCount=1\nI0623 20:41:37.200859 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-6dl9n\" portCount=1\nI0623 20:41:37.231209 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-2nxjp\" portCount=1\nI0623 
20:41:37.234249 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-kt757\" portCount=1\nI0623 20:41:37.312699 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-vjwc8\" portCount=1\nI0623 20:41:37.312841 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-hs667\" servicePort=\"100.64.171.16:80/TCP\"\nI0623 20:41:37.312912 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-6nldp\" servicePort=\"100.66.83.83:80/TCP\"\nI0623 20:41:37.312926 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-fwx9m\" servicePort=\"100.69.130.77:80/TCP\"\nI0623 20:41:37.312940 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-d57tn\" servicePort=\"100.64.107.147:80/TCP\"\nI0623 20:41:37.313082 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-vjwc8\" servicePort=\"100.71.24.122:80/TCP\"\nI0623 20:41:37.313136 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-dlbmc\" servicePort=\"100.69.130.185:80/TCP\"\nI0623 20:41:37.313229 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-gfwdg\" servicePort=\"100.64.29.104:80/TCP\"\nI0623 20:41:37.313244 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-6hs86\" servicePort=\"100.68.103.118:80/TCP\"\nI0623 20:41:37.313306 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-s2qn6\" servicePort=\"100.66.41.101:80/TCP\"\nI0623 20:41:37.313322 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-l9jpp\" servicePort=\"100.69.249.116:80/TCP\"\nI0623 20:41:37.313334 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-86nmm\" servicePort=\"100.70.92.49:80/TCP\"\nI0623 20:41:37.313390 10 service.go:419] 
\"Adding new service port\" portName=\"svc-latency-7847/latency-svc-zxzjz\" servicePort=\"100.70.53.136:80/TCP\"\nI0623 20:41:37.313595 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-fp6zr\" servicePort=\"100.66.60.187:80/TCP\"\nI0623 20:41:37.313647 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-s829t\" servicePort=\"100.69.207.231:80/TCP\"\nI0623 20:41:37.313748 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-2nxjp\" servicePort=\"100.67.162.220:80/TCP\"\nI0623 20:41:37.314031 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-vzz8n\" servicePort=\"100.65.88.14:80/TCP\"\nI0623 20:41:37.314054 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-h9bxw\" servicePort=\"100.68.103.117:80/TCP\"\nI0623 20:41:37.314067 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-vgk7t\" servicePort=\"100.64.138.93:80/TCP\"\nI0623 20:41:37.314137 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-7z96f\" servicePort=\"100.66.165.104:80/TCP\"\nI0623 20:41:37.314191 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-sf9qg\" servicePort=\"100.70.144.222:80/TCP\"\nI0623 20:41:37.314219 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-jq6fx\" servicePort=\"100.68.67.4:80/TCP\"\nI0623 20:41:37.314232 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-bjxxw\" servicePort=\"100.70.143.244:80/TCP\"\nI0623 20:41:37.314301 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-w4wjs\" servicePort=\"100.71.97.46:80/TCP\"\nI0623 20:41:37.314315 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-6dl9n\" servicePort=\"100.68.68.170:80/TCP\"\nI0623 20:41:37.314409 
10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-kt757\" servicePort=\"100.70.55.115:80/TCP\"\nI0623 20:41:37.314464 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-dckgs\" servicePort=\"100.66.17.128:80/TCP\"\nI0623 20:41:37.314500 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-vs4f5\" servicePort=\"100.69.87.107:80/TCP\"\nI0623 20:41:37.314572 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-ffmsr\" servicePort=\"100.64.252.86:80/TCP\"\nI0623 20:41:37.314640 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-mfb49\" servicePort=\"100.65.106.42:80/TCP\"\nI0623 20:41:37.314657 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-r79hp\" servicePort=\"100.68.29.96:80/TCP\"\nI0623 20:41:37.314806 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-2x8xk\" servicePort=\"100.68.125.212:80/TCP\"\nI0623 20:41:37.315143 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:41:37.328933 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-gl82d\" portCount=1\nI0623 20:41:37.335530 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-5wgst\" portCount=1\nI0623 20:41:37.341536 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-2r47p\" portCount=1\nI0623 20:41:37.347538 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-q8bcw\" portCount=1\nI0623 20:41:37.356495 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"43.702806ms\"\nI0623 20:41:37.359073 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-q2gmm\" portCount=1\nI0623 20:41:37.375049 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-pfdnh\" portCount=1\nI0623 
20:41:37.380867 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-kb265\" portCount=1\nI0623 20:41:37.386368 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-bg5xf\" portCount=1\nI0623 20:41:37.387234 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-bhhr2\" portCount=1\nI0623 20:41:37.405132 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-9p68w\" portCount=1\nI0623 20:41:37.407615 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-nvpl7\" portCount=1\nI0623 20:41:37.416616 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-rkv78\" portCount=1\nI0623 20:41:37.420308 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-c6jfc\" portCount=1\nI0623 20:41:37.422218 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-p2q78\" portCount=1\nI0623 20:41:37.436190 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-rmv65\" portCount=1\nI0623 20:41:37.455920 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-nx5js\" portCount=1\nI0623 20:41:37.470974 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-229xf\" portCount=1\nI0623 20:41:37.477391 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-t2s25\" portCount=1\nI0623 20:41:37.483907 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-5x85d\" portCount=1\nI0623 20:41:37.490012 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-nkr6p\" portCount=1\nI0623 20:41:37.513042 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-nztnh\" portCount=1\nI0623 20:41:37.525781 10 service.go:304] \"Service updated ports\" 
service=\"svc-latency-7847/latency-svc-h8blk\" portCount=1\nI0623 20:41:37.538458 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-pbzq9\" portCount=1\nI0623 20:41:37.544090 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-4k6ch\" portCount=1\nI0623 20:41:37.551892 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-btt9z\" portCount=1\nI0623 20:41:37.590826 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-hhb55\" portCount=1\nI0623 20:41:37.634650 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-st8bv\" portCount=1\nI0623 20:41:37.684579 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-kv8gl\" portCount=1\nI0623 20:41:37.730654 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-9nmbz\" portCount=1\nI0623 20:41:37.782791 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-qk7cp\" portCount=1\nI0623 20:41:37.830790 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-w744x\" portCount=1\nI0623 20:41:37.893795 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-w5x7l\" portCount=1\nI0623 20:41:37.927368 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-h7qwf\" portCount=1\nI0623 20:41:37.982138 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-rzgsp\" portCount=1\nI0623 20:41:38.031770 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-c4nd4\" portCount=1\nI0623 20:41:38.081146 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-flp5b\" portCount=1\nI0623 20:41:38.130057 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-8xzjg\" portCount=1\nI0623 20:41:38.179505 10 
service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-br84t\" portCount=1\nI0623 20:41:38.226723 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-xsldq\" portCount=1\nI0623 20:41:38.281929 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-6n5dd\" portCount=1\nI0623 20:41:38.319130 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-rzgsp\" servicePort=\"100.69.69.92:80/TCP\"\nI0623 20:41:38.319466 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-8xzjg\" servicePort=\"100.70.150.140:80/TCP\"\nI0623 20:41:38.319630 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-kb265\" servicePort=\"100.68.25.246:80/TCP\"\nI0623 20:41:38.319764 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-229xf\" servicePort=\"100.71.33.239:80/TCP\"\nI0623 20:41:38.319857 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-4k6ch\" servicePort=\"100.70.181.26:80/TCP\"\nI0623 20:41:38.319965 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-kv8gl\" servicePort=\"100.67.91.51:80/TCP\"\nI0623 20:41:38.319983 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-bg5xf\" servicePort=\"100.71.219.42:80/TCP\"\nI0623 20:41:38.320006 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-5x85d\" servicePort=\"100.66.132.252:80/TCP\"\nI0623 20:41:38.320025 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-xsldq\" servicePort=\"100.65.223.177:80/TCP\"\nI0623 20:41:38.320123 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-gl82d\" servicePort=\"100.65.51.184:80/TCP\"\nI0623 20:41:38.320142 10 service.go:419] \"Adding new service port\" 
portName=\"svc-latency-7847/latency-svc-p2q78\" servicePort=\"100.68.248.111:80/TCP\"\nI0623 20:41:38.320157 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-nztnh\" servicePort=\"100.70.130.238:80/TCP\"\nI0623 20:41:38.320169 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-h8blk\" servicePort=\"100.64.198.101:80/TCP\"\nI0623 20:41:38.320183 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-flp5b\" servicePort=\"100.66.18.253:80/TCP\"\nI0623 20:41:38.320288 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-q2gmm\" servicePort=\"100.69.59.83:80/TCP\"\nI0623 20:41:38.320389 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-bhhr2\" servicePort=\"100.67.135.174:80/TCP\"\nI0623 20:41:38.320772 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-t2s25\" servicePort=\"100.69.161.2:80/TCP\"\nI0623 20:41:38.320799 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-nkr6p\" servicePort=\"100.65.237.250:80/TCP\"\nI0623 20:41:38.320813 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-9nmbz\" servicePort=\"100.70.75.249:80/TCP\"\nI0623 20:41:38.320920 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-qk7cp\" servicePort=\"100.71.182.181:80/TCP\"\nI0623 20:41:38.320933 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-h7qwf\" servicePort=\"100.68.158.28:80/TCP\"\nI0623 20:41:38.320948 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-c4nd4\" servicePort=\"100.66.151.121:80/TCP\"\nI0623 20:41:38.321065 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-rkv78\" servicePort=\"100.67.47.142:80/TCP\"\nI0623 20:41:38.321087 10 service.go:419] 
\"Adding new service port\" portName=\"svc-latency-7847/latency-svc-nx5js\" servicePort=\"100.68.253.4:80/TCP\"\nI0623 20:41:38.321100 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-hhb55\" servicePort=\"100.64.125.21:80/TCP\"\nI0623 20:41:38.321114 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-st8bv\" servicePort=\"100.66.173.32:80/TCP\"\nI0623 20:41:38.321163 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-br84t\" servicePort=\"100.71.230.51:80/TCP\"\nI0623 20:41:38.321204 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-nvpl7\" servicePort=\"100.66.41.128:80/TCP\"\nI0623 20:41:38.321219 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-btt9z\" servicePort=\"100.69.197.223:80/TCP\"\nI0623 20:41:38.321281 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-w744x\" servicePort=\"100.66.186.186:80/TCP\"\nI0623 20:41:38.321301 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-w5x7l\" servicePort=\"100.70.14.74:80/TCP\"\nI0623 20:41:38.321315 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-2r47p\" servicePort=\"100.66.194.151:80/TCP\"\nI0623 20:41:38.321327 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-q8bcw\" servicePort=\"100.69.229.251:80/TCP\"\nI0623 20:41:38.321340 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-pfdnh\" servicePort=\"100.65.187.254:80/TCP\"\nI0623 20:41:38.321352 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-rmv65\" servicePort=\"100.70.83.218:80/TCP\"\nI0623 20:41:38.321462 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-6n5dd\" servicePort=\"100.67.151.34:80/TCP\"\nI0623 20:41:38.321582 
10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-5wgst\" servicePort=\"100.71.180.107:80/TCP\"\nI0623 20:41:38.321607 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-9p68w\" servicePort=\"100.66.47.111:80/TCP\"\nI0623 20:41:38.321620 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-c6jfc\" servicePort=\"100.68.175.149:80/TCP\"\nI0623 20:41:38.321635 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-pbzq9\" servicePort=\"100.65.39.85:80/TCP\"\nI0623 20:41:38.321989 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:41:38.331324 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-vsvrd\" portCount=1\nI0623 20:41:38.362649 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"43.532872ms\"\nI0623 20:41:38.382366 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-vkxxx\" portCount=1\nI0623 20:41:38.431374 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-rkllg\" portCount=1\nI0623 20:41:38.485320 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-b22s4\" portCount=1\nI0623 20:41:38.529031 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-4v8ld\" portCount=1\nI0623 20:41:38.584470 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-v7w9j\" portCount=1\nI0623 20:41:38.629572 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-58rk5\" portCount=1\nI0623 20:41:38.682925 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-qhdm8\" portCount=1\nI0623 20:41:38.729170 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-c4dsp\" portCount=1\nI0623 20:41:38.779968 10 service.go:304] \"Service updated ports\" 
service=\"svc-latency-7847/latency-svc-5g2ql\" portCount=1\nI0623 20:41:38.842583 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-fbphr\" portCount=1\nI0623 20:41:38.885478 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-q8qg6\" portCount=1\nI0623 20:41:38.930149 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-8kd85\" portCount=1\nI0623 20:41:38.979982 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-w2xwx\" portCount=1\nI0623 20:41:39.042992 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-l45zw\" portCount=1\nI0623 20:41:39.081659 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-k8prz\" portCount=1\nI0623 20:41:39.133680 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-gbd75\" portCount=1\nI0623 20:41:39.153139 10 service.go:304] \"Service updated ports\" service=\"webhook-5124/e2e-test-webhook\" portCount=1\nI0623 20:41:39.178868 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-fs7v9\" portCount=1\nI0623 20:41:39.228570 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-rgkxh\" portCount=1\nI0623 20:41:39.283006 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-fg4bp\" portCount=1\nI0623 20:41:39.321534 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-vkxxx\" servicePort=\"100.66.73.30:80/TCP\"\nI0623 20:41:39.321558 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-4v8ld\" servicePort=\"100.68.172.178:80/TCP\"\nI0623 20:41:39.321567 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-k8prz\" servicePort=\"100.69.182.161:80/TCP\"\nI0623 20:41:39.321581 10 service.go:419] \"Adding new service port\" 
portName=\"svc-latency-7847/latency-svc-gbd75\" servicePort=\"100.70.87.44:80/TCP\"\nI0623 20:41:39.321594 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-l45zw\" servicePort=\"100.67.138.4:80/TCP\"\nI0623 20:41:39.321604 10 service.go:419] \"Adding new service port\" portName=\"webhook-5124/e2e-test-webhook\" servicePort=\"100.68.71.95:8443/TCP\"\nI0623 20:41:39.321611 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-fs7v9\" servicePort=\"100.70.218.176:80/TCP\"\nI0623 20:41:39.321619 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-vsvrd\" servicePort=\"100.64.212.63:80/TCP\"\nI0623 20:41:39.321627 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-58rk5\" servicePort=\"100.70.128.84:80/TCP\"\nI0623 20:41:39.321635 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-qhdm8\" servicePort=\"100.70.241.233:80/TCP\"\nI0623 20:41:39.321643 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-5g2ql\" servicePort=\"100.64.201.119:80/TCP\"\nI0623 20:41:39.321651 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-8kd85\" servicePort=\"100.71.111.210:80/TCP\"\nI0623 20:41:39.321664 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-fg4bp\" servicePort=\"100.69.117.194:80/TCP\"\nI0623 20:41:39.321675 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-rkllg\" servicePort=\"100.65.189.196:80/TCP\"\nI0623 20:41:39.321683 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-b22s4\" servicePort=\"100.64.4.34:80/TCP\"\nI0623 20:41:39.321690 10 service.go:419] \"Adding new service port\" portName=\"svc-latency-7847/latency-svc-v7w9j\" servicePort=\"100.68.196.30:80/TCP\"\nI0623 20:41:39.321697 10 service.go:419] \"Adding new 
service port" portName="svc-latency-7847/latency-svc-rgkxh" servicePort="100.64.101.52:80/TCP"
I0623 20:41:39.321737 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-c4dsp" servicePort="100.67.172.75:80/TCP"
I0623 20:41:39.321748 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-fbphr" servicePort="100.65.114.33:80/TCP"
I0623 20:41:39.321779 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-q8qg6" servicePort="100.64.18.92:80/TCP"
I0623 20:41:39.321789 10 service.go:419] "Adding new service port" portName="svc-latency-7847/latency-svc-w2xwx" servicePort="100.68.221.231:80/TCP"
I0623 20:41:39.322649 10 proxier.go:827] "Syncing iptables rules"
I0623 20:41:39.331990 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-kdkdt" portCount=1
I0623 20:41:39.370391 10 proxier.go:794] "SyncProxyRules complete" elapsed="48.877223ms"
... skipping repeated "Service updated ports" portCount=1 and "Adding new service port" lines for svc-latency-7847 services, interleaved with periodic "Syncing iptables rules" / "SyncProxyRules complete" pairs (elapsed 48-71ms) ...
I0623 20:41:41.809287 10 service.go:304] "Service updated ports" service="webhook-5124/e2e-test-webhook" portCount=0
I0623 20:41:42.308429 10 service.go:444] "Removing service port" portName="webhook-5124/e2e-test-webhook"
... skipping repeated "Service updated ports" portCount=1 and "Adding new service port" lines for svc-latency-7847 services ...
I0623 20:41:45.333321 10 proxier.go:827] "Syncing iptables rules"
I0623 20:41:45.408812 10 proxier.go:794] "SyncProxyRules complete" elapsed="77.483335ms"
I0623 20:41:46.409211 10 proxier.go:827] "Syncing iptables rules"
I0623 20:41:46.490178 10 proxier.go:794] "SyncProxyRules complete" elapsed="81.198911ms"
I0623 20:41:51.270122 10 service.go:304] "Service updated ports" service="svc-latency-7847/latency-svc-229xf" portCount=0
I0623 20:41:51.270151 10 service.go:444] "Removing service port" portName="svc-latency-7847/latency-svc-229xf"
... skipping repeated "Service updated ports" portCount=0 and "Removing service port" lines as the svc-latency-7847 services are torn down ...
I0623 20:41:51.807904 10 service.go:304] "Service updated ports" service="services-9027/externalname-service" portCount=0
... skipping repeated "Service updated ports" portCount=0 lines for svc-latency-7847 services ...
I0623 20:41:52.114376 10 service.go:304] "Service updated ports"
service=\"svc-latency-7847/latency-svc-btt9z\" portCount=0\nI0623 20:41:52.125998 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-btxfr\" portCount=0\nI0623 20:41:52.132206 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-c4dsp\" portCount=0\nI0623 20:41:52.141460 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-c4nd4\" portCount=0\nI0623 20:41:52.151906 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-c6jfc\" portCount=0\nI0623 20:41:52.160437 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-cd8jw\" portCount=0\nI0623 20:41:52.170310 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-cnbs9\" portCount=0\nI0623 20:41:52.179419 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-d2cd9\" portCount=0\nI0623 20:41:52.201393 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-d4g7l\" portCount=0\nI0623 20:41:52.224660 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-d57tn\" portCount=0\nI0623 20:41:52.237197 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-d7nrf\" portCount=0\nI0623 20:41:52.245940 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-dckgs\" portCount=0\nI0623 20:41:52.254255 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-dlbmc\" portCount=0\nI0623 20:41:52.265838 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-dpsr2\" portCount=0\nI0623 20:41:52.298893 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-dq7ln\" portCount=0\nI0623 20:41:52.299131 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-58rk5\"\nI0623 20:41:52.299199 10 service.go:444] 
\"Removing service port\" portName=\"svc-latency-7847/latency-svc-7q4bt\"\nI0623 20:41:52.299291 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-bhhr2\"\nI0623 20:41:52.299363 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-btt9z\"\nI0623 20:41:52.299407 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-4v8ld\"\nI0623 20:41:52.299493 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-68r9h\"\nI0623 20:41:52.299558 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-6nldp\"\nI0623 20:41:52.299609 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-8kd85\"\nI0623 20:41:52.299699 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-bg5xf\"\nI0623 20:41:52.299764 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-2r47p\"\nI0623 20:41:52.299815 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-2rc5x\"\nI0623 20:41:52.299896 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-655rm\"\nI0623 20:41:52.299973 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-9p68w\"\nI0623 20:41:52.300044 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-btxfr\"\nI0623 20:41:52.300107 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-4dzcl\"\nI0623 20:41:52.300157 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-7mg7p\"\nI0623 20:41:52.300208 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-b55fw\"\nI0623 20:41:52.300276 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-d4g7l\"\nI0623 20:41:52.300341 10 service.go:444] \"Removing service port\" 
portName=\"svc-latency-7847/latency-svc-2x8xk\"\nI0623 20:41:52.300411 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-9nmbz\"\nI0623 20:41:52.300473 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-bjxxw\"\nI0623 20:41:52.300535 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-c6jfc\"\nI0623 20:41:52.300574 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-d57tn\"\nI0623 20:41:52.300646 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-5cxjw\"\nI0623 20:41:52.300714 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-5f7mm\"\nI0623 20:41:52.300774 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-5g2ql\"\nI0623 20:41:52.300822 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-bkjq8\"\nI0623 20:41:52.300890 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-br84t\"\nI0623 20:41:52.300938 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-c4dsp\"\nI0623 20:41:52.301021 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-dlbmc\"\nI0623 20:41:52.301059 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-dq7ln\"\nI0623 20:41:52.301127 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-7j6c7\"\nI0623 20:41:52.301191 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-9t4r8\"\nI0623 20:41:52.301256 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-4k6ch\"\nI0623 20:41:52.301318 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-6sm6q\"\nI0623 20:41:52.301371 10 service.go:444] \"Removing service port\" 
portName=\"svc-latency-7847/latency-svc-9z8cc\"\nI0623 20:41:52.301455 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-bkz27\"\nI0623 20:41:52.301494 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-5x85d\"\nI0623 20:41:52.301604 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-76w87\"\nI0623 20:41:52.301668 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-7cdw7\"\nI0623 20:41:52.301715 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-b22s4\"\nI0623 20:41:52.301780 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-bg7pv\"\nI0623 20:41:52.301848 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-7jpfh\"\nI0623 20:41:52.301887 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-7xbhb\"\nI0623 20:41:52.301957 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-d2cd9\"\nI0623 20:41:52.302023 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-5mn6v\"\nI0623 20:41:52.302086 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-6hs86\"\nI0623 20:41:52.302147 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-6n5dd\"\nI0623 20:41:52.302194 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-9gmk5\"\nI0623 20:41:52.302255 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-8lvjt\"\nI0623 20:41:52.302320 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-9h6cb\"\nI0623 20:41:52.302387 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-d7nrf\"\nI0623 20:41:52.302470 10 service.go:444] \"Removing service port\" 
portName=\"services-9027/externalname-service:http\"\nI0623 20:41:52.302532 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-449g4\"\nI0623 20:41:52.302622 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-4mdhk\"\nI0623 20:41:52.302687 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-5wgst\"\nI0623 20:41:52.302775 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-5wnff\"\nI0623 20:41:52.302825 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-6dl9n\"\nI0623 20:41:52.302912 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-724w9\"\nI0623 20:41:52.302978 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-8sjjn\"\nI0623 20:41:52.303033 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-dpsr2\"\nI0623 20:41:52.303116 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-7lt5m\"\nI0623 20:41:52.303190 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-8xzjg\"\nI0623 20:41:52.303260 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-9b7g9\"\nI0623 20:41:52.303324 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-9tpqh\"\nI0623 20:41:52.303370 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-c4nd4\"\nI0623 20:41:52.303439 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-7z96f\"\nI0623 20:41:52.303497 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-86nmm\"\nI0623 20:41:52.303532 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-cd8jw\"\nI0623 20:41:52.303595 10 service.go:444] \"Removing service port\" 
portName=\"svc-latency-7847/latency-svc-cnbs9\"\nI0623 20:41:52.303715 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-dckgs\"\nI0623 20:41:52.304161 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:41:52.318099 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-dxxjb\" portCount=0\nI0623 20:41:52.330763 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-fbphr\" portCount=0\nI0623 20:41:52.341164 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-fbpwj\" portCount=0\nI0623 20:41:52.351506 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-fd6fm\" portCount=0\nI0623 20:41:52.367894 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-ffmsr\" portCount=0\nI0623 20:41:52.375509 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"76.473986ms\"\nI0623 20:41:52.378649 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-fg4bp\" portCount=0\nI0623 20:41:52.390704 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-fg9ct\" portCount=0\nI0623 20:41:52.419535 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-fjc46\" portCount=0\nI0623 20:41:52.430242 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-flp5b\" portCount=0\nI0623 20:41:52.439229 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-fp6zr\" portCount=0\nI0623 20:41:52.446983 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-fqccx\" portCount=0\nI0623 20:41:52.455276 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-fs7v9\" portCount=0\nI0623 20:41:52.465267 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-fwx9m\" portCount=0\nI0623 
20:41:52.473142 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-g4g2q\" portCount=0\nI0623 20:41:52.481357 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-g6hdw\" portCount=0\nI0623 20:41:52.490674 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-gbd75\" portCount=0\nI0623 20:41:52.499510 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-gfwdg\" portCount=0\nI0623 20:41:52.504969 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-gl82d\" portCount=0\nI0623 20:41:52.521113 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-gnw2k\" portCount=0\nI0623 20:41:52.530614 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-gwvjg\" portCount=0\nI0623 20:41:52.539032 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-h7qwf\" portCount=0\nI0623 20:41:52.548715 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-h8blk\" portCount=0\nI0623 20:41:52.556768 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-h9bxw\" portCount=0\nI0623 20:41:52.587517 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-hc45x\" portCount=0\nI0623 20:41:52.602987 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-hhb55\" portCount=0\nI0623 20:41:52.625671 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-hlvhq\" portCount=0\nI0623 20:41:52.637168 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-hs667\" portCount=0\nI0623 20:41:52.652136 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-hsjz5\" portCount=0\nI0623 20:41:52.661788 10 service.go:304] \"Service updated ports\" 
service=\"svc-latency-7847/latency-svc-hz94q\" portCount=0\nI0623 20:41:52.681049 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-jfm47\" portCount=0\nI0623 20:41:52.699956 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-jnlfc\" portCount=0\nI0623 20:41:52.717582 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-jq6fx\" portCount=0\nI0623 20:41:52.732051 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-k4qdm\" portCount=0\nI0623 20:41:52.752012 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-k8prz\" portCount=0\nI0623 20:41:52.764093 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-kb265\" portCount=0\nI0623 20:41:52.774174 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-kdkdt\" portCount=0\nI0623 20:41:52.783219 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-kkdrp\" portCount=0\nI0623 20:41:52.792536 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-kt757\" portCount=0\nI0623 20:41:52.799784 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-kv8gl\" portCount=0\nI0623 20:41:52.807485 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-l45zw\" portCount=0\nI0623 20:41:52.817530 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-l7t7s\" portCount=0\nI0623 20:41:52.825551 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-l9jpp\" portCount=0\nI0623 20:41:52.840081 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-lh245\" portCount=0\nI0623 20:41:52.850615 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-llwt4\" portCount=0\nI0623 20:41:52.860265 10 
service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-lmxll\" portCount=0\nI0623 20:41:52.870952 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-ltm5g\" portCount=0\nI0623 20:41:52.878935 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-lxlrl\" portCount=0\nI0623 20:41:52.889183 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-mfb49\" portCount=0\nI0623 20:41:52.898033 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-mfw7n\" portCount=0\nI0623 20:41:52.904790 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-mtkcd\" portCount=0\nI0623 20:41:52.913118 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-n2fqw\" portCount=0\nI0623 20:41:52.928949 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-n8l76\" portCount=0\nI0623 20:41:52.939651 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-ncx49\" portCount=0\nI0623 20:41:52.955906 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-nfbg4\" portCount=0\nI0623 20:41:52.967804 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-nkr6p\" portCount=0\nI0623 20:41:52.977642 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-nqsxc\" portCount=0\nI0623 20:41:52.985909 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-nvpl7\" portCount=0\nI0623 20:41:52.997146 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-nx5js\" portCount=0\nI0623 20:41:53.005036 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-nztnh\" portCount=0\nI0623 20:41:53.019595 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-p2fbc\" 
portCount=0\nI0623 20:41:53.030399 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-p2q78\" portCount=0\nI0623 20:41:53.042157 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-p8v6k\" portCount=0\nI0623 20:41:53.059556 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-pbx8t\" portCount=0\nI0623 20:41:53.071697 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-pbzq9\" portCount=0\nI0623 20:41:53.081765 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-pfdnh\" portCount=0\nI0623 20:41:53.121533 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-pgrl2\" portCount=0\nI0623 20:41:53.137286 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-pl4kr\" portCount=0\nI0623 20:41:53.193272 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-prqx5\" portCount=0\nI0623 20:41:53.234486 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-q2gmm\" portCount=0\nI0623 20:41:53.276706 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-q6wqb\" portCount=0\nI0623 20:41:53.278312 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-h9bxw\"\nI0623 20:41:53.278606 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-mfw7n\"\nI0623 20:41:53.278631 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-n8l76\"\nI0623 20:41:53.280757 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-fbpwj\"\nI0623 20:41:53.280776 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-hhb55\"\nI0623 20:41:53.280789 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-jq6fx\"\nI0623 
20:41:53.280817 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-ltm5g\"\nI0623 20:41:53.280829 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-nqsxc\"\nI0623 20:41:53.280839 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-ffmsr\"\nI0623 20:41:53.280849 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-hz94q\"\nI0623 20:41:53.280860 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-k4qdm\"\nI0623 20:41:53.280870 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-l7t7s\"\nI0623 20:41:53.280896 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-nx5js\"\nI0623 20:41:53.280907 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-pgrl2\"\nI0623 20:41:53.280917 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-p2q78\"\nI0623 20:41:53.280928 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-pbx8t\"\nI0623 20:41:53.280938 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-gbd75\"\nI0623 20:41:53.280947 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-hc45x\"\nI0623 20:41:53.280974 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-kkdrp\"\nI0623 20:41:53.280987 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-mfb49\"\nI0623 20:41:53.280997 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-n2fqw\"\nI0623 20:41:53.281007 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-ncx49\"\nI0623 20:41:53.281017 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-fwx9m\"\nI0623 20:41:53.281026 10 
service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-h8blk\"\nI0623 20:41:53.281051 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-kt757\"\nI0623 20:41:53.281062 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-llwt4\"\nI0623 20:41:53.281072 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-nkr6p\"\nI0623 20:41:53.281081 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-pfdnh\"\nI0623 20:41:53.281095 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-p2fbc\"\nI0623 20:41:53.281105 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-prqx5\"\nI0623 20:41:53.281131 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-kv8gl\"\nI0623 20:41:53.281144 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-l45zw\"\nI0623 20:41:53.281153 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-nztnh\"\nI0623 20:41:53.281162 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-flp5b\"\nI0623 20:41:53.281172 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-fqccx\"\nI0623 20:41:53.281182 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-nvpl7\"\nI0623 20:41:53.281208 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-pl4kr\"\nI0623 20:41:53.281218 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-fd6fm\"\nI0623 20:41:53.281228 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-fjc46\"\nI0623 20:41:53.281241 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-k8prz\"\nI0623 20:41:53.281250 10 service.go:444] \"Removing 
service port\" portName=\"svc-latency-7847/latency-svc-lmxll\"\nI0623 20:41:53.281260 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-hs667\"\nI0623 20:41:53.281286 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-jnlfc\"\nI0623 20:41:53.281296 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-kb265\"\nI0623 20:41:53.281305 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-g4g2q\"\nI0623 20:41:53.281316 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-mtkcd\"\nI0623 20:41:53.281325 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-nfbg4\"\nI0623 20:41:53.281334 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-fbphr\"\nI0623 20:41:53.281344 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-fs7v9\"\nI0623 20:41:53.281370 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-gwvjg\"\nI0623 20:41:53.281381 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-kdkdt\"\nI0623 20:41:53.281394 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-q6wqb\"\nI0623 20:41:53.281405 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-fg9ct\"\nI0623 20:41:53.281415 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-gnw2k\"\nI0623 20:41:53.281441 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-l9jpp\"\nI0623 20:41:53.281453 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-lxlrl\"\nI0623 20:41:53.281464 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-fg4bp\"\nI0623 20:41:53.281473 10 service.go:444] \"Removing service port\" 
portName=\"svc-latency-7847/latency-svc-fp6zr\"\nI0623 20:41:53.281486 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-h7qwf\"\nI0623 20:41:53.281496 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-jfm47\"\nI0623 20:41:53.281527 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-lh245\"\nI0623 20:41:53.281536 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-q2gmm\"\nI0623 20:41:53.281545 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-p8v6k\"\nI0623 20:41:53.281554 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-pbzq9\"\nI0623 20:41:53.281572 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-dxxjb\"\nI0623 20:41:53.281597 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-g6hdw\"\nI0623 20:41:53.281608 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-gfwdg\"\nI0623 20:41:53.281620 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-gl82d\"\nI0623 20:41:53.281628 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-hlvhq\"\nI0623 20:41:53.281662 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-hsjz5\"\nI0623 20:41:53.281952 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:41:53.318037 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-q8bcw\" portCount=0\nI0623 20:41:53.336383 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"58.08803ms\"\nI0623 20:41:53.348643 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-q8qg6\" portCount=0\nI0623 20:41:53.367917 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-qcxx7\" portCount=0\nI0623 
20:41:53.407537 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-qhdm8\" portCount=0\nI0623 20:41:53.442197 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-qk74w\" portCount=0\nI0623 20:41:53.482909 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-qk7cp\" portCount=0\nI0623 20:41:53.522831 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-r79hp\" portCount=0\nI0623 20:41:53.549544 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-rgkxh\" portCount=0\nI0623 20:41:53.575414 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-rkllg\" portCount=0\nI0623 20:41:53.585604 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-rkv78\" portCount=0\nI0623 20:41:53.596015 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-rmv65\" portCount=0\nI0623 20:41:53.613506 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-rnk5p\" portCount=0\nI0623 20:41:53.631739 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-rzgsp\" portCount=0\nI0623 20:41:53.686784 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-s2qn6\" portCount=0\nI0623 20:41:53.733044 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-s58f7\" portCount=0\nI0623 20:41:53.770878 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-s7xjg\" portCount=0\nI0623 20:41:53.791890 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-s829t\" portCount=0\nI0623 20:41:53.803940 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-sdjlv\" portCount=0\nI0623 20:41:53.812798 10 service.go:304] \"Service updated ports\" 
service=\"svc-latency-7847/latency-svc-sf9qg\" portCount=0\nI0623 20:41:53.825610 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-slhls\" portCount=0\nI0623 20:41:53.835740 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-sm75n\" portCount=0\nI0623 20:41:53.844968 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-smgwg\" portCount=0\nI0623 20:41:53.853380 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-st8bv\" portCount=0\nI0623 20:41:53.865514 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-t2k24\" portCount=0\nI0623 20:41:53.878955 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-t2s25\" portCount=0\nI0623 20:41:53.885895 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-tjnm7\" portCount=0\nI0623 20:41:53.897857 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-v7gjl\" portCount=0\nI0623 20:41:53.916607 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-v7w9j\" portCount=0\nI0623 20:41:53.967987 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-v8lrt\" portCount=0\nI0623 20:41:53.973529 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-v99dj\" portCount=0\nI0623 20:41:53.988139 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-v9lxk\" portCount=0\nI0623 20:41:53.997860 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-v9ttm\" portCount=0\nI0623 20:41:54.006683 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-vgk7t\" portCount=0\nI0623 20:41:54.031001 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-vjwc8\" portCount=0\nI0623 20:41:54.031158 10 
service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-vk4x6\" portCount=0\nI0623 20:41:54.046576 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-vkxxx\" portCount=0\nI0623 20:41:54.069340 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-vs4f5\" portCount=0\nI0623 20:41:54.072333 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-vsvrd\" portCount=0\nI0623 20:41:54.087205 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-vttbt\" portCount=0\nI0623 20:41:54.091119 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-vwx9t\" portCount=0\nI0623 20:41:54.145280 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-vxv66\" portCount=0\nI0623 20:41:54.169174 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-vzz8n\" portCount=0\nI0623 20:41:54.181180 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-w2xwx\" portCount=0\nI0623 20:41:54.191165 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-w4wjs\" portCount=0\nI0623 20:41:54.210937 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-w5x7l\" portCount=0\nI0623 20:41:54.222641 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-w6ttc\" portCount=0\nI0623 20:41:54.248884 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-w744x\" portCount=0\nI0623 20:41:54.271107 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-wc2kr\" portCount=0\nI0623 20:41:54.271531 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-r79hp\"\nI0623 20:41:54.271624 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-rnk5p\"\nI0623 
20:41:54.271697 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-rzgsp\"\nI0623 20:41:54.271753 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-s58f7\"\nI0623 20:41:54.271801 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-vsvrd\"\nI0623 20:41:54.271854 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-vxv66\"\nI0623 20:41:54.271882 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-w744x\"\nI0623 20:41:54.271937 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-qcxx7\"\nI0623 20:41:54.271968 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-rgkxh\"\nI0623 20:41:54.272040 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-rkv78\"\nI0623 20:41:54.272091 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-v7gjl\"\nI0623 20:41:54.272137 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-v99dj\"\nI0623 20:41:54.272190 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-vzz8n\"\nI0623 20:41:54.272220 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-w2xwx\"\nI0623 20:41:54.272294 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-qk7cp\"\nI0623 20:41:54.272344 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-s2qn6\"\nI0623 20:41:54.272388 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-s829t\"\nI0623 20:41:54.272441 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-sf9qg\"\nI0623 20:41:54.272466 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-vjwc8\"\nI0623 20:41:54.272515 10 
service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-qhdm8\"\nI0623 20:41:54.272558 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-sdjlv\"\nI0623 20:41:54.272608 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-smgwg\"\nI0623 20:41:54.272672 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-t2s25\"\nI0623 20:41:54.272802 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-tjnm7\"\nI0623 20:41:54.272855 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-sm75n\"\nI0623 20:41:54.272899 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-vk4x6\"\nI0623 20:41:54.272948 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-vkxxx\"\nI0623 20:41:54.272975 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-vs4f5\"\nI0623 20:41:54.273023 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-q8bcw\"\nI0623 20:41:54.273068 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-qk74w\"\nI0623 20:41:54.273119 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-rkllg\"\nI0623 20:41:54.273147 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-t2k24\"\nI0623 20:41:54.273198 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-v7w9j\"\nI0623 20:41:54.273223 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-v9lxk\"\nI0623 20:41:54.273271 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-q8qg6\"\nI0623 20:41:54.273301 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-rmv65\"\nI0623 20:41:54.273351 10 service.go:444] \"Removing 
service port\" portName=\"svc-latency-7847/latency-svc-s7xjg\"\nI0623 20:41:54.273395 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-slhls\"\nI0623 20:41:54.273445 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-v8lrt\"\nI0623 20:41:54.273470 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-v9ttm\"\nI0623 20:41:54.273552 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-vttbt\"\nI0623 20:41:54.273604 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-vwx9t\"\nI0623 20:41:54.273630 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-w5x7l\"\nI0623 20:41:54.273677 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-st8bv\"\nI0623 20:41:54.273703 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-vgk7t\"\nI0623 20:41:54.273752 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-w4wjs\"\nI0623 20:41:54.273828 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-w6ttc\"\nI0623 20:41:54.273855 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-wc2kr\"\nI0623 20:41:54.279234 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:41:54.282732 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-xpmwj\" portCount=0\nI0623 20:41:54.306791 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-xsldq\" portCount=0\nI0623 20:41:54.314374 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-zc6v4\" portCount=0\nI0623 20:41:54.331504 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-zdhh9\" portCount=0\nI0623 20:41:54.357304 10 service.go:304] \"Service updated ports\" 
service=\"svc-latency-7847/latency-svc-znv6s\" portCount=0\nI0623 20:41:54.382795 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-zx52k\" portCount=0\nI0623 20:41:54.400706 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-zxzjz\" portCount=0\nI0623 20:41:54.407809 10 service.go:304] \"Service updated ports\" service=\"svc-latency-7847/latency-svc-zzc6j\" portCount=0\nI0623 20:41:54.526659 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"255.124565ms\"\nI0623 20:41:55.527245 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-znv6s\"\nI0623 20:41:55.527276 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-zx52k\"\nI0623 20:41:55.527288 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-zxzjz\"\nI0623 20:41:55.527298 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-zzc6j\"\nI0623 20:41:55.527307 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-xpmwj\"\nI0623 20:41:55.527316 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-xsldq\"\nI0623 20:41:55.527326 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-zc6v4\"\nI0623 20:41:55.527335 10 service.go:444] \"Removing service port\" portName=\"svc-latency-7847/latency-svc-zdhh9\"\nI0623 20:41:55.528286 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:41:55.695226 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"168.020074ms\"\nI0623 20:42:04.983433 10 service.go:304] \"Service updated ports\" service=\"dns-4608/dns-test-service-3\" portCount=1\nI0623 20:42:04.983479 10 service.go:419] \"Adding new service port\" portName=\"dns-4608/dns-test-service-3:http\" servicePort=\"100.68.194.253:80/TCP\"\nI0623 20:42:04.983611 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:42:05.020206 10 
proxier.go:794] \"SyncProxyRules complete\" elapsed=\"36.728929ms\"\nI0623 20:42:09.954868 10 service.go:304] \"Service updated ports\" service=\"dns-4608/dns-test-service-3\" portCount=0\nI0623 20:42:09.954917 10 service.go:444] \"Removing service port\" portName=\"dns-4608/dns-test-service-3:http\"\nI0623 20:42:09.954984 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:42:09.988704 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"33.781031ms\"\nI0623 20:42:15.111775 10 service.go:304] \"Service updated ports\" service=\"webhook-8412/e2e-test-webhook\" portCount=1\nI0623 20:42:15.112104 10 service.go:419] \"Adding new service port\" portName=\"webhook-8412/e2e-test-webhook\" servicePort=\"100.64.166.205:8443/TCP\"\nI0623 20:42:15.112360 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:42:15.167598 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"55.527384ms\"\nI0623 20:42:15.167740 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:42:15.214684 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"46.905069ms\"\nI0623 20:42:19.578359 10 service.go:304] \"Service updated ports\" service=\"webhook-8412/e2e-test-webhook\" portCount=0\nI0623 20:42:19.578389 10 service.go:444] \"Removing service port\" portName=\"webhook-8412/e2e-test-webhook\"\nI0623 20:42:19.578457 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:42:19.634943 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"56.542774ms\"\nI0623 20:42:19.635080 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:42:19.706234 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"71.242216ms\"\nI0623 20:43:02.101173 10 service.go:304] \"Service updated ports\" service=\"services-7998/nodeport-reuse\" portCount=1\nI0623 20:43:02.101216 10 service.go:419] \"Adding new service port\" portName=\"services-7998/nodeport-reuse\" servicePort=\"100.70.26.102:80/TCP\"\nI0623 20:43:02.101310 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:43:02.131000 10 
proxier.go:1604] \"Opened local port\" port={Description:nodePort for services-7998/nodeport-reuse IP: IPFamily:4 Port:30284 Protocol:TCP}\nE0623 20:43:02.131070 10 proxier.go:1600] \"can't open port, skipping it\" err=\"listen tcp4 :30284: bind: address already in use\" port={Description:nodePort for services-7998/nodeport-reuse IP: IPFamily:4 Port:30284 Protocol:TCP}\nI0623 20:43:02.139710 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"38.495467ms\"\nI0623 20:43:02.139906 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:43:02.175645 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"35.892087ms\"\nI0623 20:43:02.204479 10 service.go:304] \"Service updated ports\" service=\"services-7998/nodeport-reuse\" portCount=0\nI0623 20:43:03.176261 10 service.go:444] \"Removing service port\" portName=\"services-7998/nodeport-reuse\"\nI0623 20:43:03.176553 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:43:03.214461 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"38.216037ms\"\nI0623 20:43:05.835516 10 service.go:304] \"Service updated ports\" service=\"services-7998/nodeport-reuse\" portCount=1\nI0623 20:43:05.835561 10 service.go:419] \"Adding new service port\" portName=\"services-7998/nodeport-reuse\" servicePort=\"100.69.187.52:80/TCP\"\nI0623 20:43:05.835658 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:43:05.876412 10 proxier.go:1604] \"Opened local port\" port={Description:nodePort for services-7998/nodeport-reuse IP: IPFamily:4 Port:30284 Protocol:TCP}\nE0623 20:43:05.876501 10 proxier.go:1600] \"can't open port, skipping it\" err=\"listen tcp4 :30284: bind: address already in use\" port={Description:nodePort for services-7998/nodeport-reuse IP: IPFamily:4 Port:30284 Protocol:TCP}\nI0623 20:43:05.882208 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"46.652926ms\"\nI0623 20:43:05.882382 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:43:05.929974 10 proxier.go:794] \"SyncProxyRules complete\" 
elapsed=\"47.712704ms\"\nI0623 20:43:05.948974 10 service.go:304] \"Service updated ports\" service=\"services-7998/nodeport-reuse\" portCount=0\nI0623 20:43:06.318817 10 service.go:304] \"Service updated ports\" service=\"webhook-2076/e2e-test-webhook\" portCount=1\nI0623 20:43:06.930533 10 service.go:444] \"Removing service port\" portName=\"services-7998/nodeport-reuse\"\nI0623 20:43:06.930600 10 service.go:419] \"Adding new service port\" portName=\"webhook-2076/e2e-test-webhook\" servicePort=\"100.65.14.108:8443/TCP\"\nI0623 20:43:06.930810 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:43:06.966820 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"36.298231ms\"\nI0623 20:43:08.182970 10 service.go:304] \"Service updated ports\" service=\"webhook-2076/e2e-test-webhook\" portCount=0\nI0623 20:43:08.183009 10 service.go:444] \"Removing service port\" portName=\"webhook-2076/e2e-test-webhook\"\nI0623 20:43:08.183207 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:43:08.223375 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"40.358754ms\"\nI0623 20:43:09.223615 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:43:09.286980 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"63.479966ms\"\nI0623 20:43:29.284895 10 service.go:304] \"Service updated ports\" service=\"kubectl-6650/agnhost-primary\" portCount=1\nI0623 20:43:29.284943 10 service.go:419] \"Adding new service port\" portName=\"kubectl-6650/agnhost-primary\" servicePort=\"100.66.11.72:6379/TCP\"\nI0623 20:43:29.285038 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:43:29.329168 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"44.229391ms\"\nI0623 20:43:29.329292 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:43:29.368491 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"39.284845ms\"\nI0623 20:43:31.608207 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:43:31.673326 10 proxier.go:794] \"SyncProxyRules complete\" 
elapsed=\"65.230916ms\"\nI0623 20:43:41.666794 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:43:41.713974 10 service.go:304] \"Service updated ports\" service=\"kubectl-6650/agnhost-primary\" portCount=0\nI0623 20:43:41.740502 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"73.791414ms\"\nI0623 20:43:41.740560 10 service.go:444] \"Removing service port\" portName=\"kubectl-6650/agnhost-primary\"\nI0623 20:43:41.740777 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:43:41.816182 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"75.640455ms\"\nI0623 20:43:51.807349 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:43:51.860579 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"53.350106ms\"\nI0623 20:43:51.861044 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:43:51.930221 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"69.26408ms\"\nI0623 20:43:52.931262 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:43:52.961430 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"30.261075ms\"\nI0623 20:43:54.206325 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:43:54.248427 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"42.205874ms\"\nI0623 20:43:55.249016 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:43:55.286840 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"37.967918ms\"\nI0623 20:43:56.377370 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:43:56.407281 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"30.009734ms\"\nI0623 20:43:57.407816 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:43:57.439914 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"32.239044ms\"\nI0623 20:44:01.572758 10 service.go:304] \"Service updated ports\" service=\"webhook-6283/e2e-test-webhook\" portCount=1\nI0623 20:44:01.572843 10 service.go:419] \"Adding new service port\" portName=\"webhook-6283/e2e-test-webhook\" 
servicePort=\"100.68.214.105:8443/TCP\"\nI0623 20:44:01.572913 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:44:01.674132 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"101.318716ms\"\nI0623 20:44:01.674291 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:44:01.774253 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"100.079056ms\"\nI0623 20:44:04.194706 10 service.go:304] \"Service updated ports\" service=\"webhook-6283/e2e-test-webhook\" portCount=0\nI0623 20:44:04.194746 10 service.go:444] \"Removing service port\" portName=\"webhook-6283/e2e-test-webhook\"\nI0623 20:44:04.194885 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:44:04.243541 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"48.787909ms\"\nI0623 20:44:04.243709 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:44:04.293851 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"50.268777ms\"\nI0623 20:44:05.966851 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:44:06.017605 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"50.826767ms\"\nI0623 20:44:07.017972 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:44:07.049619 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"31.751487ms\"\nI0623 20:44:10.873435 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:44:10.930414 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"57.244869ms\"\nI0623 20:44:10.930519 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:44:10.977055 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"46.603329ms\"\nI0623 20:44:13.519358 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:44:13.554117 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"34.872175ms\"\nI0623 20:44:13.787855 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:44:13.820007 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"32.227545ms\"\nI0623 20:44:14.820250 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:44:14.851776 10 
proxier.go:794] \"SyncProxyRules complete\" elapsed=\"31.631302ms\"\nI0623 20:44:15.037631 10 service.go:304] \"Service updated ports\" service=\"kubectl-1666/agnhost-primary\" portCount=1\nI0623 20:44:15.852535 10 service.go:419] \"Adding new service port\" portName=\"kubectl-1666/agnhost-primary\" servicePort=\"100.66.109.188:6379/TCP\"\nI0623 20:44:15.852655 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:44:15.895862 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"43.355855ms\"\nI0623 20:44:22.266527 10 service.go:304] \"Service updated ports\" service=\"kubectl-1666/agnhost-primary\" portCount=0\nI0623 20:44:22.266610 10 service.go:444] \"Removing service port\" portName=\"kubectl-1666/agnhost-primary\"\nI0623 20:44:22.266962 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:44:22.463347 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"196.727888ms\"\nI0623 20:44:22.463477 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:44:22.658742 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"195.340488ms\"\nI0623 20:44:29.180410 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:44:29.233839 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"53.532983ms\"\nI0623 20:44:29.280844 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:44:29.327571 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"46.838477ms\"\nI0623 20:44:30.330754 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:44:30.530224 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"199.591797ms\"\nI0623 20:44:36.740763 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:44:36.778365 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"37.721543ms\"\nI0623 20:44:36.778802 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:44:36.824951 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"46.544228ms\"\nI0623 20:44:37.825242 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:44:37.889859 10 proxier.go:794] 
\"SyncProxyRules complete\" elapsed=\"64.764159ms\"\nI0623 20:44:43.262283 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:44:43.317402 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"55.278825ms\"\nI0623 20:44:43.323931 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:44:43.372371 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"48.546372ms\"\nI0623 20:45:08.809258 10 service.go:304] \"Service updated ports\" service=\"endpointslice-9645/example-empty-selector\" portCount=1\nI0623 20:45:08.809295 10 service.go:419] \"Adding new service port\" portName=\"endpointslice-9645/example-empty-selector:example\" servicePort=\"100.68.222.150:80/TCP\"\nI0623 20:45:08.809365 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:45:08.854421 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"45.125578ms\"\nI0623 20:45:08.854578 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:45:08.886869 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"32.405568ms\"\nI0623 20:45:09.137429 10 service.go:304] \"Service updated ports\" service=\"endpointslice-9645/example-empty-selector\" portCount=0\nI0623 20:45:09.887073 10 service.go:444] \"Removing service port\" portName=\"endpointslice-9645/example-empty-selector:example\"\nI0623 20:45:09.887314 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:45:09.942882 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"55.832909ms\"\nI0623 20:45:18.829370 10 service.go:304] \"Service updated ports\" service=\"webhook-5436/e2e-test-webhook\" portCount=1\nI0623 20:45:18.829421 10 service.go:419] \"Adding new service port\" portName=\"webhook-5436/e2e-test-webhook\" servicePort=\"100.71.239.170:8443/TCP\"\nI0623 20:45:18.829523 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:45:18.879810 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"50.389574ms\"\nI0623 20:45:18.879956 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:45:18.934712 10 proxier.go:794] 
\"SyncProxyRules complete\" elapsed=\"54.860987ms\"\nI0623 20:45:21.716548 10 service.go:304] \"Service updated ports\" service=\"webhook-5436/e2e-test-webhook\" portCount=0\nI0623 20:45:21.716592 10 service.go:444] \"Removing service port\" portName=\"webhook-5436/e2e-test-webhook\"\nI0623 20:45:21.716695 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:45:21.764779 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"48.17932ms\"\nI0623 20:45:21.764929 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:45:21.815360 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"50.533671ms\"\nI0623 20:45:22.903847 10 service.go:304] \"Service updated ports\" service=\"endpointslice-9225/example-int-port\" portCount=1\nI0623 20:45:22.903900 10 service.go:419] \"Adding new service port\" portName=\"endpointslice-9225/example-int-port:example\" servicePort=\"100.71.162.227:80/TCP\"\nI0623 20:45:22.904006 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:45:22.938104 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"34.210166ms\"\nI0623 20:45:23.013765 10 service.go:304] \"Service updated ports\" service=\"endpointslice-9225/example-named-port\" portCount=1\nI0623 20:45:23.124308 10 service.go:304] \"Service updated ports\" service=\"endpointslice-9225/example-no-match\" portCount=1\nI0623 20:45:23.938637 10 service.go:419] \"Adding new service port\" portName=\"endpointslice-9225/example-named-port:http\" servicePort=\"100.65.238.44:80/TCP\"\nI0623 20:45:23.938674 10 service.go:419] \"Adding new service port\" portName=\"endpointslice-9225/example-no-match:example-no-match\" servicePort=\"100.70.155.91:80/TCP\"\nI0623 20:45:23.938972 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:45:23.978671 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"40.071481ms\"\nI0623 20:45:32.369652 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:45:32.414491 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"44.93577ms\"\nI0623 
20:45:33.778299 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:45:33.809627 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"31.398092ms\"\nI0623 20:45:33.809951 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:45:33.839944 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"30.085321ms\"\nI0623 20:45:49.711759 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:45:49.785418 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"73.768661ms\"\nI0623 20:45:52.827219 10 service.go:304] \"Service updated ports\" service=\"kubectl-7653/agnhost-replica\" portCount=1\nI0623 20:45:52.827268 10 service.go:419] \"Adding new service port\" portName=\"kubectl-7653/agnhost-replica\" servicePort=\"100.71.236.214:6379/TCP\"\nI0623 20:45:52.827398 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:45:52.997746 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"170.471451ms\"\nI0623 20:45:52.997879 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:45:53.143484 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"145.691911ms\"\nI0623 20:45:53.473450 10 service.go:304] \"Service updated ports\" service=\"kubectl-7653/agnhost-primary\" portCount=1\nI0623 20:45:54.122762 10 service.go:304] \"Service updated ports\" service=\"kubectl-7653/frontend\" portCount=1\nI0623 20:45:54.122820 10 service.go:419] \"Adding new service port\" portName=\"kubectl-7653/agnhost-primary\" servicePort=\"100.69.44.249:6379/TCP\"\nI0623 20:45:54.122835 10 service.go:419] \"Adding new service port\" portName=\"kubectl-7653/frontend\" servicePort=\"100.67.26.185:80/TCP\"\nI0623 20:45:54.122942 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:45:54.279216 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"156.395564ms\"\nI0623 20:45:55.202899 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:45:55.295134 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"92.340555ms\"\nI0623 20:45:56.298239 10 proxier.go:827] \"Syncing iptables 
rules\"\nI0623 20:45:56.446751 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"148.656256ms\"\nI0623 20:45:58.425762 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:45:58.462859 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"37.180357ms\"\nI0623 20:45:59.331074 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:45:59.377155 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"46.182597ms\"\nI0623 20:46:02.074988 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:46:02.109193 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"34.314707ms\"\nI0623 20:46:02.563313 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:46:02.635834 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"72.608948ms\"\nI0623 20:46:04.364581 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:46:04.397582 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"33.107794ms\"\nI0623 20:46:05.161018 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:46:05.328747 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"167.837552ms\"\nI0623 20:46:06.988906 10 service.go:304] \"Service updated ports\" service=\"kubectl-7653/agnhost-replica\" portCount=0\nI0623 20:46:06.988935 10 service.go:444] \"Removing service port\" portName=\"kubectl-7653/agnhost-replica\"\nI0623 20:46:06.989000 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:46:07.021785 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"32.841967ms\"\nI0623 20:46:07.021879 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:46:07.052672 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"30.856206ms\"\nI0623 20:46:07.535815 10 service.go:304] \"Service updated ports\" service=\"kubectl-7653/agnhost-primary\" portCount=0\nI0623 20:46:08.053576 10 service.go:444] \"Removing service port\" portName=\"kubectl-7653/agnhost-primary\"\nI0623 20:46:08.053741 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:46:08.055534 10 service.go:304] \"Service updated 
ports\" service=\"kubectl-7653/frontend\" portCount=0\nI0623 20:46:08.097773 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"44.229508ms\"\nI0623 20:46:09.098776 10 service.go:444] \"Removing service port\" portName=\"kubectl-7653/frontend\"\nI0623 20:46:09.098914 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:46:09.132207 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"33.446129ms\"\nI0623 20:46:09.937505 10 service.go:304] \"Service updated ports\" service=\"endpointslice-9225/example-int-port\" portCount=0\nI0623 20:46:09.949045 10 service.go:304] \"Service updated ports\" service=\"endpointslice-9225/example-named-port\" portCount=0\nI0623 20:46:09.965345 10 service.go:304] \"Service updated ports\" service=\"endpointslice-9225/example-no-match\" portCount=0\nI0623 20:46:10.132366 10 service.go:444] \"Removing service port\" portName=\"endpointslice-9225/example-int-port:example\"\nI0623 20:46:10.132395 10 service.go:444] \"Removing service port\" portName=\"endpointslice-9225/example-named-port:http\"\nI0623 20:46:10.132407 10 service.go:444] \"Removing service port\" portName=\"endpointslice-9225/example-no-match:example-no-match\"\nI0623 20:46:10.132496 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:46:10.171149 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"38.791755ms\"\nI0623 20:46:42.337814 10 service.go:304] \"Service updated ports\" service=\"deployment-7705/test-rolling-update-with-lb\" portCount=0\nI0623 20:46:42.337856 10 service.go:444] \"Removing service port\" portName=\"deployment-7705/test-rolling-update-with-lb\"\nI0623 20:46:42.337967 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:46:42.368551 10 service_health.go:107] \"Closing healthcheck\" service=\"deployment-7705/test-rolling-update-with-lb\" port=32659\nE0623 20:46:42.368725 10 service_health.go:187] \"Healthcheck closed\" err=\"accept tcp [::]:32659: use of closed network connection\" 
service=\"deployment-7705/test-rolling-update-with-lb\"\nI0623 20:46:42.368752 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"30.900109ms\"\nI0623 20:46:53.825147 10 service.go:304] \"Service updated ports\" service=\"services-5554/service-proxy-toggled\" portCount=1\nI0623 20:46:53.825217 10 service.go:419] \"Adding new service port\" portName=\"services-5554/service-proxy-toggled\" servicePort=\"100.66.99.114:80/TCP\"\nI0623 20:46:53.825314 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:46:53.874201 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"48.978758ms\"\nI0623 20:46:53.874335 10 proxier.go:827] \"Syncing iptables rules\"\nE0623 20:46:53.938921 10 utils.go:166] \"Failed to get local addresses assuming no local IPs\" err=\"route ip+net: no such network interface\"\nI0623 20:46:53.946288 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"72.039792ms\"\nI0623 20:46:55.786724 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:46:55.818519 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"31.885931ms\"\nI0623 20:46:56.284024 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:46:56.327264 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"43.350196ms\"\nI0623 20:46:58.961638 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:46:59.023588 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"62.059722ms\"\nI0623 20:47:17.989112 10 service.go:304] \"Service updated ports\" service=\"services-1379/sourceip-test\" portCount=1\nI0623 20:47:17.989162 10 service.go:419] \"Adding new service port\" portName=\"services-1379/sourceip-test\" servicePort=\"100.65.24.199:8080/TCP\"\nI0623 20:47:17.989258 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:47:18.021412 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"32.253425ms\"\nI0623 20:47:18.021638 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:47:18.053500 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"32.052067ms\"\nI0623 
20:47:22.097932 10 service.go:304] \"Service updated ports\" service=\"services-5554/service-proxy-toggled\" portCount=0\nI0623 20:47:22.097970 10 service.go:444] \"Removing service port\" portName=\"services-5554/service-proxy-toggled\"\nI0623 20:47:22.098069 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:47:22.178316 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"80.330892ms\"\nI0623 20:47:22.178454 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:47:22.232110 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"53.751832ms\"\nI0623 20:47:23.232580 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:47:23.265632 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"33.141836ms\"\nI0623 20:47:24.129913 10 service.go:304] \"Service updated ports\" service=\"dns-9470/test-service-2\" portCount=1\nI0623 20:47:24.129964 10 service.go:419] \"Adding new service port\" portName=\"dns-9470/test-service-2:http\" servicePort=\"100.68.154.148:80/TCP\"\nI0623 20:47:24.130064 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:47:24.162943 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"32.985415ms\"\nI0623 20:47:25.163760 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:47:25.285598 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"121.916266ms\"\nI0623 20:47:27.111201 10 service.go:304] \"Service updated ports\" service=\"services-7270/e2e-svc-a-7lqxb\" portCount=1\nI0623 20:47:27.111436 10 service.go:419] \"Adding new service port\" portName=\"services-7270/e2e-svc-a-7lqxb:http\" servicePort=\"100.65.92.21:8001/TCP\"\nI0623 20:47:27.111588 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:47:27.141339 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"29.917206ms\"\nI0623 20:47:27.225110 10 service.go:304] \"Service updated ports\" service=\"services-7270/e2e-svc-b-mdgln\" portCount=1\nI0623 20:47:27.225161 10 service.go:419] \"Adding new service port\" 
portName=\"services-7270/e2e-svc-b-mdgln:http\" servicePort=\"100.68.217.207:8002/TCP\"\nI0623 20:47:27.225239 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:47:27.261582 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"36.422442ms\"\nI0623 20:47:27.340008 10 service.go:304] \"Service updated ports\" service=\"services-7270/e2e-svc-c-q426s\" portCount=1\nI0623 20:47:27.556301 10 service.go:304] \"Service updated ports\" service=\"services-7270/e2e-svc-a-7lqxb\" portCount=0\nI0623 20:47:27.569420 10 service.go:304] \"Service updated ports\" service=\"services-7270/e2e-svc-b-mdgln\" portCount=0\nI0623 20:47:28.150426 10 service.go:419] \"Adding new service port\" portName=\"services-7270/e2e-svc-c-q426s:http\" servicePort=\"100.71.168.217:8003/TCP\"\nI0623 20:47:28.150453 10 service.go:444] \"Removing service port\" portName=\"services-7270/e2e-svc-a-7lqxb:http\"\nI0623 20:47:28.150463 10 service.go:444] \"Removing service port\" portName=\"services-7270/e2e-svc-b-mdgln:http\"\nI0623 20:47:28.150594 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:47:28.183421 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"33.015898ms\"\nI0623 20:47:30.254837 10 service.go:304] \"Service updated ports\" service=\"services-5554/service-proxy-toggled\" portCount=1\nI0623 20:47:30.254886 10 service.go:419] \"Adding new service port\" portName=\"services-5554/service-proxy-toggled\" servicePort=\"100.66.99.114:80/TCP\"\nI0623 20:47:30.255000 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:47:30.387561 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"132.669624ms\"\nI0623 20:47:30.387720 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:47:30.565747 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"178.13738ms\"\nI0623 20:47:33.012667 10 service.go:304] \"Service updated ports\" service=\"services-7270/e2e-svc-c-q426s\" portCount=0\nI0623 20:47:33.012696 10 service.go:444] \"Removing service port\" 
portName=\"services-7270/e2e-svc-c-q426s:http\"\nI0623 20:47:33.012760 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:47:33.053451 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"40.746395ms\"\nI0623 20:47:36.026831 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:47:36.133429 10 service.go:304] \"Service updated ports\" service=\"services-1379/sourceip-test\" portCount=0\nI0623 20:47:36.233467 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"206.79722ms\"\nI0623 20:47:36.233514 10 service.go:444] \"Removing service port\" portName=\"services-1379/sourceip-test\"\nI0623 20:47:36.233639 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:47:36.431624 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"198.097924ms\"\nI0623 20:47:56.087207 10 service.go:304] \"Service updated ports\" service=\"services-3166/externalname-service\" portCount=1\nI0623 20:47:56.087257 10 service.go:419] \"Adding new service port\" portName=\"services-3166/externalname-service:http\" servicePort=\"100.67.45.245:80/TCP\"\nI0623 20:47:56.087354 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:47:56.131311 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"44.054468ms\"\nI0623 20:47:56.131437 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:47:56.174225 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"42.870781ms\"\nI0623 20:47:58.262409 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:47:58.318987 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"56.634344ms\"\nI0623 20:48:00.181957 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:48:00.242805 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"60.954397ms\"\nI0623 20:48:00.553417 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:48:00.593119 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"39.814112ms\"\nI0623 20:48:00.617902 10 service.go:304] \"Service updated ports\" service=\"services-5554/service-proxy-toggled\" 
portCount=0\nI0623 20:48:01.594100 10 service.go:444] \"Removing service port\" portName=\"services-5554/service-proxy-toggled\"\nI0623 20:48:01.594277 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:48:01.649027 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"54.951478ms\"\nI0623 20:48:02.398784 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:48:02.497493 10 service.go:304] \"Service updated ports\" service=\"dns-9470/test-service-2\" portCount=0\nI0623 20:48:02.526512 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"127.823943ms\"\nI0623 20:48:03.527513 10 service.go:444] \"Removing service port\" portName=\"dns-9470/test-service-2:http\"\nI0623 20:48:03.527833 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:48:03.566978 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"39.481414ms\"\nI0623 20:48:10.687185 10 service.go:304] \"Service updated ports\" service=\"services-3166/externalname-service\" portCount=0\nI0623 20:48:10.687250 10 service.go:444] \"Removing service port\" portName=\"services-3166/externalname-service:http\"\nI0623 20:48:10.687317 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:48:10.720630 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"33.375014ms\"\nI0623 20:48:10.720735 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:48:10.753161 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"32.498703ms\"\nI0623 20:48:14.173728 10 service.go:304] \"Service updated ports\" service=\"services-4421/hairpin-test\" portCount=1\nI0623 20:48:14.173775 10 service.go:419] \"Adding new service port\" portName=\"services-4421/hairpin-test\" servicePort=\"100.65.193.127:8080/TCP\"\nI0623 20:48:14.174137 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:48:14.212420 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"38.648684ms\"\nI0623 20:48:14.212502 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:48:14.247554 10 proxier.go:794] \"SyncProxyRules complete\" 
elapsed=\"35.100962ms\"\nI0623 20:48:15.040271 10 service.go:304] \"Service updated ports\" service=\"crd-webhook-1988/e2e-test-crd-conversion-webhook\" portCount=1\nI0623 20:48:15.247773 10 service.go:419] \"Adding new service port\" portName=\"crd-webhook-1988/e2e-test-crd-conversion-webhook\" servicePort=\"100.69.123.237:9443/TCP\"\nI0623 20:48:15.247945 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:48:15.285507 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"37.778315ms\"\nI0623 20:48:20.046880 10 service.go:304] \"Service updated ports\" service=\"crd-webhook-1988/e2e-test-crd-conversion-webhook\" portCount=0\nI0623 20:48:20.046926 10 service.go:444] \"Removing service port\" portName=\"crd-webhook-1988/e2e-test-crd-conversion-webhook\"\nI0623 20:48:20.047041 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:48:20.098357 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"51.420041ms\"\nI0623 20:48:20.116350 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:48:20.169386 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"53.101031ms\"\nI0623 20:48:21.170430 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:48:21.204940 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"34.59865ms\"\nI0623 20:48:32.231600 10 service.go:304] \"Service updated ports\" service=\"services-4421/hairpin-test\" portCount=0\nI0623 20:48:32.231643 10 service.go:444] \"Removing service port\" portName=\"services-4421/hairpin-test\"\nI0623 20:48:32.231743 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:48:32.447277 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"215.622775ms\"\nI0623 20:48:32.447413 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:48:32.611237 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"163.915368ms\"\nI0623 20:49:16.230296 10 service.go:304] \"Service updated ports\" service=\"services-3970/affinity-clusterip-transition\" portCount=1\nI0623 20:49:16.230342 10 service.go:419] 
\"Adding new service port\" portName=\"services-3970/affinity-clusterip-transition\" servicePort=\"100.71.253.32:80/TCP\"\nI0623 20:49:16.230441 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:49:16.267964 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"37.623919ms\"\nI0623 20:49:16.268237 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:49:16.298657 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"30.651833ms\"\nI0623 20:49:21.179363 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:49:21.223453 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"44.173877ms\"\nI0623 20:49:23.861351 10 service.go:304] \"Service updated ports\" service=\"webhook-1103/e2e-test-webhook\" portCount=1\nI0623 20:49:23.861728 10 service.go:419] \"Adding new service port\" portName=\"webhook-1103/e2e-test-webhook\" servicePort=\"100.64.122.216:8443/TCP\"\nI0623 20:49:23.861835 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:49:23.936602 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"75.201568ms\"\nI0623 20:49:23.937104 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:49:24.020466 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"83.822026ms\"\nI0623 20:49:25.953673 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:49:26.023143 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"69.580004ms\"\nI0623 20:49:27.558583 10 service.go:304] \"Service updated ports\" service=\"webhook-1103/e2e-test-webhook\" portCount=0\nI0623 20:49:27.558613 10 service.go:444] \"Removing service port\" portName=\"webhook-1103/e2e-test-webhook\"\nI0623 20:49:27.558679 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:49:27.590852 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"32.232824ms\"\nI0623 20:49:27.590938 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:49:27.653938 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"63.047368ms\"\nI0623 20:49:37.940088 10 proxier.go:827] \"Syncing iptables 
rules\"\nI0623 20:49:37.996480 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"56.550466ms\"\nI0623 20:49:54.498159 10 service.go:304] \"Service updated ports\" service=\"services-3970/affinity-clusterip-transition\" portCount=1\nI0623 20:49:54.498199 10 service.go:421] \"Updating existing service port\" portName=\"services-3970/affinity-clusterip-transition\" servicePort=\"100.71.253.32:80/TCP\"\nI0623 20:49:54.498284 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:49:54.556010 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"57.812846ms\"\nI0623 20:49:54.601268 10 service.go:304] \"Service updated ports\" service=\"webhook-4362/e2e-test-webhook\" portCount=1\nI0623 20:49:54.601401 10 service.go:419] \"Adding new service port\" portName=\"webhook-4362/e2e-test-webhook\" servicePort=\"100.65.181.95:8443/TCP\"\nI0623 20:49:54.601502 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:49:54.655388 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"54.036714ms\"\nI0623 20:49:55.658774 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:49:55.757143 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"98.485885ms\"\nI0623 20:49:56.471007 10 service.go:304] \"Service updated ports\" service=\"services-3970/affinity-clusterip-transition\" portCount=1\nI0623 20:49:56.761508 10 service.go:421] \"Updating existing service port\" portName=\"services-3970/affinity-clusterip-transition\" servicePort=\"100.71.253.32:80/TCP\"\nI0623 20:49:56.761632 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:49:56.873597 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"112.128479ms\"\nI0623 20:49:57.085893 10 service.go:304] \"Service updated ports\" service=\"services-8531/test-service-xxk9r\" portCount=1\nI0623 20:49:57.404399 10 service.go:304] \"Service updated ports\" service=\"services-8531/test-service-xxk9r\" portCount=1\nI0623 20:49:57.731859 10 service.go:304] \"Service updated ports\" 
service=\"services-8531/test-service-xxk9r\" portCount=1\nI0623 20:49:57.732179 10 service.go:419] \"Adding new service port\" portName=\"services-8531/test-service-xxk9r:http\" servicePort=\"100.67.61.137:80/TCP\"\nI0623 20:49:57.732280 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:49:57.795771 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"63.608401ms\"\nI0623 20:49:57.952950 10 service.go:304] \"Service updated ports\" service=\"services-8531/test-service-xxk9r\" portCount=1\nI0623 20:49:58.018752 10 service.go:304] \"Service updated ports\" service=\"webhook-4362/e2e-test-webhook\" portCount=0\nI0623 20:49:58.163522 10 service.go:304] \"Service updated ports\" service=\"services-8531/test-service-xxk9r\" portCount=0\nI0623 20:49:58.795930 10 service.go:444] \"Removing service port\" portName=\"services-8531/test-service-xxk9r:http\"\nI0623 20:49:58.795974 10 service.go:444] \"Removing service port\" portName=\"webhook-4362/e2e-test-webhook\"\nI0623 20:49:58.796068 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:49:58.834448 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"38.524306ms\"\nI0623 20:50:00.953885 10 service.go:304] \"Service updated ports\" service=\"webhook-4966/e2e-test-webhook\" portCount=1\nI0623 20:50:00.954637 10 service.go:419] \"Adding new service port\" portName=\"webhook-4966/e2e-test-webhook\" servicePort=\"100.69.197.22:8443/TCP\"\nI0623 20:50:00.954755 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:50:01.002269 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"47.600022ms\"\nI0623 20:50:01.002470 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:50:01.056434 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"54.116766ms\"\nI0623 20:50:14.078745 10 service.go:304] \"Service updated ports\" service=\"webhook-4966/e2e-test-webhook\" portCount=0\nI0623 20:50:14.078782 10 service.go:444] \"Removing service port\" portName=\"webhook-4966/e2e-test-webhook\"\nI0623 20:50:14.078883 
10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:50:14.133494 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"54.702299ms\"\nI0623 20:50:14.133631 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:50:14.203862 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"70.323257ms\"\nI0623 20:50:26.694328 10 service.go:304] \"Service updated ports\" service=\"services-2567/nodeport-test\" portCount=1\nI0623 20:50:26.694375 10 service.go:419] \"Adding new service port\" portName=\"services-2567/nodeport-test:http\" servicePort=\"100.70.90.16:80/TCP\"\nI0623 20:50:26.694478 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:50:26.727297 10 proxier.go:1604] \"Opened local port\" port={Description:nodePort for services-2567/nodeport-test:http IP: IPFamily:4 Port:31661 Protocol:TCP}\nE0623 20:50:26.727359 10 proxier.go:1600] \"can't open port, skipping it\" err=\"listen tcp4 :31661: bind: address already in use\" port={Description:nodePort for services-2567/nodeport-test:http IP: IPFamily:4 Port:31661 Protocol:TCP}\nI0623 20:50:26.732516 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"38.147643ms\"\nI0623 20:50:26.732597 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:50:26.764350 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"31.803362ms\"\nI0623 20:50:27.133173 10 service.go:304] \"Service updated ports\" service=\"services-8188/nodeport-service\" portCount=1\nI0623 20:50:27.245783 10 service.go:304] \"Service updated ports\" service=\"services-8188/externalsvc\" portCount=1\nI0623 20:50:27.764731 10 service.go:419] \"Adding new service port\" portName=\"services-8188/externalsvc\" servicePort=\"100.70.186.196:80/TCP\"\nI0623 20:50:27.764763 10 service.go:419] \"Adding new service port\" portName=\"services-8188/nodeport-service\" servicePort=\"100.71.105.87:80/TCP\"\nI0623 20:50:27.764835 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:50:27.799739 10 proxier.go:1604] \"Opened local port\" 
port={Description:nodePort for services-8188/nodeport-service IP: IPFamily:4 Port:31297 Protocol:TCP}\nE0623 20:50:27.800134 10 proxier.go:1600] \"can't open port, skipping it\" err=\"listen tcp4 :31297: bind: address already in use\" port={Description:nodePort for services-8188/nodeport-service IP: IPFamily:4 Port:31297 Protocol:TCP}\nI0623 20:50:27.805867 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"41.164987ms\"\nI0623 20:50:29.428083 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:50:29.470625 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"42.651481ms\"\nI0623 20:50:30.439666 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:50:30.699404 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"259.835504ms\"\nI0623 20:50:30.699570 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:50:30.871062 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"171.612032ms\"\nI0623 20:50:33.936933 10 service.go:304] \"Service updated ports\" service=\"services-8188/nodeport-service\" portCount=0\nI0623 20:50:33.936965 10 service.go:444] \"Removing service port\" portName=\"services-8188/nodeport-service\"\nI0623 20:50:33.937043 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:50:33.972453 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"35.479167ms\"\nI0623 20:50:33.999169 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:50:34.031651 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"32.541462ms\"\nI0623 20:50:36.372163 10 service.go:304] \"Service updated ports\" service=\"webhook-9460/e2e-test-webhook\" portCount=1\nI0623 20:50:36.372194 10 service.go:419] \"Adding new service port\" portName=\"webhook-9460/e2e-test-webhook\" servicePort=\"100.64.104.148:8443/TCP\"\nI0623 20:50:36.372839 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:50:36.424537 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"52.337609ms\"\nI0623 20:50:36.424640 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 
20:50:36.469542 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"44.964096ms\"\nI0623 20:50:39.806965 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:50:39.935917 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"129.051435ms\"\nI0623 20:50:41.507044 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:50:41.591047 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"84.124714ms\"\nI0623 20:50:41.591143 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:50:41.631984 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"40.905052ms\"\nI0623 20:50:41.653953 10 service.go:304] \"Service updated ports\" service=\"webhook-9460/e2e-test-webhook\" portCount=0\nI0623 20:50:41.906454 10 service.go:304] \"Service updated ports\" service=\"services-8188/externalsvc\" portCount=0\nI0623 20:50:42.633103 10 service.go:444] \"Removing service port\" portName=\"services-8188/externalsvc\"\nI0623 20:50:42.633150 10 service.go:444] \"Removing service port\" portName=\"webhook-9460/e2e-test-webhook\"\nI0623 20:50:42.633458 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:50:42.668942 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"35.87362ms\"\nI0623 20:50:48.449703 10 service.go:304] \"Service updated ports\" service=\"services-2567/nodeport-test\" portCount=0\nI0623 20:50:48.449741 10 service.go:444] \"Removing service port\" portName=\"services-2567/nodeport-test:http\"\nI0623 20:50:48.449846 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:50:48.718969 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"269.216824ms\"\nI0623 20:50:48.719124 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:50:48.920635 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"201.622006ms\"\nI0623 20:51:27.413907 10 service.go:304] \"Service updated ports\" service=\"aggregator-3684/sample-api\" portCount=1\nI0623 20:51:27.413950 10 service.go:419] \"Adding new service port\" portName=\"aggregator-3684/sample-api\" 
servicePort=\"100.64.60.129:7443/TCP\"\nI0623 20:51:27.414080 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:51:27.444430 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"30.482173ms\"\nI0623 20:51:27.444523 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:51:27.474465 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"29.997707ms\"\nI0623 20:51:41.512210 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:51:41.551097 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"39.040616ms\"\nI0623 20:51:43.341422 10 service.go:304] \"Service updated ports\" service=\"dns-1013/test-service-2\" portCount=1\nI0623 20:51:43.341463 10 service.go:419] \"Adding new service port\" portName=\"dns-1013/test-service-2:http\" servicePort=\"100.69.186.173:80/TCP\"\nI0623 20:51:43.341535 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:51:43.404973 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"63.509509ms\"\nI0623 20:51:43.405204 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:51:43.461447 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"56.435208ms\"\nI0623 20:51:44.525665 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:51:44.595692 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"70.189514ms\"\nI0623 20:51:44.718100 10 service.go:304] \"Service updated ports\" service=\"aggregator-3684/sample-api\" portCount=0\nI0623 20:51:45.596763 10 service.go:444] \"Removing service port\" portName=\"aggregator-3684/sample-api\"\nI0623 20:51:45.596925 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:51:45.640705 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"43.954733ms\"\nI0623 20:51:46.641890 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:51:46.693639 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"51.880346ms\"\nI0623 20:51:47.694591 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:51:47.766692 10 proxier.go:794] \"SyncProxyRules complete\" 
elapsed=\"72.257382ms\"\nI0623 20:51:49.031326 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:51:49.064196 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"32.974781ms\"\nI0623 20:51:50.173430 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:51:50.206587 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"33.270393ms\"\nI0623 20:51:56.491438 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:51:56.530356 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"39.033564ms\"\nI0623 20:51:59.136910 10 service.go:304] \"Service updated ports\" service=\"services-2298/service-headless-toggled\" portCount=1\nI0623 20:51:59.136963 10 service.go:419] \"Adding new service port\" portName=\"services-2298/service-headless-toggled\" servicePort=\"100.68.99.227:80/TCP\"\nI0623 20:51:59.137069 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:51:59.291408 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"154.445611ms\"\nI0623 20:51:59.291809 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:51:59.335968 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"44.514728ms\"\nI0623 20:52:01.458744 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:52:01.666911 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"208.283591ms\"\nI0623 20:52:02.190298 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:52:02.295657 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"105.466208ms\"\nI0623 20:52:02.459434 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:52:02.510726 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"51.406286ms\"\nI0623 20:52:03.511654 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:52:03.553966 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"42.48258ms\"\nI0623 20:52:04.554711 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:52:04.588339 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"33.729282ms\"\nI0623 20:52:05.688637 10 proxier.go:827] \"Syncing 
iptables rules\"\nI0623 20:52:05.722757 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"34.216722ms\"\nI0623 20:52:06.231782 10 service.go:304] \"Service updated ports\" service=\"services-3970/affinity-clusterip-transition\" portCount=0\nI0623 20:52:06.723579 10 service.go:444] \"Removing service port\" portName=\"services-3970/affinity-clusterip-transition\"\nI0623 20:52:06.723693 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:52:06.756037 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"32.479994ms\"\nI0623 20:52:07.756677 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:52:07.788128 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"31.554973ms\"\nI0623 20:52:12.351519 10 service.go:304] \"Service updated ports\" service=\"services-690/endpoint-test2\" portCount=1\nI0623 20:52:12.351569 10 service.go:419] \"Adding new service port\" portName=\"services-690/endpoint-test2\" servicePort=\"100.71.229.33:80/TCP\"\nI0623 20:52:12.351749 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:52:12.450328 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"98.756484ms\"\nI0623 20:52:12.450467 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:52:12.491196 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"40.824791ms\"\nI0623 20:52:15.103508 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:52:15.204748 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"101.34614ms\"\nI0623 20:52:18.108869 10 service.go:304] \"Service updated ports\" service=\"sctp-8452/sctp-endpoint-test\" portCount=1\nI0623 20:52:18.108905 10 service.go:419] \"Adding new service port\" portName=\"sctp-8452/sctp-endpoint-test\" servicePort=\"100.70.62.41:5060/SCTP\"\nI0623 20:52:18.108972 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:52:18.158799 10 proxier.go:794] \"SyncProxyRules complete\" elapsed=\"49.890204ms\"\nI0623 20:52:18.158925 10 proxier.go:827] \"Syncing iptables rules\"\nI0623 20:52:18.192814 10 
proxier.go:794] "SyncProxyRules complete" elapsed="33.977574ms"
I0623 20:52:21.597014      10 service.go:304] "Service updated ports" service="services-477/clusterip-service" portCount=1
I0623 20:52:21.597061      10 service.go:419] "Adding new service port" portName="services-477/clusterip-service" servicePort="100.71.170.12:80/TCP"
I0623 20:52:21.597363      10 proxier.go:827] "Syncing iptables rules"
I0623 20:52:21.648583      10 proxier.go:794] "SyncProxyRules complete" elapsed="51.523975ms"
I0623 20:52:21.648719      10 proxier.go:827] "Syncing iptables rules"
I0623 20:52:21.681784      10 proxier.go:794] "SyncProxyRules complete" elapsed="33.162831ms"
I0623 20:52:21.709011      10 service.go:304] "Service updated ports" service="services-477/externalsvc" portCount=1
I0623 20:52:22.682890      10 service.go:419] "Adding new service port" portName="services-477/externalsvc" servicePort="100.66.82.124:80/TCP"
I0623 20:52:22.683053      10 proxier.go:827] "Syncing iptables rules"
I0623 20:52:22.755833      10 proxier.go:794] "SyncProxyRules complete" elapsed="73.197692ms"
I0623 20:52:23.220131      10 service.go:304] "Service updated ports" service="dns-1013/test-service-2" portCount=0
I0623 20:52:23.756854      10 service.go:444] "Removing service port" portName="dns-1013/test-service-2:http"
I0623 20:52:23.757004      10 proxier.go:827] "Syncing iptables rules"
I0623 20:52:23.795510      10 proxier.go:794] "SyncProxyRules complete" elapsed="38.652452ms"
I0623 20:52:24.795750      10 proxier.go:827] "Syncing iptables rules"
I0623 20:52:24.828013      10 proxier.go:794] "SyncProxyRules complete" elapsed="32.357685ms"
I0623 20:52:25.345163      10 service.go:304] "Service updated ports" service="services-2298/service-headless-toggled" portCount=0
I0623 20:52:25.828842      10 service.go:444] "Removing service port" portName="services-2298/service-headless-toggled"
I0623 20:52:25.829002      10 proxier.go:827] "Syncing iptables rules"
I0623 20:52:25.869678      10 proxier.go:794] "SyncProxyRules complete" elapsed="40.847798ms"
I0623 20:52:26.870534      10 proxier.go:827] "Syncing iptables rules"
I0623 20:52:26.902161      10 proxier.go:794] "SyncProxyRules complete" elapsed="31.729155ms"
I0623 20:52:28.293698      10 service.go:304] "Service updated ports" service="services-477/clusterip-service" portCount=0
I0623 20:52:28.293738      10 service.go:444] "Removing service port" portName="services-477/clusterip-service"
I0623 20:52:28.294005      10 proxier.go:827] "Syncing iptables rules"
I0623 20:52:28.331056      10 proxier.go:794] "SyncProxyRules complete" elapsed="37.312704ms"
I0623 20:52:29.331721      10 proxier.go:827] "Syncing iptables rules"
I0623 20:52:29.365265      10 proxier.go:794] "SyncProxyRules complete" elapsed="33.627246ms"
I0623 20:52:30.483301      10 proxier.go:827] "Syncing iptables rules"
I0623 20:52:30.558512      10 proxier.go:794] "SyncProxyRules complete" elapsed="75.283518ms"
I0623 20:52:31.130941      10 service.go:304] "Service updated ports" service="services-2298/service-headless-toggled" portCount=1
I0623 20:52:31.130995      10 service.go:419] "Adding new service port" portName="services-2298/service-headless-toggled" servicePort="100.68.99.227:80/TCP"
I0623 20:52:31.131489      10 proxier.go:827] "Syncing iptables rules"
I0623 20:52:31.170414      10 proxier.go:794] "SyncProxyRules complete" elapsed="39.421316ms"
I0623 20:52:35.541984      10 proxier.go:827] "Syncing iptables rules"
I0623 20:52:35.575420      10 proxier.go:794] "SyncProxyRules complete" elapsed="33.518951ms"
I0623 20:52:35.575750      10 proxier.go:827] "Syncing iptables rules"
I0623 20:52:35.606483      10 proxier.go:794] "SyncProxyRules complete" elapsed="31.024516ms"
I0623 20:52:36.606707      10 proxier.go:827] "Syncing iptables rules"
I0623 20:52:36.643224      10 proxier.go:794] "SyncProxyRules complete" elapsed="36.611919ms"
I0623 20:52:37.645445      10 service.go:304] "Service updated ports" service="sctp-8452/sctp-endpoint-test" portCount=0
I0623 20:52:37.645514      10 service.go:444] "Removing service port" portName="sctp-8452/sctp-endpoint-test"
I0623 20:52:37.645691      10 proxier.go:827] "Syncing iptables rules"
I0623 20:52:37.699078      10 proxier.go:794] "SyncProxyRules complete" elapsed="53.579474ms"
I0623 20:52:38.699760      10 proxier.go:827] "Syncing iptables rules"
I0623 20:52:38.730462      10 proxier.go:794] "SyncProxyRules complete" elapsed="30.80503ms"
I0623 20:52:39.686328      10 proxier.go:827] "Syncing iptables rules"
I0623 20:52:39.743368      10 proxier.go:794] "SyncProxyRules complete" elapsed="57.12667ms"
I0623 20:52:39.857820      10 service.go:304] "Service updated ports" service="services-690/endpoint-test2" portCount=0
I0623 20:52:40.022505      10 service.go:304] "Service updated ports" service="services-477/externalsvc" portCount=0
I0623 20:52:40.743496      10 service.go:444] "Removing service port" portName="services-690/endpoint-test2"
I0623 20:52:40.743525      10 service.go:444] "Removing service port" portName="services-477/externalsvc"
I0623 20:52:40.743654      10 proxier.go:827] "Syncing iptables rules"
I0623 20:52:40.806352      10 proxier.go:794] "SyncProxyRules complete" elapsed="62.87643ms"
I0623 20:52:51.054610      10 proxier.go:827] "Syncing iptables rules"
I0623 20:52:51.099693      10 proxier.go:794] "SyncProxyRules complete" elapsed="45.274237ms"
I0623 20:52:51.100099      10 proxier.go:827] "Syncing iptables rules"
I0623 20:52:51.156047      10 proxier.go:794] "SyncProxyRules complete" elapsed="56.31768ms"
I0623 20:52:51.267160      10 service.go:304] "Service updated ports" service="services-2298/service-headless-toggled" portCount=0
I0623 20:52:52.156837      10 service.go:444] "Removing service port" portName="services-2298/service-headless-toggled"
I0623 20:52:52.156989      10 proxier.go:827] "Syncing iptables rules"
I0623 20:52:52.205832      10 proxier.go:794] "SyncProxyRules complete" elapsed="49.028833ms"
I0623 20:52:59.621640      10 service.go:304] "Service updated ports" service="webhook-9845/e2e-test-webhook" portCount=1
I0623 20:52:59.621684      10 service.go:419] "Adding new service port" portName="webhook-9845/e2e-test-webhook" servicePort="100.70.93.105:8443/TCP"
I0623 20:52:59.621784      10 proxier.go:827] "Syncing iptables rules"
I0623 20:52:59.693919      10 proxier.go:794] "SyncProxyRules complete" elapsed="72.233921ms"
I0623 20:52:59.694024      10 proxier.go:827] "Syncing iptables rules"
I0623 20:52:59.725254      10 proxier.go:794] "SyncProxyRules complete" elapsed="31.277164ms"
I0623 20:53:00.551266      10 service.go:304] "Service updated ports" service="services-5186/affinity-nodeport" portCount=1
I0623 20:53:00.726003      10 service.go:419] "Adding new service port" portName="services-5186/affinity-nodeport" servicePort="100.70.188.65:80/TCP"
I0623 20:53:00.726097      10 proxier.go:827] "Syncing iptables rules"
I0623 20:53:00.796920      10 proxier.go:1604] "Opened local port" port={Description:nodePort for services-5186/affinity-nodeport IP: IPFamily:4 Port:30665 Protocol:TCP}
E0623 20:53:00.801673      10 proxier.go:1600] "can't open port, skipping it" err="listen tcp4 :30665: bind: address already in use" port={Description:nodePort for services-5186/affinity-nodeport IP: IPFamily:4 Port:30665 Protocol:TCP}
I0623 20:53:00.815840      10 proxier.go:794] "SyncProxyRules complete" elapsed="89.862694ms"
I0623 20:53:02.559626      10 proxier.go:827] "Syncing iptables rules"
I0623 20:53:02.601900      10 proxier.go:794] "SyncProxyRules complete" elapsed="42.385307ms"
I0623 20:53:05.958216      10 proxier.go:827] "Syncing iptables rules"
I0623 20:53:05.990460      10 proxier.go:794] "SyncProxyRules complete" elapsed="32.360008ms"
I0623 20:53:11.282179      10 service.go:304] "Service updated ports" service="webhook-1829/e2e-test-webhook" portCount=1
I0623 20:53:11.282245      10 service.go:419] "Adding new service port" portName="webhook-1829/e2e-test-webhook" servicePort="100.64.142.12:8443/TCP"
I0623 20:53:11.282395      10 proxier.go:827] "Syncing iptables rules"
I0623 20:53:11.338514      10 proxier.go:794] "SyncProxyRules complete" elapsed="56.253417ms"
I0623 20:53:11.338913      10 proxier.go:827] "Syncing iptables rules"
I0623 20:53:11.392746      10 proxier.go:794] "SyncProxyRules complete" elapsed="54.188832ms"
I0623 20:53:12.890961      10 proxier.go:827] "Syncing iptables rules"
I0623 20:53:12.942806      10 proxier.go:794] "SyncProxyRules complete" elapsed="51.97533ms"
I0623 20:53:13.269395      10 service.go:304] "Service updated ports" service="webhook-1829/e2e-test-webhook" portCount=0
I0623 20:53:13.374526      10 service.go:444] "Removing service port" portName="webhook-1829/e2e-test-webhook"
I0623 20:53:13.374866      10 proxier.go:827] "Syncing iptables rules"
I0623 20:53:13.455947      10 proxier.go:794] "SyncProxyRules complete" elapsed="81.424993ms"
I0623 20:53:15.278974      10 service.go:304] "Service updated ports" service="webhook-9845/e2e-test-webhook" portCount=0
I0623 20:53:15.279014      10 service.go:444] "Removing service port" portName="webhook-9845/e2e-test-webhook"
I0623 20:53:15.279219      10 proxier.go:827] "Syncing iptables rules"
I0623 20:53:15.335415      10 proxier.go:794] "SyncProxyRules complete" elapsed="56.389036ms"
I0623 20:53:15.335572      10 proxier.go:827] "Syncing iptables rules"
I0623 20:53:15.404206      10 proxier.go:794] "SyncProxyRules complete" elapsed="68.745931ms"
I0623 20:53:17.307257      10 service.go:304] "Service updated ports" service="services-1408/nodeport-range-test" portCount=1
I0623 20:53:17.307306      10 service.go:419] "Adding new service port" portName="services-1408/nodeport-range-test" servicePort="100.71.217.131:80/TCP"
I0623 20:53:17.307484      10 proxier.go:827] "Syncing iptables rules"
I0623 20:53:17.333282      10 proxier.go:1604] "Opened local port" port={Description:nodePort for services-1408/nodeport-range-test IP: IPFamily:4 Port:31072 Protocol:TCP}
E0623 20:53:17.333343      10 proxier.go:1600] "can't open port, skipping it" err="listen tcp4 :31072: bind: address already in use" port={Description:nodePort for services-1408/nodeport-range-test IP: IPFamily:4 Port:31072 Protocol:TCP}
I0623 20:53:17.338653      10 proxier.go:794] "SyncProxyRules complete" elapsed="31.353289ms"
I0623 20:53:17.338868      10 proxier.go:827] "Syncing iptables rules"
I0623 20:53:17.369319      10 proxier.go:794] "SyncProxyRules complete" elapsed="30.631944ms"
I0623 20:53:17.633136      10 service.go:304] "Service updated ports" service="services-1408/nodeport-range-test" portCount=0
I0623 20:53:18.369519      10 service.go:444] "Removing service port" portName="services-1408/nodeport-range-test"
I0623 20:53:18.369652      10 proxier.go:827] "Syncing iptables rules"
I0623 20:53:18.406750      10 proxier.go:794] "SyncProxyRules complete" elapsed="37.261579ms"
I0623 20:53:28.064436      10 proxier.go:827] "Syncing iptables rules"
I0623 20:53:28.107701      10 proxier.go:794] "SyncProxyRules complete" elapsed="43.381542ms"
I0623 20:53:28.108240      10 proxier.go:827] "Syncing iptables rules"
I0623 20:53:28.139353      10 proxier.go:794] "SyncProxyRules complete" elapsed="31.20973ms"
I0623 20:53:29.598439      10 proxier.go:827] "Syncing iptables rules"
I0623 20:53:29.759259      10 proxier.go:794] "SyncProxyRules complete" elapsed="160.937144ms"
I0623 20:53:30.750896      10 proxier.go:827] "Syncing iptables rules"
I0623 20:53:30.799204      10 proxier.go:794] "SyncProxyRules complete" elapsed="48.439899ms"
I0623 20:53:31.690068      10 service.go:304] "Service updated ports" service="conntrack-4166/svc-udp" portCount=1
I0623 20:53:31.690116      10 service.go:419] "Adding new service port" portName="conntrack-4166/svc-udp:udp" servicePort="100.71.10.59:80/UDP"
I0623 20:53:31.690236      10 proxier.go:827] "Syncing iptables rules"
I0623 20:53:31.726656      10 proxier.go:794] "SyncProxyRules complete" elapsed="36.543039ms"
I0623 20:53:32.728272      10 proxier.go:827] "Syncing iptables rules"
I0623 20:53:32.771248      10 proxier.go:794] "SyncProxyRules complete" elapsed="43.050958ms"
I0623 20:53:34.759741      10 proxier.go:827] "Syncing iptables rules"
I0623 20:53:34.816720      10 proxier.go:794] "SyncProxyRules complete" elapsed="57.091604ms"
I0623 20:53:34.963454      10 proxier.go:827] "Syncing iptables rules"
I0623 20:53:35.012308      10 proxier.go:794] "SyncProxyRules complete" elapsed="49.021708ms"
I0623 20:53:35.166657      10 service.go:304] "Service updated ports" service="services-5186/affinity-nodeport" portCount=0
I0623 20:53:36.013433      10 service.go:444] "Removing service port" portName="services-5186/affinity-nodeport"
I0623 20:53:36.013636      10 proxier.go:827] "Syncing iptables rules"
I0623 20:53:36.046002      10 proxier.go:794] "SyncProxyRules complete" elapsed="32.585909ms"
I0623 20:53:36.121870      10 service.go:304] "Service updated ports" service="services-6940/affinity-clusterip" portCount=1
I0623 20:53:37.047159      10 service.go:419] "Adding new service port" portName="services-6940/affinity-clusterip" servicePort="100.69.45.173:80/TCP"
I0623 20:53:37.047244      10 proxier.go:827] "Syncing iptables rules"
I0623 20:53:37.078070      10 proxier.go:794] "SyncProxyRules complete" elapsed="30.949553ms"
I0623 20:53:39.299311      10 proxier.go:827] "Syncing iptables rules"
I0623 20:53:39.341164      10 proxier.go:794] "SyncProxyRules complete" elapsed="41.966385ms"
I0623 20:53:39.593632      10 service.go:304] "Service updated ports" service="proxy-804/test-service" portCount=1
I0623 20:53:39.593711      10 service.go:419] "Adding new service port" portName="proxy-804/test-service" servicePort="100.68.242.234:80/TCP"
I0623 20:53:39.594053      10 proxier.go:827] "Syncing iptables rules"
I0623 20:53:39.652636      10 proxier.go:794] "SyncProxyRules complete" elapsed="58.952731ms"
I0623 20:53:40.652863      10 proxier.go:827] "Syncing iptables rules"
I0623 20:53:40.691945      10 proxier.go:794] "SyncProxyRules complete" elapsed="39.18034ms"
I0623 20:53:41.798626      10 proxier.go:827] "Syncing iptables rules"
I0623 20:53:41.842111      10 proxier.go:794] "SyncProxyRules complete" elapsed="43.66276ms"
I0623 20:53:43.308373      10 service.go:304] "Service updated ports" service="services-8134/affinity-nodeport-transition" portCount=1
I0623 20:53:43.308419      10 service.go:419] "Adding new service port" portName="services-8134/affinity-nodeport-transition" servicePort="100.68.194.50:80/TCP"
I0623 20:53:43.308657      10 proxier.go:827] "Syncing iptables rules"
I0623 20:53:43.349527      10 proxier.go:1604] "Opened local port" port={Description:nodePort for services-8134/affinity-nodeport-transition IP: IPFamily:4 Port:30929 Protocol:TCP}
E0623 20:53:43.349598      10 proxier.go:1600] "can't open port, skipping it" err="listen tcp4 :30929: bind: address already in use" port={Description:nodePort for services-8134/affinity-nodeport-transition IP: IPFamily:4 Port:30929 Protocol:TCP}
I0623 20:53:43.354678      10 proxier.go:794] "SyncProxyRules complete" elapsed="46.263079ms"
I0623 20:53:43.354772      10 proxier.go:827] "Syncing iptables rules"
I0623 20:53:43.390947      10 proxier.go:794] "SyncProxyRules complete" elapsed="36.235456ms"
I0623 20:53:46.097595      10 proxier.go:827] "Syncing iptables rules"
I0623 20:53:46.132371      10 proxier.go:794] "SyncProxyRules complete" elapsed="34.890888ms"
I0623 20:53:46.542931      10 service.go:304] "Service updated ports" service="proxy-804/test-service" portCount=0
I0623 20:53:46.542970      10 service.go:444] "Removing service port" portName="proxy-804/test-service"
I0623 20:53:46.543156      10 proxier.go:827] "Syncing iptables rules"
I0623 20:53:46.578160      10 proxier.go:794] "SyncProxyRules complete" elapsed="35.182663ms"
I0623 20:53:47.578431      10 proxier.go:827] "Syncing iptables rules"
I0623 20:53:47.617650      10 proxier.go:794] "SyncProxyRules complete" elapsed="39.327417ms"
I0623 20:53:48.400946      10 proxier.go:827] "Syncing iptables rules"
I0623 20:53:48.441078      10 proxier.go:794] "SyncProxyRules complete" elapsed="40.268149ms"
I0623 20:53:50.371876      10 proxier.go:827] "Syncing iptables rules"
I0623 20:53:50.407546      10 proxier.go:794] "SyncProxyRules complete" elapsed="35.800996ms"
I0623 20:54:01.146101      10 service.go:304] "Service updated ports" service="services-8134/affinity-nodeport-transition" portCount=1
I0623 20:54:01.146151      10 service.go:421] "Updating existing service port" portName="services-8134/affinity-nodeport-transition" servicePort="100.68.194.50:80/TCP"
I0623 20:54:01.146525      10 proxier.go:827] "Syncing iptables rules"
I0623 20:54:01.277869      10 proxier.go:794] "SyncProxyRules complete" elapsed="131.712917ms"
I0623 20:54:02.695806      10 service.go:304] "Service updated ports" service="services-8134/affinity-nodeport-transition" portCount=1
I0623 20:54:02.695861      10 service.go:421] "Updating existing service port" portName="services-8134/affinity-nodeport-transition" servicePort="100.68.194.50:80/TCP"
I0623 20:54:02.695976      10 proxier.go:827] "Syncing iptables rules"
I0623 20:54:02.756114      10 proxier.go:794] "SyncProxyRules complete" elapsed="60.25281ms"
I0623 20:54:04.569141      10 proxier.go:827] "Syncing iptables rules"
I0623 20:54:04.609247      10 proxier.go:794] "SyncProxyRules complete" elapsed="40.182832ms"
I0623 20:54:04.609352      10 proxier.go:827] "Syncing iptables rules"
I0623 20:54:04.640883      10 proxier.go:794] "SyncProxyRules complete" elapsed="31.604102ms"
I0623 20:54:05.766120      10 service.go:304] "Service updated ports" service="sctp-1659/sctp-clusterip" portCount=1
I0623 20:54:05.766187      10 service.go:419] "Adding new service port" portName="sctp-1659/sctp-clusterip" servicePort="100.68.238.153:5060/SCTP"
I0623 20:54:05.766494      10 proxier.go:827] "Syncing iptables rules"
I0623 20:54:05.800454      10 proxier.go:794] "SyncProxyRules complete" elapsed="34.279056ms"
I0623 20:54:06.801029      10 proxier.go:827] "Syncing iptables rules"
I0623 20:54:06.842999      10 proxier.go:794] "SyncProxyRules complete" elapsed="42.048373ms"
I0623 20:54:07.843971      10 proxier.go:811] "Stale service" protocol="udp" servicePortName="conntrack-4166/svc-udp:udp" clusterIP="100.71.10.59"
I0623 20:54:07.843991      10 proxier.go:827] "Syncing iptables rules"
I0623 20:54:07.879362      10 proxier.go:794] "SyncProxyRules complete" elapsed="35.541842ms"
I0623 20:54:08.583246      10 proxier.go:827] "Syncing iptables rules"
I0623 20:54:08.627098      10 proxier.go:794] "SyncProxyRules complete" elapsed="43.954897ms"
I0623 20:54:09.627841      10 proxier.go:827] "Syncing iptables rules"
I0623 20:54:09.658073      10 proxier.go:794] "SyncProxyRules complete" elapsed="30.324392ms"
I0623 20:54:12.200991      10 proxier.go:827] "Syncing iptables rules"
I0623 20:54:12.233126      10 proxier.go:794] "SyncProxyRules complete" elapsed="32.213055ms"
I0623 20:54:12.607424      10 proxier.go:827] "Syncing iptables rules"
I0623 20:54:12.640686      10 proxier.go:794] "SyncProxyRules complete" elapsed="33.330096ms"
I0623 20:54:12.756540      10 service.go:304] "Service updated ports" service="services-8134/affinity-nodeport-transition" portCount=0
I0623 20:54:13.509795      10 service.go:444] "Removing service port" portName="services-8134/affinity-nodeport-transition"
I0623 20:54:13.509916      10 proxier.go:827] "Syncing iptables rules"
I0623 20:54:13.545065      10 proxier.go:794] "SyncProxyRules complete" elapsed="35.278839ms"
==== END logs for container kube-proxy of pod kube-system/kube-proxy-ip-172-20-0-238.eu-west-1.compute.internal ====
==== START logs for container kube-proxy of pod kube-system/kube-proxy-ip-172-20-0-42.eu-west-1.compute.internal ====
2022/06/23 20:32:52 Running command:
Command env: (log-file=/var/log/kube-proxy.log, also-stdout=true, redirect-stderr=true)
Run from directory: 
Executable path: /usr/local/bin/kube-proxy
Args (comma-delimited): /usr/local/bin/kube-proxy,--cluster-cidr=100.96.0.0/11,--conntrack-max-per-core=131072,--hostname-override=ip-172-20-0-42.eu-west-1.compute.internal,--kubeconfig=/var/lib/kube-proxy/kubeconfig,--master=https://api.internal.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io,--oom-score-adj=-998,--v=2
2022/06/23 20:32:52 Now listening for interrupts
I0623 20:32:52.356418      10 flags.go:64] FLAG: --add-dir-header="false"
I0623 20:32:52.356564      10 flags.go:64] FLAG: --alsologtostderr="false"
I0623 20:32:52.356589      10 flags.go:64] FLAG: --bind-address="0.0.0.0"
I0623 20:32:52.356611      10 flags.go:64] FLAG: --bind-address-hard-fail="false"
I0623 20:32:52.356650      10 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
I0623 20:32:52.356685      10 flags.go:64] FLAG: --cleanup="false"
I0623 20:32:52.356705      10 flags.go:64] FLAG: --cluster-cidr="100.96.0.0/11"
I0623 20:32:52.356726      10 flags.go:64] FLAG: --config=""
I0623 20:32:52.356759      10 flags.go:64] FLAG: --config-sync-period="15m0s"
I0623 20:32:52.356791      10 flags.go:64] FLAG: --conntrack-max-per-core="131072"
I0623 20:32:52.356813      10 flags.go:64] FLAG: --conntrack-min="131072"
I0623 20:32:52.356831      10 flags.go:64] FLAG: --conntrack-tcp-timeout-close-wait="1h0m0s"
I0623 20:32:52.356928      10 flags.go:64] FLAG: --conntrack-tcp-timeout-established="24h0m0s"
I0623 20:32:52.356937      10 flags.go:64] FLAG: --detect-local-mode=""
I0623 20:32:52.356944      10 flags.go:64] FLAG: --feature-gates=""
I0623 20:32:52.356950      10 flags.go:64] FLAG: --healthz-bind-address="0.0.0.0:10256"
I0623 20:32:52.356956      10 flags.go:64] FLAG: --healthz-port="10256"
I0623 20:32:52.356961      10 flags.go:64] FLAG: --help="false"
I0623 20:32:52.356969      10 flags.go:64] FLAG: --hostname-override="ip-172-20-0-42.eu-west-1.compute.internal"
I0623 20:32:52.356974      10 flags.go:64] FLAG: --iptables-masquerade-bit="14"
I0623 20:32:52.356978      10 flags.go:64] FLAG: --iptables-min-sync-period="1s"
I0623 20:32:52.356983      10 flags.go:64] FLAG: --iptables-sync-period="30s"
I0623 20:32:52.356987      10 flags.go:64] FLAG: --ipvs-exclude-cidrs="[]"
I0623 20:32:52.356999      10 flags.go:64] FL