Result: FAILURE
Tests: 1 failed / 202 succeeded
Started: 2019-07-08 04:54
Elapsed: 11m58s
Revision: release-1.13
resultstore: https://source.cloud.google.com/results/invocations/23708486-d09a-4555-9350-911c7718f2ad/targets/test
job-version: v1.13.8-beta.0.35+0c6d31a99f8147
revision: v1.13.8-beta.0.35+0c6d31a99f8147

Test Failures


DumpClusterLogs 7.29s

error during kind export logs /logs/artifacts --loglevel=debug --name=kind-kubetest: exit status 1
(from junit_runner.xml)
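
The failure is kubetest's DumpClusterLogs step, not a test case: kubetest shells out to kind to export cluster logs into the artifacts directory, and that command exited non-zero. A minimal reproduction under the same assumptions (a kind release of this era; newer kind releases replaced --loglevel with a -v/--verbosity flag):

    # re-run the exact command kubetest invoked (exit status 1 in this job)
    kind export logs /logs/artifacts --loglevel=debug --name=kind-kubetest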



202 passed tests (not shown)

1972 skipped tests (not shown)

Error lines from build-log.txt

... skipping 100 lines ...
time="04:57:09" level=debug msg="Running: /usr/bin/docker [docker exec 2049fda8a8c16a0ddc5d249195553abf2ce66bad84c9e0cf87516ffc6bf5a2e4 ln -s /kind/bin/kubectl /usr/bin/kubectl]"
time="04:57:10" level=debug msg="Running: /usr/bin/docker [docker exec 2049fda8a8c16a0ddc5d249195553abf2ce66bad84c9e0cf87516ffc6bf5a2e4 systemctl enable /kind/systemd/kubelet.service]"
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /kind/systemd/kubelet.service.
Created symlink /etc/systemd/system/kubelet.service → /kind/systemd/kubelet.service.
time="04:57:10" level=debug msg="Running: /usr/bin/docker [docker exec 2049fda8a8c16a0ddc5d249195553abf2ce66bad84c9e0cf87516ffc6bf5a2e4 mkdir -p /etc/systemd/system/kubelet.service.d]"
time="04:57:10" level=debug msg="Running: /usr/bin/docker [docker exec 2049fda8a8c16a0ddc5d249195553abf2ce66bad84c9e0cf87516ffc6bf5a2e4 cp /kind/systemd/10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf]"
time="04:57:10" level=debug msg="Running: /usr/bin/docker [docker exec 2049fda8a8c16a0ddc5d249195553abf2ce66bad84c9e0cf87516ffc6bf5a2e4 /bin/sh -c echo \"KUBELET_EXTRA_ARGS=--fail-swap-on=false\" >> /etc/default/kubelet]"
time="04:57:11" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t 2049fda8a8c16a0ddc5d249195553abf2ce66bad84c9e0cf87516ffc6bf5a2e4 cat /kind/version]"
time="04:57:11" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t 2049fda8a8c16a0ddc5d249195553abf2ce66bad84c9e0cf87516ffc6bf5a2e4 mkdir -p /kind/manifests]"
time="04:57:11" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -i 2049fda8a8c16a0ddc5d249195553abf2ce66bad84c9e0cf87516ffc6bf5a2e4 cp /dev/stdin /kind/manifests/default-cni.yaml]"
time="04:57:11" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t 2049fda8a8c16a0ddc5d249195553abf2ce66bad84c9e0cf87516ffc6bf5a2e4 kubeadm config images list --kubernetes-version v1.13.8-beta.0.35+0c6d31a99f8147]"
Pulling: k8s.gcr.io/pause:3.1
time="04:57:12" level=info msg="Pulling image: k8s.gcr.io/pause:3.1 ..."
... skipping 154 lines ...
time="04:58:49" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t kind-kubetest-control-plane cat /kind/version]"
time="04:58:49" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t kind-kubetest-control-plane mkdir -p /kind]"
time="04:58:50" level=debug msg="Running: /usr/bin/docker [docker cp /tmp/234402497 kind-kubetest-control-plane:/kind/kubeadm.conf]"
 ✓ Creating kubeadm config 📜
 • Starting control-plane 🕹️  ...
time="04:58:50" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t kind-kubetest-control-plane kubeadm init --ignore-preflight-errors=all --config=/kind/kubeadm.conf --skip-token-print --v=6]"
time="04:59:25" level=debug msg="I0708 04:58:50.734769     759 initconfiguration.go:169] loading configuration from the given file\nW0708 04:58:50.736211     759 common.go:86] WARNING: Detected resource kinds that may not apply: [InitConfiguration JoinConfiguration]\nW0708 04:58:50.736812     759 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:\"kubeadm.k8s.io\", Version:\"v1beta1\", Kind:\"ClusterConfiguration\"}: error unmarshaling JSON: while decoding JSON: json: unknown field \"metadata\"\nW0708 04:58:50.739048     759 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:\"kubeadm.k8s.io\", Version:\"v1beta1\", Kind:\"InitConfiguration\"}: error unmarshaling JSON: while decoding JSON: json: unknown field \"metadata\"\nW0708 04:58:50.739792     759 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:\"kubeadm.k8s.io\", Version:\"v1beta1\", Kind:\"JoinConfiguration\"}: error unmarshaling JSON: while decoding JSON: json: unknown field \"metadata\"\n[config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta1, Kind=JoinConfiguration\nW0708 04:58:50.740384     759 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:\"kubelet.config.k8s.io\", Version:\"v1beta1\", Kind:\"KubeletConfiguration\"}: error unmarshaling JSON: while decoding JSON: json: unknown field \"metadata\"\nW0708 04:58:50.741398     759 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:\"kubeproxy.config.k8s.io\", Version:\"v1alpha1\", Kind:\"KubeProxyConfiguration\"}: error unmarshaling JSON: while decoding JSON: json: unknown field \"metadata\"\nI0708 04:58:50.743557     759 interface.go:384] Looking for default routes with IPv4 addresses\nI0708 04:58:50.743591     759 interface.go:389] Default route transits interface \"eth0\"\nI0708 04:58:50.743769     759 interface.go:196] Interface eth0 is up\nI0708 04:58:50.743810     759 interface.go:244] Interface \"eth0\" has 1 addresses :[172.17.0.5/16].\nI0708 04:58:50.743833     759 interface.go:211] Checking addr  172.17.0.5/16.\nI0708 04:58:50.743849     759 interface.go:218] IP found 172.17.0.5\nI0708 04:58:50.743859     759 interface.go:250] Found valid IPv4 address 172.17.0.5 for interface \"eth0\".\nI0708 04:58:50.743864     759 interface.go:395] Found active IP 172.17.0.5 \nI0708 04:58:50.745017     759 feature_gate.go:206] feature gates: &{map[]}\n[init] Using Kubernetes version: v1.13.8-beta.0.35+0c6d31a99f8147\n[preflight] Running pre-flight checks\nI0708 04:58:50.745482     759 checks.go:572] validating Kubernetes and kubeadm version\nI0708 04:58:50.745514     759 checks.go:171] validating if the firewall is enabled and active\nI0708 04:58:50.758590     759 checks.go:208] validating availability of port 6443\nI0708 04:58:50.758869     759 checks.go:208] validating availability of port 10251\nI0708 04:58:50.758902     759 checks.go:208] validating availability of port 10252\nI0708 04:58:50.758932     759 checks.go:283] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml\nI0708 04:58:50.758950     759 checks.go:283] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml\nI0708 04:58:50.758964     759 checks.go:283] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml\nI0708 04:58:50.758973     759 checks.go:283] validating the existence of file /etc/kubernetes/manifests/etcd.yaml\nI0708 04:58:50.758985     759 checks.go:430] validating 
if the connectivity type is via proxy or direct\nI0708 04:58:50.759130     759 checks.go:466] validating http connectivity to first IP address in the CIDR\nI0708 04:58:50.759187     759 checks.go:466] validating http connectivity to first IP address in the CIDR\nI0708 04:58:50.759200     759 checks.go:104] validating the container runtime\nI0708 04:58:50.839871     759 checks.go:130] validating if the service is enabled and active\nI0708 04:58:50.862970     759 checks.go:332] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables\n\t[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist\nI0708 04:58:50.863076     759 checks.go:332] validating the contents of file /proc/sys/net/ipv4/ip_forward\nI0708 04:58:50.863128     759 checks.go:644] validating whether swap is enabled or not\nI0708 04:58:50.863193     759 checks.go:373] validating the presence of executable ip\nI0708 04:58:50.863309     759 checks.go:373] validating the presence of executable iptables\nI0708 04:58:50.863336     759 checks.go:373] validating the presence of executable mount\nI0708 04:58:50.863361     759 checks.go:373] validating the presence of executable nsenter\nI0708 04:58:50.863399     759 checks.go:373] validating the presence of executable ebtables\nI0708 04:58:50.863439     759 checks.go:373] validating the presence of executable ethtool\nI0708 04:58:50.863487     759 checks.go:373] validating the presence of executable socat\nI0708 04:58:50.863524     759 checks.go:373] validating the presence of executable tc\nI0708 04:58:50.863563     759 checks.go:373] validating the presence of executable touch\nI0708 04:58:50.863627     759 checks.go:515] running all checks\nI0708 04:58:50.893992     759 checks.go:403] checking whether the given node name is reachable using net.LookupHost\nI0708 04:58:50.894274     759 checks.go:613] validating kubelet version\nI0708 04:58:50.975592     759 checks.go:130] validating if the service is enabled and active\nI0708 04:58:50.992321     759 checks.go:208] validating availability of port 10250\nI0708 04:58:50.992425     759 checks.go:208] validating availability of port 2379\nI0708 04:58:50.992452     759 checks.go:208] validating availability of port 2380\nI0708 04:58:50.992476     759 checks.go:245] validating the existence and emptiness of directory /var/lib/etcd\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'\nI0708 04:58:51.068272     759 checks.go:833] image exists: k8s.gcr.io/kube-apiserver:v1.13.8-beta.0.35_0c6d31a99f8147\nI0708 04:58:51.137285     759 checks.go:833] image exists: k8s.gcr.io/kube-controller-manager:v1.13.8-beta.0.35_0c6d31a99f8147\nI0708 04:58:51.207429     759 checks.go:833] image exists: k8s.gcr.io/kube-scheduler:v1.13.8-beta.0.35_0c6d31a99f8147\nI0708 04:58:51.276780     759 checks.go:833] image exists: k8s.gcr.io/kube-proxy:v1.13.8-beta.0.35_0c6d31a99f8147\nI0708 04:58:51.346962     759 checks.go:833] image exists: k8s.gcr.io/pause:3.1\nI0708 04:58:51.421476     759 checks.go:833] image exists: k8s.gcr.io/etcd:3.2.24\nI0708 04:58:51.503647     759 checks.go:833] image exists: k8s.gcr.io/coredns:1.2.6\nI0708 04:58:51.503719     759 kubelet.go:71] Stopping the kubelet\n[kubelet-start] Writing kubelet environment file with flags to file 
\"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\nI0708 04:58:51.606725     759 kubelet.go:89] Starting the kubelet\n[kubelet-start] Activating the kubelet service\n[certs] Using certificateDir folder \"/etc/kubernetes/pki\"\nI0708 04:58:51.665194     759 certs.go:113] creating a new certificate authority for front-proxy-ca\n[certs] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\nI0708 04:58:52.132650     759 certs.go:113] creating a new certificate authority for etcd-ca\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [kind-kubetest-control-plane localhost] and IPs [172.17.0.5 127.0.0.1 ::1]\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [kind-kubetest-control-plane localhost] and IPs [172.17.0.5 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"apiserver-etcd-client\" certificate and key\nI0708 04:58:53.447572     759 certs.go:113] creating a new certificate authority for ca\n[certs] Generating \"ca\" certificate and key\n[certs] Generating \"apiserver\" certificate and key\n[certs] apiserver serving cert is signed for DNS names [kind-kubetest-control-plane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 172.17.0.5]\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\nI0708 04:58:54.396298     759 certs.go:72] creating a new public/private key files for signing service account users\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\nI0708 04:58:54.681625     759 kubeconfig.go:92] creating kubeconfig file for admin.conf\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\nI0708 04:58:54.901345     759 kubeconfig.go:92] creating kubeconfig file for kubelet.conf\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\nI0708 04:58:55.170674     759 kubeconfig.go:92] creating kubeconfig file for controller-manager.conf\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\nI0708 04:58:55.322825     759 kubeconfig.go:92] creating kubeconfig file for scheduler.conf\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\nI0708 04:58:55.586514     759 manifests.go:97] [control-plane] getting StaticPodSpecs\nI0708 04:58:55.594601     759 manifests.go:113] [control-plane] wrote static Pod manifest for component \"kube-apiserver\" to \"/etc/kubernetes/manifests/kube-apiserver.yaml\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\nI0708 04:58:55.594657     759 manifests.go:97] [control-plane] getting StaticPodSpecs\nI0708 04:58:55.596035     759 manifests.go:113] [control-plane] wrote static Pod manifest for component \"kube-controller-manager\" to \"/etc/kubernetes/manifests/kube-controller-manager.yaml\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\nI0708 04:58:55.596082     759 manifests.go:97] [control-plane] getting StaticPodSpecs\nI0708 04:58:55.598271     759 manifests.go:113] [control-plane] wrote static Pod manifest for component \"kube-scheduler\" to 
\"/etc/kubernetes/manifests/kube-scheduler.yaml\"\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\nI0708 04:58:55.598953     759 local.go:60] [etcd] wrote Static Pod manifest for a local etcd member to \"/etc/kubernetes/manifests/etcd.yaml\"\nI0708 04:58:55.598978     759 waitcontrolplane.go:89] [wait-control-plane] Waiting for the API server to be healthy\nI0708 04:58:55.600022     759 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". This can take up to 4m0s\nI0708 04:58:55.603975     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s  in 1 milliseconds\nI0708 04:58:56.104816     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s  in 0 milliseconds\nI0708 04:58:56.605289     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s  in 0 milliseconds\nI0708 04:58:57.105322     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s  in 0 milliseconds\nI0708 04:58:57.604798     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s  in 0 milliseconds\nI0708 04:58:58.104771     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s  in 0 milliseconds\nI0708 04:58:58.604638     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s  in 0 milliseconds\nI0708 04:58:59.104747     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s  in 0 milliseconds\nI0708 04:58:59.604864     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s  in 0 milliseconds\nI0708 04:59:00.104711     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s  in 0 milliseconds\nI0708 04:59:00.604729     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s  in 0 milliseconds\nI0708 04:59:01.105085     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s  in 0 milliseconds\nI0708 04:59:01.604881     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s  in 0 milliseconds\nI0708 04:59:02.105269     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s  in 0 milliseconds\nI0708 04:59:02.604792     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s  in 0 milliseconds\nI0708 04:59:03.104771     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s  in 0 milliseconds\nI0708 04:59:03.604692     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s  in 0 milliseconds\nI0708 04:59:04.104773     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s  in 0 milliseconds\nI0708 04:59:04.604863     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s  in 0 milliseconds\nI0708 04:59:05.104702     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s  in 0 milliseconds\nI0708 04:59:05.604732     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s  in 0 milliseconds\nI0708 04:59:06.104736     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s  in 0 milliseconds\nI0708 04:59:06.604770     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s  in 0 milliseconds\nI0708 04:59:07.105120     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s  in 0 
milliseconds\nI0708 04:59:07.604902     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s  in 0 milliseconds\nI0708 04:59:08.104828     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s  in 0 milliseconds\nI0708 04:59:08.604914     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s  in 0 milliseconds\nI0708 04:59:09.104730     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s  in 0 milliseconds\nI0708 04:59:09.604699     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s  in 0 milliseconds\nI0708 04:59:10.104732     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s  in 0 milliseconds\nI0708 04:59:10.604663     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s  in 0 milliseconds\nI0708 04:59:11.104669     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s  in 0 milliseconds\nI0708 04:59:11.604747     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s  in 0 milliseconds\nI0708 04:59:12.104761     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s  in 0 milliseconds\nI0708 04:59:12.604830     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s  in 0 milliseconds\nI0708 04:59:13.104762     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s  in 0 milliseconds\nI0708 04:59:13.604660     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s  in 0 milliseconds\nI0708 04:59:21.151298     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s 500 Internal Server Error in 7046 milliseconds\nI0708 04:59:21.607351     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s 500 Internal Server Error in 2 milliseconds\nI0708 04:59:22.108009     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s 500 Internal Server Error in 3 milliseconds\nI0708 04:59:22.606425     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s 500 Internal Server Error in 2 milliseconds\nI0708 04:59:23.106596     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s 500 Internal Server Error in 2 milliseconds\nI0708 04:59:23.606550     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s 500 Internal Server Error in 2 milliseconds\nI0708 04:59:24.106392     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s 500 Internal Server Error in 2 milliseconds\nI0708 04:59:24.607896     759 round_trippers.go:438] GET https://172.17.0.5:6443/healthz?timeout=32s 200 OK in 3 milliseconds\n[apiclient] All control plane components are healthy after 29.005807 seconds\nI0708 04:59:24.608936     759 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf\nI0708 04:59:24.609954     759 uploadconfig.go:114] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap\n[uploadconfig] storing the configuration used in ConfigMap \"kubeadm-config\" in the \"kube-system\" Namespace\nI0708 04:59:24.614765     759 round_trippers.go:438] GET https://172.17.0.5:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 404 Not Found in 3 milliseconds\nI0708 04:59:24.620551     759 round_trippers.go:438] POST https://172.17.0.5:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 4 milliseconds\nI0708 04:59:24.628916     759 round_trippers.go:438] POST 
https://172.17.0.5:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 201 Created in 3 milliseconds\nI0708 04:59:24.633669     759 round_trippers.go:438] POST https://172.17.0.5:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 201 Created in 3 milliseconds\nI0708 04:59:24.635259     759 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf\nI0708 04:59:24.635833     759 uploadconfig.go:128] [upload-config] Uploading the kubelet component config to a ConfigMap\n[kubelet] Creating a ConfigMap \"kubelet-config-1.13\" in namespace kube-system with the configuration for the kubelets in the cluster\nI0708 04:59:24.640092     759 round_trippers.go:438] POST https://172.17.0.5:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 3 milliseconds\nI0708 04:59:24.643415     759 round_trippers.go:438] POST https://172.17.0.5:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 201 Created in 2 milliseconds\nI0708 04:59:24.646074     759 round_trippers.go:438] POST https://172.17.0.5:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 201 Created in 2 milliseconds\nI0708 04:59:24.646414     759 uploadconfig.go:133] [upload-config] Preserving the CRISocket information for the control-plane node\n[patchnode] Uploading the CRI Socket information \"/var/run/dockershim.sock\" to the Node API object \"kind-kubetest-control-plane\" as an annotation\nI0708 04:59:25.150419     759 round_trippers.go:438] GET https://172.17.0.5:6443/api/v1/nodes/kind-kubetest-control-plane 200 OK in 3 milliseconds\nI0708 04:59:25.158206     759 round_trippers.go:438] PATCH https://172.17.0.5:6443/api/v1/nodes/kind-kubetest-control-plane 200 OK in 4 milliseconds\nI0708 04:59:25.159952     759 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf\n[mark-control-plane] Marking the node kind-kubetest-control-plane as control-plane by adding the label \"node-role.kubernetes.io/master=''\"\n[mark-control-plane] Marking the node kind-kubetest-control-plane as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]\nI0708 04:59:25.668522     759 round_trippers.go:438] GET https://172.17.0.5:6443/api/v1/nodes/kind-kubetest-control-plane 200 OK in 3 milliseconds\nI0708 04:59:25.674324     759 round_trippers.go:438] PATCH https://172.17.0.5:6443/api/v1/nodes/kind-kubetest-control-plane 200 OK in 4 milliseconds\nI0708 04:59:25.675563     759 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf\n[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles\nI0708 04:59:25.678480     759 round_trippers.go:438] GET https://172.17.0.5:6443/api/v1/namespaces/kube-system/secrets/bootstrap-token-abcdef 404 Not Found in 2 milliseconds\nI0708 04:59:25.682856     759 round_trippers.go:438] POST https://172.17.0.5:6443/api/v1/namespaces/kube-system/secrets 201 Created in 4 milliseconds\n[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials\nI0708 04:59:25.687492     759 round_trippers.go:438] POST https://172.17.0.5:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 3 milliseconds\n[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token\nI0708 04:59:25.691527     759 round_trippers.go:438] POST https://172.17.0.5:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 3 
milliseconds\n[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster\nI0708 04:59:25.694564     759 round_trippers.go:438] POST https://172.17.0.5:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 2 milliseconds\n[bootstraptoken] creating the \"cluster-info\" ConfigMap in the \"kube-public\" namespace\nI0708 04:59:25.694705     759 clusterinfo.go:46] [bootstraptoken] loading admin kubeconfig\nI0708 04:59:25.695370     759 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf\nI0708 04:59:25.695394     759 clusterinfo.go:54] [bootstraptoken] copying the cluster from admin.conf to the bootstrap kubeconfig\nI0708 04:59:25.695746     759 clusterinfo.go:66] [bootstraptoken] creating/updating ConfigMap in kube-public namespace\nI0708 04:59:25.699040     759 round_trippers.go:438] POST https://172.17.0.5:6443/api/v1/namespaces/kube-public/configmaps 201 Created in 3 milliseconds\nI0708 04:59:25.699365     759 clusterinfo.go:80] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace\nI0708 04:59:25.701941     759 round_trippers.go:438] POST https://172.17.0.5:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles 201 Created in 2 milliseconds\nI0708 04:59:25.704171     759 round_trippers.go:438] POST https://172.17.0.5:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings 201 Created in 1 milliseconds\nI0708 04:59:25.704897     759 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf\nI0708 04:59:25.707808     759 round_trippers.go:438] GET https://172.17.0.5:6443/api/v1/namespaces/kube-system/configmaps/kube-dns 404 Not Found in 2 milliseconds\nI0708 04:59:25.709688     759 round_trippers.go:438] GET https://172.17.0.5:6443/api/v1/namespaces/kube-system/configmaps/coredns 404 Not Found in 1 milliseconds\nI0708 04:59:25.712466     759 round_trippers.go:438] POST https://172.17.0.5:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 2 milliseconds\nI0708 04:59:25.717178     759 round_trippers.go:438] POST https://172.17.0.5:6443/apis/rbac.authorization.k8s.io/v1/clusterroles 201 Created in 3 milliseconds\nI0708 04:59:25.720111     759 round_trippers.go:438] POST https://172.17.0.5:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 2 milliseconds\nI0708 04:59:25.726462     759 round_trippers.go:438] POST https://172.17.0.5:6443/api/v1/namespaces/kube-system/serviceaccounts 201 Created in 5 milliseconds\nI0708 04:59:25.747957     759 round_trippers.go:438] POST https://172.17.0.5:6443/apis/apps/v1/namespaces/kube-system/deployments 201 Created in 12 milliseconds\nI0708 04:59:25.756277     759 round_trippers.go:438] POST https://172.17.0.5:6443/api/v1/namespaces/kube-system/services 201 Created in 6 milliseconds\n[addons] Applied essential addon: CoreDNS\nI0708 04:59:25.757325     759 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf\nI0708 04:59:25.760592     759 round_trippers.go:438] POST https://172.17.0.5:6443/api/v1/namespaces/kube-system/serviceaccounts 201 Created in 2 milliseconds\nI0708 04:59:25.764558     759 round_trippers.go:438] POST https://172.17.0.5:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 2 milliseconds\nI0708 04:59:25.784726     759 round_trippers.go:438] POST https://172.17.0.5:6443/apis/apps/v1/namespaces/kube-system/daemonsets 201 Created in 12 milliseconds\nI0708 04:59:25.787573     759 round_trippers.go:438] POST 
https://172.17.0.5:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 2 milliseconds\nI0708 04:59:25.789975     759 round_trippers.go:438] POST https://172.17.0.5:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 201 Created in 2 milliseconds\nI0708 04:59:25.792672     759 round_trippers.go:438] POST https://172.17.0.5:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 201 Created in 2 milliseconds\n[addons] Applied essential addon: kube-proxy\nI0708 04:59:25.793546     759 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf\n\nYour Kubernetes master has initialized successfully!\n\nTo start using your cluster, you need to run the following as a regular user:\n\n  mkdir -p $HOME/.kube\n  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\n  sudo chown $(id -u):$(id -g) $HOME/.kube/config\n\nYou should now deploy a pod network to the cluster.\nRun \"kubectl apply -f [podnetwork].yaml\" with one of the options listed at:\n  https://kubernetes.io/docs/concepts/cluster-administration/addons/\n\nYou can now join any number of machines by running the following on each node\nas root:\n\n  kubeadm join 172.17.0.5:6443 --token <value withheld> --discovery-token-ca-cert-hash sha256:b1f6d1301fa598f80325c456bc2959771a71d083eec0d0d06ecb68145dc6e2cd\n"
time="04:59:25" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{(index (index .NetworkSettings.Ports \"6443/tcp\") 0).HostPort}} kind-kubetest-control-plane]"
time="04:59:25" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t kind-kubetest-control-plane cat /etc/kubernetes/admin.conf]"
time="04:59:26" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-kubetest-control-plane test -f /kind/manifests/default-cni.yaml]"
time="04:59:26" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-kubetest-control-plane kubectl create --kubeconfig=/etc/kubernetes/admin.conf -f /kind/manifests/default-cni.yaml]"
time="04:59:27" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -i kind-kubetest-control-plane kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f -]"
 ✓ Starting control-plane 🕹️
 • Joining worker nodes 🚜  ...
time="04:59:27" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}} kind-kubetest-control-plane]"
time="04:59:27" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}} kind-kubetest-control-plane]"
time="04:59:27" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}} kind-kubetest-control-plane]"
time="04:59:27" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t kind-kubetest-worker3 kubeadm join 172.17.0.5:6443 --token abcdef.0123456789abcdef --discovery-token-unsafe-skip-ca-verification --ignore-preflight-errors=all --v=6]"
time="04:59:27" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t kind-kubetest-worker kubeadm join 172.17.0.5:6443 --token abcdef.0123456789abcdef --discovery-token-unsafe-skip-ca-verification --ignore-preflight-errors=all --v=6]"
time="04:59:27" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t kind-kubetest-worker2 kubeadm join 172.17.0.5:6443 --token abcdef.0123456789abcdef --discovery-token-unsafe-skip-ca-verification --ignore-preflight-errors=all --v=6]"
time="04:59:35" level=debug msg="I0708 04:59:27.920288     806 join.go:299] [join] found NodeName empty; using OS hostname as NodeName\n[preflight] Running pre-flight checks\nI0708 04:59:27.920552     806 join.go:328] [preflight] Running general checks\nI0708 04:59:27.920674     806 checks.go:245] validating the existence and emptiness of directory /etc/kubernetes/manifests\nI0708 04:59:27.920740     806 checks.go:283] validating the existence of file /etc/kubernetes/kubelet.conf\nI0708 04:59:27.920792     806 checks.go:283] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf\nI0708 04:59:27.920839     806 checks.go:104] validating the container runtime\nI0708 04:59:28.020234     806 checks.go:130] validating if the service is enabled and active\nI0708 04:59:28.042225     806 checks.go:332] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables\n\t[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist\nI0708 04:59:28.042415     806 checks.go:332] validating the contents of file /proc/sys/net/ipv4/ip_forward\nI0708 04:59:28.042509     806 checks.go:644] validating whether swap is enabled or not\nI0708 04:59:28.042612     806 checks.go:373] validating the presence of executable ip\nI0708 04:59:28.042762     806 checks.go:373] validating the presence of executable iptables\nI0708 04:59:28.042817     806 checks.go:373] validating the presence of executable mount\nI0708 04:59:28.042884     806 checks.go:373] validating the presence of executable nsenter\nI0708 04:59:28.042963     806 checks.go:373] validating the presence of executable ebtables\nI0708 04:59:28.043046     806 checks.go:373] validating the presence of executable ethtool\nI0708 04:59:28.043173     806 checks.go:373] validating the presence of executable socat\nI0708 04:59:28.043290     806 checks.go:373] validating the presence of executable tc\nI0708 04:59:28.043405     806 checks.go:373] validating the presence of executable touch\nI0708 04:59:28.043516     806 checks.go:515] running all checks\nI0708 04:59:28.070893     806 checks.go:403] checking whether the given node name is reachable using net.LookupHost\nI0708 04:59:28.071485     806 checks.go:613] validating kubelet version\nI0708 04:59:28.162380     806 checks.go:130] validating if the service is enabled and active\nI0708 04:59:28.179515     806 checks.go:208] validating availability of port 10250\nI0708 04:59:28.179700     806 checks.go:283] validating the existence of file /etc/kubernetes/pki/ca.crt\nI0708 04:59:28.179726     806 checks.go:430] validating if the connectivity type is via proxy or direct\nI0708 04:59:28.179769     806 join.go:334] [preflight] Fetching init configuration\nI0708 04:59:28.179783     806 join.go:603] [join] Discovering cluster-info\n[discovery] Trying to connect to API Server \"172.17.0.5:6443\"\n[discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.5:6443\"\nI0708 04:59:28.190865     806 round_trippers.go:438] GET https://172.17.0.5:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 10 milliseconds\n[discovery] Failed to connect to API Server \"172.17.0.5:6443\": token id \"abcdef\" is invalid for this cluster or it has expired. 
Use \"kubeadm token create\" on the master node to creating a new valid token\n[discovery] Trying to connect to API Server \"172.17.0.5:6443\"\n[discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.5:6443\"\nI0708 04:59:33.195210     806 round_trippers.go:438] GET https://172.17.0.5:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 1 milliseconds\n[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server \"172.17.0.5:6443\"\n[discovery] Successfully established connection with API Server \"172.17.0.5:6443\"\nI0708 04:59:33.196529     806 join.go:610] [join] Retrieving KubeConfig objects\n[join] Reading configuration from the cluster...\n[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'\nI0708 04:59:33.203743     806 round_trippers.go:438] GET https://172.17.0.5:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 200 OK in 6 milliseconds\nI0708 04:59:33.206263     806 round_trippers.go:438] GET https://172.17.0.5:6443/api/v1/namespaces/kube-system/configmaps/kube-proxy 200 OK in 1 milliseconds\nI0708 04:59:33.209026     806 round_trippers.go:438] GET https://172.17.0.5:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.13 200 OK in 1 milliseconds\nI0708 04:59:33.210729     806 interface.go:384] Looking for default routes with IPv4 addresses\nI0708 04:59:33.210744     806 interface.go:389] Default route transits interface \"eth0\"\nI0708 04:59:33.210864     806 interface.go:196] Interface eth0 is up\nI0708 04:59:33.210902     806 interface.go:244] Interface \"eth0\" has 1 addresses :[172.17.0.2/16].\nI0708 04:59:33.210918     806 interface.go:211] Checking addr  172.17.0.2/16.\nI0708 04:59:33.210924     806 interface.go:218] IP found 172.17.0.2\nI0708 04:59:33.210948     806 interface.go:250] Found valid IPv4 address 172.17.0.2 for interface \"eth0\".\nI0708 04:59:33.210952     806 interface.go:395] Found active IP 172.17.0.2 \nI0708 04:59:33.211685     806 join.go:341] [preflight] Running configuration dependant checks\nI0708 04:59:33.211717     806 join.go:478] [join] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf\nI0708 04:59:33.300812     806 loader.go:359] Config loaded from file /etc/kubernetes/bootstrap-kubelet.conf\nI0708 04:59:33.301658     806 join.go:503] Stopping the kubelet\n[kubelet] Downloading configuration for the kubelet from the \"kubelet-config-1.13\" ConfigMap in the kube-system namespace\nI0708 04:59:33.321330     806 round_trippers.go:438] GET https://172.17.0.5:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.13 200 OK in 2 milliseconds\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\nI0708 04:59:33.414708     806 join.go:520] Starting the kubelet\n[kubelet-start] Activating the kubelet service\n[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...\nI0708 04:59:34.491863     806 loader.go:359] Config loaded from file /etc/kubernetes/kubelet.conf\nI0708 04:59:34.503691     806 loader.go:359] Config loaded from file /etc/kubernetes/kubelet.conf\nI0708 04:59:34.507714     806 join.go:538] [join] preserving the crisocket information for the node\n[patchnode] Uploading the CRI Socket information \"/var/run/dockershim.sock\" to the Node API object \"kind-kubetest-worker\" as an 
annotation\nI0708 04:59:35.018120     806 round_trippers.go:438] GET https://172.17.0.5:6443/api/v1/nodes/kind-kubetest-worker 200 OK in 10 milliseconds\nI0708 04:59:35.029181     806 round_trippers.go:438] PATCH https://172.17.0.5:6443/api/v1/nodes/kind-kubetest-worker 200 OK in 6 milliseconds\n\nThis node has joined the cluster:\n* Certificate signing request was sent to apiserver and a response was received.\n* The Kubelet was informed of the new secure connection details.\n\nRun 'kubectl get nodes' on the master to see this node join the cluster.\n"
time="04:59:35" level=debug msg="I0708 04:59:27.901484     814 join.go:299] [join] found NodeName empty; using OS hostname as NodeName\n[preflight] Running pre-flight checks\nI0708 04:59:27.901707     814 join.go:328] [preflight] Running general checks\nI0708 04:59:27.901824     814 checks.go:245] validating the existence and emptiness of directory /etc/kubernetes/manifests\nI0708 04:59:27.902372     814 checks.go:283] validating the existence of file /etc/kubernetes/kubelet.conf\nI0708 04:59:27.902575     814 checks.go:283] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf\nI0708 04:59:27.902624     814 checks.go:104] validating the container runtime\nI0708 04:59:28.009123     814 checks.go:130] validating if the service is enabled and active\nI0708 04:59:28.039818     814 checks.go:332] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables\n\t[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist\nI0708 04:59:28.039912     814 checks.go:332] validating the contents of file /proc/sys/net/ipv4/ip_forward\nI0708 04:59:28.039994     814 checks.go:644] validating whether swap is enabled or not\nI0708 04:59:28.040083     814 checks.go:373] validating the presence of executable ip\nI0708 04:59:28.040215     814 checks.go:373] validating the presence of executable iptables\nI0708 04:59:28.040255     814 checks.go:373] validating the presence of executable mount\nI0708 04:59:28.040296     814 checks.go:373] validating the presence of executable nsenter\nI0708 04:59:28.040331     814 checks.go:373] validating the presence of executable ebtables\nI0708 04:59:28.040366     814 checks.go:373] validating the presence of executable ethtool\nI0708 04:59:28.040402     814 checks.go:373] validating the presence of executable socat\nI0708 04:59:28.040433     814 checks.go:373] validating the presence of executable tc\nI0708 04:59:28.040477     814 checks.go:373] validating the presence of executable touch\nI0708 04:59:28.040531     814 checks.go:515] running all checks\nI0708 04:59:28.066135     814 checks.go:403] checking whether the given node name is reachable using net.LookupHost\nI0708 04:59:28.066616     814 checks.go:613] validating kubelet version\nI0708 04:59:28.158187     814 checks.go:130] validating if the service is enabled and active\nI0708 04:59:28.179606     814 checks.go:208] validating availability of port 10250\nI0708 04:59:28.179893     814 checks.go:283] validating the existence of file /etc/kubernetes/pki/ca.crt\nI0708 04:59:28.179919     814 checks.go:430] validating if the connectivity type is via proxy or direct\nI0708 04:59:28.179959     814 join.go:334] [preflight] Fetching init configuration\nI0708 04:59:28.179969     814 join.go:603] [join] Discovering cluster-info\n[discovery] Trying to connect to API Server \"172.17.0.5:6443\"\n[discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.5:6443\"\nI0708 04:59:28.188526     814 round_trippers.go:438] GET https://172.17.0.5:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 7 milliseconds\n[discovery] Failed to connect to API Server \"172.17.0.5:6443\": token id \"abcdef\" is invalid for this cluster or it has expired. 
Use \"kubeadm token create\" on the master node to creating a new valid token\n[discovery] Trying to connect to API Server \"172.17.0.5:6443\"\n[discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.5:6443\"\nI0708 04:59:33.195363     814 round_trippers.go:438] GET https://172.17.0.5:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 3 milliseconds\n[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server \"172.17.0.5:6443\"\n[discovery] Successfully established connection with API Server \"172.17.0.5:6443\"\nI0708 04:59:33.196976     814 join.go:610] [join] Retrieving KubeConfig objects\n[join] Reading configuration from the cluster...\n[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'\nI0708 04:59:33.205690     814 round_trippers.go:438] GET https://172.17.0.5:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 200 OK in 7 milliseconds\nI0708 04:59:33.208477     814 round_trippers.go:438] GET https://172.17.0.5:6443/api/v1/namespaces/kube-system/configmaps/kube-proxy 200 OK in 1 milliseconds\nI0708 04:59:33.210987     814 round_trippers.go:438] GET https://172.17.0.5:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.13 200 OK in 1 milliseconds\nI0708 04:59:33.212678     814 interface.go:384] Looking for default routes with IPv4 addresses\nI0708 04:59:33.212693     814 interface.go:389] Default route transits interface \"eth0\"\nI0708 04:59:33.212849     814 interface.go:196] Interface eth0 is up\nI0708 04:59:33.212919     814 interface.go:244] Interface \"eth0\" has 1 addresses :[172.17.0.3/16].\nI0708 04:59:33.212958     814 interface.go:211] Checking addr  172.17.0.3/16.\nI0708 04:59:33.212974     814 interface.go:218] IP found 172.17.0.3\nI0708 04:59:33.213044     814 interface.go:250] Found valid IPv4 address 172.17.0.3 for interface \"eth0\".\nI0708 04:59:33.213087     814 interface.go:395] Found active IP 172.17.0.3 \nI0708 04:59:33.213307     814 join.go:341] [preflight] Running configuration dependant checks\nI0708 04:59:33.213346     814 join.go:478] [join] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf\nI0708 04:59:33.291498     814 loader.go:359] Config loaded from file /etc/kubernetes/bootstrap-kubelet.conf\nI0708 04:59:33.292124     814 join.go:503] Stopping the kubelet\n[kubelet] Downloading configuration for the kubelet from the \"kubelet-config-1.13\" ConfigMap in the kube-system namespace\nI0708 04:59:33.316041     814 round_trippers.go:438] GET https://172.17.0.5:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.13 200 OK in 4 milliseconds\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\nI0708 04:59:33.420125     814 join.go:520] Starting the kubelet\n[kubelet-start] Activating the kubelet service\n[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...\nI0708 04:59:34.488788     814 loader.go:359] Config loaded from file /etc/kubernetes/kubelet.conf\nI0708 04:59:34.500766     814 loader.go:359] Config loaded from file /etc/kubernetes/kubelet.conf\nI0708 04:59:34.502281     814 join.go:538] [join] preserving the crisocket information for the node\n[patchnode] Uploading the CRI Socket information \"/var/run/dockershim.sock\" to the Node API object \"kind-kubetest-worker3\" as an 
annotation\nI0708 04:59:35.015767     814 round_trippers.go:438] GET https://172.17.0.5:6443/api/v1/nodes/kind-kubetest-worker3 200 OK in 13 milliseconds\nI0708 04:59:35.035189     814 round_trippers.go:438] PATCH https://172.17.0.5:6443/api/v1/nodes/kind-kubetest-worker3 200 OK in 15 milliseconds\n\nThis node has joined the cluster:\n* Certificate signing request was sent to apiserver and a response was received.\n* The Kubelet was informed of the new secure connection details.\n\nRun 'kubectl get nodes' on the master to see this node join the cluster.\n"
time="04:59:35" level=debug msg="I0708 04:59:27.928136     788 join.go:299] [join] found NodeName empty; using OS hostname as NodeName\n[preflight] Running pre-flight checks\nI0708 04:59:27.928267     788 join.go:328] [preflight] Running general checks\nI0708 04:59:27.928496     788 checks.go:245] validating the existence and emptiness of directory /etc/kubernetes/manifests\nI0708 04:59:27.928521     788 checks.go:283] validating the existence of file /etc/kubernetes/kubelet.conf\nI0708 04:59:27.928533     788 checks.go:283] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf\nI0708 04:59:27.928545     788 checks.go:104] validating the container runtime\nI0708 04:59:28.044504     788 checks.go:130] validating if the service is enabled and active\nI0708 04:59:28.069291     788 checks.go:332] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables\n\t[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist\nI0708 04:59:28.069594     788 checks.go:332] validating the contents of file /proc/sys/net/ipv4/ip_forward\nI0708 04:59:28.069678     788 checks.go:644] validating whether swap is enabled or not\nI0708 04:59:28.069758     788 checks.go:373] validating the presence of executable ip\nI0708 04:59:28.069891     788 checks.go:373] validating the presence of executable iptables\nI0708 04:59:28.069983     788 checks.go:373] validating the presence of executable mount\nI0708 04:59:28.070060     788 checks.go:373] validating the presence of executable nsenter\nI0708 04:59:28.070131     788 checks.go:373] validating the presence of executable ebtables\nI0708 04:59:28.070259     788 checks.go:373] validating the presence of executable ethtool\nI0708 04:59:28.070339     788 checks.go:373] validating the presence of executable socat\nI0708 04:59:28.070529     788 checks.go:373] validating the presence of executable tc\nI0708 04:59:28.070653     788 checks.go:373] validating the presence of executable touch\nI0708 04:59:28.070763     788 checks.go:515] running all checks\nI0708 04:59:28.101541     788 checks.go:403] checking whether the given node name is reachable using net.LookupHost\nI0708 04:59:28.102536     788 checks.go:613] validating kubelet version\nI0708 04:59:28.194866     788 checks.go:130] validating if the service is enabled and active\nI0708 04:59:28.209305     788 checks.go:208] validating availability of port 10250\nI0708 04:59:28.209477     788 checks.go:283] validating the existence of file /etc/kubernetes/pki/ca.crt\nI0708 04:59:28.209516     788 checks.go:430] validating if the connectivity type is via proxy or direct\nI0708 04:59:28.209555     788 join.go:334] [preflight] Fetching init configuration\nI0708 04:59:28.209561     788 join.go:603] [join] Discovering cluster-info\n[discovery] Trying to connect to API Server \"172.17.0.5:6443\"\n[discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.5:6443\"\nI0708 04:59:28.218573     788 round_trippers.go:438] GET https://172.17.0.5:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 8 milliseconds\n[discovery] Failed to connect to API Server \"172.17.0.5:6443\": token id \"abcdef\" is invalid for this cluster or it has expired. 
Use \"kubeadm token create\" on the master node to creating a new valid token\n[discovery] Trying to connect to API Server \"172.17.0.5:6443\"\n[discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.5:6443\"\nI0708 04:59:33.222947     788 round_trippers.go:438] GET https://172.17.0.5:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 2 milliseconds\n[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server \"172.17.0.5:6443\"\n[discovery] Successfully established connection with API Server \"172.17.0.5:6443\"\nI0708 04:59:33.224606     788 join.go:610] [join] Retrieving KubeConfig objects\n[join] Reading configuration from the cluster...\n[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'\nI0708 04:59:33.234230     788 round_trippers.go:438] GET https://172.17.0.5:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 200 OK in 8 milliseconds\nI0708 04:59:33.237346     788 round_trippers.go:438] GET https://172.17.0.5:6443/api/v1/namespaces/kube-system/configmaps/kube-proxy 200 OK in 1 milliseconds\nI0708 04:59:33.242374     788 round_trippers.go:438] GET https://172.17.0.5:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.13 200 OK in 3 milliseconds\nI0708 04:59:33.243858     788 interface.go:384] Looking for default routes with IPv4 addresses\nI0708 04:59:33.243886     788 interface.go:389] Default route transits interface \"eth0\"\nI0708 04:59:33.244011     788 interface.go:196] Interface eth0 is up\nI0708 04:59:33.244088     788 interface.go:244] Interface \"eth0\" has 1 addresses :[172.17.0.4/16].\nI0708 04:59:33.244121     788 interface.go:211] Checking addr  172.17.0.4/16.\nI0708 04:59:33.244130     788 interface.go:218] IP found 172.17.0.4\nI0708 04:59:33.244138     788 interface.go:250] Found valid IPv4 address 172.17.0.4 for interface \"eth0\".\nI0708 04:59:33.244143     788 interface.go:395] Found active IP 172.17.0.4 \nI0708 04:59:33.244238     788 join.go:341] [preflight] Running configuration dependant checks\nI0708 04:59:33.244256     788 join.go:478] [join] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf\nI0708 04:59:33.322266     788 loader.go:359] Config loaded from file /etc/kubernetes/bootstrap-kubelet.conf\nI0708 04:59:33.322748     788 join.go:503] Stopping the kubelet\n[kubelet] Downloading configuration for the kubelet from the \"kubelet-config-1.13\" ConfigMap in the kube-system namespace\nI0708 04:59:33.342108     788 round_trippers.go:438] GET https://172.17.0.5:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.13 200 OK in 2 milliseconds\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\nI0708 04:59:33.436042     788 join.go:520] Starting the kubelet\n[kubelet-start] Activating the kubelet service\n[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...\nI0708 04:59:34.513390     788 loader.go:359] Config loaded from file /etc/kubernetes/kubelet.conf\nI0708 04:59:34.526508     788 loader.go:359] Config loaded from file /etc/kubernetes/kubelet.conf\nI0708 04:59:34.527839     788 join.go:538] [join] preserving the crisocket information for the node\n[patchnode] Uploading the CRI Socket information \"/var/run/dockershim.sock\" to the Node API object \"kind-kubetest-worker2\" as an 
annotation\nI0708 04:59:35.041572     788 round_trippers.go:438] GET https://172.17.0.5:6443/api/v1/nodes/kind-kubetest-worker2 200 OK in 13 milliseconds\nI0708 04:59:35.051471     788 round_trippers.go:438] PATCH https://172.17.0.5:6443/api/v1/nodes/kind-kubetest-worker2 200 OK in 6 milliseconds\n\nThis node has joined the cluster:\n* Certificate signing request was sent to apiserver and a response was received.\n* The Kubelet was informed of the new secure connection details.\n\nRun 'kubectl get nodes' on the master to see this node join the cluster.\n"
 ✓ Joining worker nodes 🚜
 • Waiting ≤ 1m0s for control-plane = Ready ⏳  ...
time="04:59:35" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t kind-kubetest-control-plane kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes --selector=node-role.kubernetes.io/master -o=jsonpath='{.items..status.conditions[-1:].status}']"
time="04:59:35" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t kind-kubetest-control-plane kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes --selector=node-role.kubernetes.io/master -o=jsonpath='{.items..status.conditions[-1:].status}']"
time="04:59:35" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t kind-kubetest-control-plane kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes --selector=node-role.kubernetes.io/master -o=jsonpath='{.items..status.conditions[-1:].status}']"
time="04:59:36" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t kind-kubetest-control-plane kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes --selector=node-role.kubernetes.io/master -o=jsonpath='{.items..status.conditions[-1:].status}']"
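
The four identical kubectl invocations above are kind's control-plane readiness poll: it re-runs the same jsonpath query until the node's most recent status condition reports True, or the 1m0s budget expires. A standalone sketch of the same check (a hypothetical shell loop, not kind's own implementation):

    # poll the control-plane node's last status condition until it reports True
    until docker exec --privileged -t kind-kubetest-control-plane \
        kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes \
        --selector=node-role.kubernetes.io/master \
        -o=jsonpath='{.items..status.conditions[-1:].status}' | grep -q True; do
      sleep 1
    done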
... skipping 880 lines ...
STEP: Creating a kubernetes client
Jul  8 05:00:34.029: INFO: >>> kubeConfig: /root/.kube/kind-config-kind-kubetest
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:699
STEP: creating the pod
Jul  8 05:00:34.117: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:154
Jul  8 05:00:48.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 3 lines ...
Jul  8 05:00:54.367: INFO: namespace e2e-tests-init-container-2dgfp deletion completed in 6.26070873s


• [SLOW TEST:20.338 seconds]
[k8s.io] InitContainer [NodeConformance]
test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  test/e2e/framework/framework.go:153
... skipping 1259 lines ...
Jul  8 05:01:23.427: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-4nzgt.svc from pod e2e-tests-dns-4nzgt/dns-test-616bb08d-a13d-11e9-aaa9-6e4935fcc964: the server could not find the requested resource (get pods dns-test-616bb08d-a13d-11e9-aaa9-6e4935fcc964)
Jul  8 05:01:23.431: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-4nzgt.svc from pod e2e-tests-dns-4nzgt/dns-test-616bb08d-a13d-11e9-aaa9-6e4935fcc964: the server could not find the requested resource (get pods dns-test-616bb08d-a13d-11e9-aaa9-6e4935fcc964)
Jul  8 05:01:23.435: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-4nzgt/dns-test-616bb08d-a13d-11e9-aaa9-6e4935fcc964: the server could not find the requested resource (get pods dns-test-616bb08d-a13d-11e9-aaa9-6e4935fcc964)
Jul  8 05:01:23.439: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-4nzgt/dns-test-616bb08d-a13d-11e9-aaa9-6e4935fcc964: the server could not find the requested resource (get pods dns-test-616bb08d-a13d-11e9-aaa9-6e4935fcc964)
Jul  8 05:01:23.443: INFO: Unable to read 10.105.0.210_udp@PTR from pod e2e-tests-dns-4nzgt/dns-test-616bb08d-a13d-11e9-aaa9-6e4935fcc964: the server could not find the requested resource (get pods dns-test-616bb08d-a13d-11e9-aaa9-6e4935fcc964)
Jul  8 05:01:23.446: INFO: Unable to read 10.105.0.210_tcp@PTR from pod e2e-tests-dns-4nzgt/dns-test-616bb08d-a13d-11e9-aaa9-6e4935fcc964: the server could not find the requested resource (get pods dns-test-616bb08d-a13d-11e9-aaa9-6e4935fcc964)
Jul  8 05:01:23.446: INFO: Lookups using e2e-tests-dns-4nzgt/dns-test-616bb08d-a13d-11e9-aaa9-6e4935fcc964 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-4nzgt.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-4nzgt.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4nzgt.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-4nzgt.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-4nzgt.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.105.0.210_udp@PTR 10.105.0.210_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-4nzgt jessie_tcp@dns-test-service.e2e-tests-dns-4nzgt jessie_udp@dns-test-service.e2e-tests-dns-4nzgt.svc jessie_tcp@dns-test-service.e2e-tests-dns-4nzgt.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-4nzgt.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4nzgt.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-4nzgt.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-4nzgt.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.105.0.210_udp@PTR 10.105.0.210_tcp@PTR]

Jul  8 05:01:28.591: INFO: DNS probes using e2e-tests-dns-4nzgt/dns-test-616bb08d-a13d-11e9-aaa9-6e4935fcc964 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
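Lookup failures like the batch above are usually transient while CoreDNS and the pod network converge, and the probes do succeed on the next pass here. To separate a DNS problem from a pod-networking problem by hand, one hedged check (the busybox tag is an assumption; any image with a working nslookup will do):

  # Run a throwaway pod and resolve a service name through the cluster DNS.
  kubectl run dns-check --image=busybox:1.28 --restart=Never --rm -it -- \
    nslookup kubernetes.default.svc.cluster.local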
... skipping 689 lines ...
STEP: Creating a kubernetes client
Jul  8 05:00:47.702: INFO: >>> kubeConfig: /root/.kube/kind-config-kind-kubetest
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:699
STEP: creating the pod
Jul  8 05:00:47.850: INFO: PodSpec: initContainers in spec.initContainers
Jul  8 05:01:33.687: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-5985621f-a13d-11e9-af8e-6e4935fcc964", GenerateName:"", Namespace:"e2e-tests-init-container-krqzd", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-krqzd/pods/pod-init-5985621f-a13d-11e9-af8e-6e4935fcc964", UID:"598703f6-a13d-11e9-b573-0242620328e9", ResourceVersion:"3782", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63698158847, loc:(*time.Location)(0x7929da0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"850146778"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-bqzxs", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc000d69c00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-bqzxs", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-bqzxs", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-bqzxs", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000cf9ea8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kind-kubetest-worker3", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000c4eb40), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000cf9f20)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000cf9f40)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000cf9f48), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000cf9f4c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63698158847, loc:(*time.Location)(0x7929da0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63698158847, loc:(*time.Location)(0x7929da0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63698158847, loc:(*time.Location)(0x7929da0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63698158847, loc:(*time.Location)(0x7929da0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.3", PodIP:"10.36.0.15", StartTime:(*v1.Time)(0xc0011fa060), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0011961c0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0011962a0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://aa50a0857f4260ef6f2e3c1c2d85369d2d4ed3e54d46ce95f9fee2a388428dbd"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0011fa0a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0011fa080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:154
Jul  8 05:01:33.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-krqzd" for this suite.
Jul  8 05:01:59.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  8 05:01:59.844: INFO: namespace: e2e-tests-init-container-krqzd, resource: bindings, ignored listing per whitelist
Jul  8 05:02:00.014: INFO: namespace e2e-tests-init-container-krqzd deletion completed in 26.315762663s


• [SLOW TEST:72.312 seconds]
[k8s.io] InitContainer [NodeConformance]
test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:699
------------------------------
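The pod dump in the spec above is easier to follow as a manifest. A hedged reconstruction (names, images, and commands taken from the dump; resource limits, tolerations, and the service-account volume trimmed): with restartPolicy Always the kubelet keeps restarting the failing init1, so init2 never runs and the app container run1 stays waiting, which is exactly what the spec asserts.

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-init-example    # hypothetical; the test generates a unique name
  spec:
    restartPolicy: Always
    initContainers:
    - name: init1
      image: docker.io/library/busybox:1.29
      command: ["/bin/false"] # always fails, blocking everything after it
    - name: init2
      image: docker.io/library/busybox:1.29
      command: ["/bin/true"]  # never reached while init1 keeps failing
    containers:
    - name: run1
      image: k8s.gcr.io/pause:3.1
  EOF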
SSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:153
... skipping 279 lines ...
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jul  8 05:01:53.861: INFO: Successfully updated pod "pod-update-activedeadlineseconds-7bb7b35f-a13d-11e9-9a51-6e4935fcc964"
Jul  8 05:01:53.861: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-7bb7b35f-a13d-11e9-9a51-6e4935fcc964" in namespace "e2e-tests-pods-vxgkp" to be "terminated due to deadline exceeded"
Jul  8 05:01:53.874: INFO: Pod "pod-update-activedeadlineseconds-7bb7b35f-a13d-11e9-9a51-6e4935fcc964": Phase="Running", Reason="", readiness=true. Elapsed: 13.29673ms
Jul  8 05:01:55.885: INFO: Pod "pod-update-activedeadlineseconds-7bb7b35f-a13d-11e9-9a51-6e4935fcc964": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.024294632s
Jul  8 05:01:55.886: INFO: Pod "pod-update-activedeadlineseconds-7bb7b35f-a13d-11e9-9a51-6e4935fcc964" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:154
Jul  8 05:01:55.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-vxgkp" for this suite.
Jul  8 05:02:03.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
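The spec above updates activeDeadlineSeconds on a running pod and watches it flip from Running to Failed with reason DeadlineExceeded within seconds. A hedged sketch of the same kubelet behaviour, setting the deadline at creation rather than patching it in as the test does (pod name hypothetical):

  # After ~5s the kubelet kills the pod; status.reason becomes DeadlineExceeded.
  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: deadline-demo       # hypothetical name
  spec:
    activeDeadlineSeconds: 5
    containers:
    - name: main
      image: k8s.gcr.io/pause:3.1
  EOF
  kubectl get pod deadline-demo -o jsonpath='{.status.reason}'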
... skipping 2272 lines ...
Jul  8 05:02:34.003: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jul  8 05:02:34.003: INFO: Running '/root/.kubetest/kind/kubectl --server=https://localhost:44079 --kubeconfig=/root/.kube/kind-config-kind-kubetest describe pod redis-master-npnpc --namespace=e2e-tests-kubectl-4hw29'
Jul  8 05:02:34.137: INFO: stderr: ""
Jul  8 05:02:34.137: INFO: stdout: "Name:               redis-master-npnpc\nNamespace:          e2e-tests-kubectl-4hw29\nPriority:           0\nPriorityClassName:  <none>\nNode:               kind-kubetest-worker/172.17.0.2\nStart Time:         Mon, 08 Jul 2019 05:02:23 +0000\nLabels:             app=redis\n                    role=master\nAnnotations:        <none>\nStatus:             Running\nIP:                 10.40.0.3\nControlled By:      ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://e548be181fdb809574d0c9804b32e560723e7093ccd126f40c25021cf4dc3476\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Mon, 08 Jul 2019 05:02:26 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9785l (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-9785l:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-9785l\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                           Message\n  ----    ------     ----  ----                           -------\n  Normal  Scheduled  11s   default-scheduler              Successfully assigned e2e-tests-kubectl-4hw29/redis-master-npnpc to kind-kubetest-worker\n  Normal  Pulled     8s    kubelet, kind-kubetest-worker  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    8s    kubelet, kind-kubetest-worker  Created container\n  Normal  Started    8s    kubelet, kind-kubetest-worker  Started container\n"
Jul  8 05:02:34.137: INFO: Running '/root/.kubetest/kind/kubectl --server=https://localhost:44079 --kubeconfig=/root/.kube/kind-config-kind-kubetest describe rc redis-master --namespace=e2e-tests-kubectl-4hw29'
Jul  8 05:02:34.286: INFO: stderr: ""
Jul  8 05:02:34.286: INFO: stdout: "Name:         redis-master\nNamespace:    e2e-tests-kubectl-4hw29\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  11s   replication-controller  Created pod: redis-master-npnpc\n"
Jul  8 05:02:34.286: INFO: Running '/root/.kubetest/kind/kubectl --server=https://localhost:44079 --kubeconfig=/root/.kube/kind-config-kind-kubetest describe service redis-master --namespace=e2e-tests-kubectl-4hw29'
Jul  8 05:02:34.435: INFO: stderr: ""
Jul  8 05:02:34.435: INFO: stdout: "Name:              redis-master\nNamespace:         e2e-tests-kubectl-4hw29\nLabels:            app=redis\n                   role=master\nAnnotations:       <none>\nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.101.27.248\nPort:              <unset>  6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.40.0.3:6379\nSession Affinity:  None\nEvents:            <none>\n"
Jul  8 05:02:34.445: INFO: Running '/root/.kubetest/kind/kubectl --server=https://localhost:44079 --kubeconfig=/root/.kube/kind-config-kind-kubetest describe node kind-kubetest-control-plane'
Jul  8 05:02:34.633: INFO: stderr: ""
Jul  8 05:02:34.633: INFO: stdout: "Name:               kind-kubetest-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/hostname=kind-kubetest-control-plane\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Mon, 08 Jul 2019 04:59:21 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Mon, 08 Jul 2019 05:00:07 +0000   Mon, 08 Jul 2019 05:00:07 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Mon, 08 Jul 2019 05:02:31 +0000   Mon, 08 Jul 2019 04:59:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Mon, 08 Jul 2019 05:02:31 +0000   Mon, 08 Jul 2019 04:59:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Mon, 08 Jul 2019 05:02:31 +0000   Mon, 08 Jul 2019 04:59:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Mon, 08 Jul 2019 05:02:31 +0000   Mon, 08 Jul 2019 05:00:11 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.17.0.5\n  Hostname:    kind-kubetest-control-plane\nCapacity:\n cpu:                8\n ephemeral-storage:  253696108Ki\n hugepages-2Mi:      0\n memory:             53588960Ki\n pods:               110\nAllocatable:\n cpu:                8\n ephemeral-storage:  253696108Ki\n hugepages-2Mi:      0\n memory:             53588960Ki\n pods:               110\nSystem Info:\n Machine ID:                 2abb6598f94e4489851e8f254e469fe2\n System UUID:                0319A3A4-3F05-C542-EA1B-1BD829ADA86E\n Boot ID:                    006d295f-70ab-40b6-82a5-c5832859c9db\n Kernel Version:             4.14.127+\n OS Image:                   Ubuntu 18.04.1 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.6.3\n Kubelet Version:            v1.13.8-beta.0.35+0c6d31a99f8147\n Kube-Proxy Version:         v1.13.8-beta.0.35+0c6d31a99f8147\nNon-terminated Pods:         (8 in total)\n  Namespace                  Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                                                   ------------  ----------  ---------------  -------------  ---\n  kube-system                coredns-54ff9cd656-2xnvj                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m3s\n  kube-system                coredns-54ff9cd656-m67wj                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m3s\n  kube-system                etcd-kind-kubetest-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s\n  kube-system                kube-apiserver-kind-kubetest-control-plane             250m (3%)     0 (0%)      0 (0%)           0 (0%)         112s\n  kube-system                kube-controller-manager-kind-kubetest-control-plane    200m (2%)     0 (0%)      0 (0%)           0 (0%)         119s\n  kube-system                kube-proxy-phzkc                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s\n  kube-system                kube-scheduler-kind-kubetest-control-plane             100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s\n  kube-system                weave-net-xpvkv                                        20m (0%)      0 (0%)      0 (0%)           0 (0%)         3m3s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                770m (9%)   0 (0%)\n  memory             140Mi (0%)  340Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\nEvents:\n  Type     Reason                    Age                    From                                     Message\n  ----     ------                    ----                   ----                                     -------\n  Normal   Starting                  3m31s                  kubelet, kind-kubetest-control-plane     Starting kubelet.\n  Normal   NodeHasSufficientMemory   3m31s (x7 over 3m31s)  kubelet, kind-kubetest-control-plane     Node kind-kubetest-control-plane status is now: NodeHasSufficientMemory\n  Normal   NodeHasNoDiskPressure     3m31s (x7 over 3m31s)  kubelet, kind-kubetest-control-plane     Node kind-kubetest-control-plane status is now: NodeHasNoDiskPressure\n  Normal   NodeHasSufficientPID      3m31s (x6 over 3m31s)  kubelet, kind-kubetest-control-plane     Node kind-kubetest-control-plane status is now: NodeHasSufficientPID\n  Normal   NodeAllocatableEnforced   3m31s                  kubelet, kind-kubetest-control-plane     Updated Node Allocatable limit across pods\n  Warning  CheckLimitsForResolvConf  3m31s (x3 over 3m31s)  kubelet, kind-kubetest-control-plane     Resolv.conf file '/etc/resolv.conf' contains search line consisting of more than 3 domains!\n  Normal   Starting                  3m2s                   kube-proxy, kind-kubetest-control-plane  Starting kube-proxy.\n"
... skipping 220 lines ...
Jul  8 05:01:37.465: INFO: Running '/root/.kubetest/kind/kubectl --server=https://localhost:44079 --kubeconfig=/root/.kube/kind-config-kind-kubetest create -f - --namespace=e2e-tests-kubectl-5vxjg'
Jul  8 05:01:37.718: INFO: stderr: ""
Jul  8 05:01:37.718: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Jul  8 05:01:37.718: INFO: Waiting for all frontend pods to be Running.
Jul  8 05:02:02.769: INFO: Waiting for frontend to serve content.
Jul  8 05:02:12.946: INFO: Failed to get response from guestbook. err: <nil>, response:
Fatal error:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'Connection timed out [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:155
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(128): Predis\Connection\AbstractConnection->onConnectionError('Connection time...', 110)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(178): Predis\Connection\StreamConnection->createStreamSocket(Object(Predis\Connection\Parameters), 'tcp://redis-sla...', 4)
#2 /usr/local/lib/php/Predis/Connection/StreamConnection.php(100): Predis\Connection\StreamConnection->tcpStreamInitializer(Object(Predis\Connection\Parameters))
#3 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(81): Predis\Connection\StreamConnection->createResource()
#4 /usr/local/lib/php/Predis/Connection/StreamConnection.php(258): Predis\Connection\AbstractConnection->connect()
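The guestbook frontend's Predis client timed out opening tcp://redis-slave:6379; failures like this are typically transient while the redis-slave pods and Service endpoints converge, and the framework keeps retrying. The same path can be probed by hand with an inline Redis PING (a sketch; the namespace is this test's and the busybox tag is an assumption):

  # From inside the cluster, send PING to the redis-slave Service; a healthy
  # endpoint answers +PONG.
  kubectl -n e2e-tests-kubectl-5vxjg run redis-probe --image=busybox:1.28 \
    --restart=Never --rm -it -- sh -c 'echo PING | nc -w 5 redis-slave 6379'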
... skipping 97 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-6t48p
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-6t48p
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-6t48p
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-6t48p
Jul  8 05:02:36.304: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-6t48p, name: ss-0, uid: 95643680-a13d-11e9-b573-0242620328e9, status phase: Pending. Waiting for statefulset controller to delete.
Jul  8 05:02:36.909: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-6t48p, name: ss-0, uid: 95643680-a13d-11e9-b573-0242620328e9, status phase: Failed. Waiting for statefulset controller to delete.
Jul  8 05:02:36.917: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-6t48p, name: ss-0, uid: 95643680-a13d-11e9-b573-0242620328e9, status phase: Failed. Waiting for statefulset controller to delete.
Jul  8 05:02:36.926: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-6t48p
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-6t48p
STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-6t48p and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:85
Jul  8 05:02:42.980: INFO: Deleting all statefulset in ns e2e-tests-statefulset-6t48p
... skipping 2460 lines ...
Jul  8 05:03:40.071: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-s44tf/dns-test-b8f649cb-a13d-11e9-83a2-6e4935fcc964: the server could not find the requested resource (get pods dns-test-b8f649cb-a13d-11e9-83a2-6e4935fcc964)
Jul  8 05:03:40.077: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-s44tf/dns-test-b8f649cb-a13d-11e9-83a2-6e4935fcc964: the server could not find the requested resource (get pods dns-test-b8f649cb-a13d-11e9-83a2-6e4935fcc964)
Jul  8 05:03:40.080: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-s44tf.svc.cluster.local from pod e2e-tests-dns-s44tf/dns-test-b8f649cb-a13d-11e9-83a2-6e4935fcc964: the server could not find the requested resource (get pods dns-test-b8f649cb-a13d-11e9-83a2-6e4935fcc964)
Jul  8 05:03:40.084: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-s44tf/dns-test-b8f649cb-a13d-11e9-83a2-6e4935fcc964: the server could not find the requested resource (get pods dns-test-b8f649cb-a13d-11e9-83a2-6e4935fcc964)
Jul  8 05:03:40.087: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-s44tf/dns-test-b8f649cb-a13d-11e9-83a2-6e4935fcc964: the server could not find the requested resource (get pods dns-test-b8f649cb-a13d-11e9-83a2-6e4935fcc964)
Jul  8 05:03:40.090: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-s44tf/dns-test-b8f649cb-a13d-11e9-83a2-6e4935fcc964: the server could not find the requested resource (get pods dns-test-b8f649cb-a13d-11e9-83a2-6e4935fcc964)
Jul  8 05:03:40.090: INFO: Lookups using e2e-tests-dns-s44tf/dns-test-b8f649cb-a13d-11e9-83a2-6e4935fcc964 failed for: [wheezy_udp@kubernetes.default wheezy_tcp@kubernetes.default wheezy_udp@kubernetes.default.svc wheezy_tcp@kubernetes.default.svc wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-s44tf.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-s44tf.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jul  8 05:03:45.315: INFO: DNS probes using e2e-tests-dns-s44tf/dns-test-b8f649cb-a13d-11e9-83a2-6e4935fcc964 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:154
... skipping 141 lines ...
[BeforeEach] [sig-network] Services
  test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-vwzdp
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-vwzdp to expose endpoints map[]
Jul  8 05:03:38.308: INFO: Get endpoints failed (36.206635ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jul  8 05:03:39.310: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-vwzdp exposes endpoints map[] (1.038828107s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-vwzdp
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-vwzdp to expose endpoints map[pod1:[80]]
Jul  8 05:03:41.365: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-vwzdp exposes endpoints map[pod1:[80]] (2.042211506s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-vwzdp
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-vwzdp to expose endpoints map[pod1:[80] pod2:[80]]
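Each of these waits is just watching the Endpoints object for endpoint-test2 track pod readiness; the printed map is pod name to ports. The same mapping can be read directly (a sketch against this test's namespace):

  # Show which pod IPs and ports currently back the endpoint-test2 Service.
  kubectl -n e2e-tests-services-vwzdp get endpoints endpoint-test2 -o wide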
... skipping 1091 lines ...
Jul  8 05:04:07.581: INFO: Running AfterSuite actions on all nodes
Jul  8 05:06:36.823: INFO: Running AfterSuite actions on node 1
Jul  8 05:06:36.823: INFO: Skipping dumping logs from cluster


Ran 190 of 2162 Specs in 377.268 seconds
SUCCESS! -- 190 Passed | 0 Failed | 0 Pending | 1972 Skipped 

Ginkgo ran 1 suite in 6m22.278971825s
Test Suite Passed
2019/07/08 05:06:36 process.go:155: Step './hack/ginkgo-e2e.sh --ginkgo.focus=\[Conformance\] --ginkgo.skip=\[Serial\] --num-nodes=3 --report-dir=/logs/artifacts --disable-log-dump=true' finished in 6m24.268009626s
2019/07/08 05:06:36 kind.go:369: kind.go:DumpClusterLogs()
2019/07/08 05:06:36 process.go:153: Running: kind export logs /logs/artifacts --loglevel=debug --name=kind-kubetest
... skipping 71 lines ...
time="05:06:38" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t kind-kubetest-worker3 cat /var/log/pods/2db2d230-a13d-11e9-b573-0242620328e9/weave/0.log]"
time="05:06:38" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t kind-kubetest-worker3 cat /var/log/pods/c73fd264-a13d-11e9-b573-0242620328e9/liveness/5.log]"
time="05:06:38" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t kind-kubetest-worker3 cat /var/log/containers/kube-proxy-vzj6l_kube-system_kube-proxy-d1af6057f2f4717e45bb727ad76c2545c570c4c441c3f3bcea4dde7217fb2049.log]"
time="05:06:38" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t kind-kubetest-worker3 cat /var/log/containers/weave-net-qfsl2_kube-system_weave-f92b5a75ab81f17120aa015a01c2f3a3300c2baf6bf5787e804be43bc96f588a.log]"
time="05:06:38" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t kind-kubetest-worker3 cat /var/log/containers/weave-net-qfsl2_kube-system_weave-npc-f8ccb5feeb8ef0e3b59d45f9a49b424e1c85b67dc56f24580b244f384fdc5920.log]"
time="05:06:38" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t kind-kubetest-worker3 cat /var/log/containers/liveness-http_e2e-tests-container-probe-cnf7s_liveness-67e6a12e3afb3c889bd0cd1182afa3835bf3a6e5a7c171fd940a09b382889354.log]"
Error: exit status 1
exit status 1

exit status 1
exit status 1
exit status 1
exit status 1
... skipping 13 lines ...
$KUBECONFIG is still set to use /root/.kube/kind-config-kind-kubetest even though that file has been deleted; remember to unset it
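That warning is worth acting on: the variable now points at a deleted file, so later kubectl calls in the same shell will not see any cluster configuration. The fix is one line:

  # Stop pointing kubectl at the deleted kind kubeconfig.
  unset KUBECONFIG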
time="05:06:44" level=debug msg="Running: /usr/bin/docker [docker rm -f -v kind-kubetest-control-plane kind-kubetest-worker2 kind-kubetest-worker kind-kubetest-worker3]"
2019/07/08 05:06:51 process.go:155: Step 'kind delete cluster --loglevel=debug --name=kind-kubetest' finished in 6.979828192s
2019/07/08 05:06:51 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2019/07/08 05:06:51 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"
2019/07/08 05:06:52 process.go:155: Step 'bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"' finished in 955.007216ms
2019/07/08 05:06:52 main.go:316: Something went wrong: encountered 1 errors: [error during kind export logs /logs/artifacts --loglevel=debug --name=kind-kubetest: exit status 1]
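The lone failure in this run is DumpClusterLogs: kind export logs exited non-zero, apparently because one of the docker exec ... cat commands it spawns (visible above) itself failed. While a cluster with the same name is still up, the export can be retried by hand with the same flags kubetest used:

  # Re-run the log export against the kind cluster from this job.
  kind export logs /logs/artifacts --loglevel=debug --name=kind-kubetest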
+ EXIT_VALUE=1
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
[Barnacle] 2019/07/08 05:06:52 Cleaning up Docker data root...
[Barnacle] 2019/07/08 05:06:52 Removing all containers.
... skipping 28 lines ...