Result: FAILURE
Tests: 1 failed / 202 succeeded
Started: 2019-07-05 16:50
Elapsed: 12m4s
Revision: release-1.13
resultstore: https://source.cloud.google.com/results/invocations/c05f93f4-8b0c-4bfe-9975-30853f3c9af7/targets/test
job-version: v1.13.8-beta.0.35+0c6d31a99f8147
revision: v1.13.8-beta.0.35+0c6d31a99f8147

Test Failures


DumpClusterLogs 7.02s

error during kind export logs /logs/artifacts --loglevel=debug --name=kind-kubetest: exit status 1
				from junit_runner.xml
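The failing step can be reproduced against a live cluster with the same command; a sketch assuming the kind binary from this job and its default cluster name, with a local artifacts directory substituted for the CI path:

    # same invocation kubetest ran; the output directory is arbitrary locally
    kind export logs ./artifacts --loglevel=debug --name=kind-kubetest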



202 passed tests (output omitted)

1972 skipped tests (output omitted)

Error lines from build-log.txt

... skipping 101 lines ...
time="16:52:50" level=debug msg="Running: /usr/bin/docker [docker exec 20150d33370fc0b67ce37d46d5f63891d190c8a710c671354c6bbbd476d85023 ln -s /kind/bin/kubectl /usr/bin/kubectl]"
time="16:52:50" level=debug msg="Running: /usr/bin/docker [docker exec 20150d33370fc0b67ce37d46d5f63891d190c8a710c671354c6bbbd476d85023 systemctl enable /kind/systemd/kubelet.service]"
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /kind/systemd/kubelet.service.
Created symlink /etc/systemd/system/kubelet.service → /kind/systemd/kubelet.service.
time="16:52:50" level=debug msg="Running: /usr/bin/docker [docker exec 20150d33370fc0b67ce37d46d5f63891d190c8a710c671354c6bbbd476d85023 mkdir -p /etc/systemd/system/kubelet.service.d]"
time="16:52:50" level=debug msg="Running: /usr/bin/docker [docker exec 20150d33370fc0b67ce37d46d5f63891d190c8a710c671354c6bbbd476d85023 cp /kind/systemd/10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf]"
time="16:52:51" level=debug msg="Running: /usr/bin/docker [docker exec 20150d33370fc0b67ce37d46d5f63891d190c8a710c671354c6bbbd476d85023 /bin/sh -c echo \"KUBELET_EXTRA_ARGS=--fail-swap-on=false\" >> /etc/default/kubelet]"
time="16:52:51" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t 20150d33370fc0b67ce37d46d5f63891d190c8a710c671354c6bbbd476d85023 cat /kind/version]"
time="16:52:51" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t 20150d33370fc0b67ce37d46d5f63891d190c8a710c671354c6bbbd476d85023 mkdir -p /kind/manifests]"
time="16:52:51" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -i 20150d33370fc0b67ce37d46d5f63891d190c8a710c671354c6bbbd476d85023 cp /dev/stdin /kind/manifests/default-cni.yaml]"
time="16:52:52" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t 20150d33370fc0b67ce37d46d5f63891d190c8a710c671354c6bbbd476d85023 kubeadm config images list --kubernetes-version v1.13.8-beta.0.35+0c6d31a99f8147]"
Pulling: k8s.gcr.io/pause:3.1
time="16:52:52" level=info msg="Pulling image: k8s.gcr.io/pause:3.1 ..."
... skipping 154 lines ...
time="16:54:34" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t kind-kubetest-control-plane cat /kind/version]"
time="16:54:34" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t kind-kubetest-control-plane mkdir -p /kind]"
time="16:54:34" level=debug msg="Running: /usr/bin/docker [docker cp /tmp/076715541 kind-kubetest-control-plane:/kind/kubeadm.conf]"
 ✓ Creating kubeadm config 📜
 • Starting control-plane 🕹️  ...
time="16:54:34" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t kind-kubetest-control-plane kubeadm init --ignore-preflight-errors=all --config=/kind/kubeadm.conf --skip-token-print --v=6]"
time="16:55:10" level=debug msg="I0705 16:54:35.274312     743 initconfiguration.go:169] loading configuration from the given file\nW0705 16:54:35.274995     743 common.go:86] WARNING: Detected resource kinds that may not apply: [InitConfiguration JoinConfiguration]\nW0705 16:54:35.275540     743 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:\"kubeadm.k8s.io\", Version:\"v1beta1\", Kind:\"ClusterConfiguration\"}: error unmarshaling JSON: while decoding JSON: json: unknown field \"metadata\"\nW0705 16:54:35.277389     743 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:\"kubeadm.k8s.io\", Version:\"v1beta1\", Kind:\"InitConfiguration\"}: error unmarshaling JSON: while decoding JSON: json: unknown field \"metadata\"\nW0705 16:54:35.278007     743 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:\"kubeadm.k8s.io\", Version:\"v1beta1\", Kind:\"JoinConfiguration\"}: error unmarshaling JSON: while decoding JSON: json: unknown field \"metadata\"\n[config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta1, Kind=JoinConfiguration\nW0705 16:54:35.278555     743 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:\"kubelet.config.k8s.io\", Version:\"v1beta1\", Kind:\"KubeletConfiguration\"}: error unmarshaling JSON: while decoding JSON: json: unknown field \"metadata\"\nW0705 16:54:35.279451     743 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:\"kubeproxy.config.k8s.io\", Version:\"v1alpha1\", Kind:\"KubeProxyConfiguration\"}: error unmarshaling JSON: while decoding JSON: json: unknown field \"metadata\"\nI0705 16:54:35.281219     743 interface.go:384] Looking for default routes with IPv4 addresses\nI0705 16:54:35.281244     743 interface.go:389] Default route transits interface \"eth0\"\nI0705 16:54:35.281522     743 interface.go:196] Interface eth0 is up\nI0705 16:54:35.281572     743 interface.go:244] Interface \"eth0\" has 1 addresses :[172.17.0.2/16].\nI0705 16:54:35.281591     743 interface.go:211] Checking addr  172.17.0.2/16.\nI0705 16:54:35.281598     743 interface.go:218] IP found 172.17.0.2\nI0705 16:54:35.281605     743 interface.go:250] Found valid IPv4 address 172.17.0.2 for interface \"eth0\".\nI0705 16:54:35.281610     743 interface.go:395] Found active IP 172.17.0.2 \nI0705 16:54:35.281875     743 feature_gate.go:206] feature gates: &{map[]}\n[init] Using Kubernetes version: v1.13.8-beta.0.35+0c6d31a99f8147\n[preflight] Running pre-flight checks\nI0705 16:54:35.282234     743 checks.go:572] validating Kubernetes and kubeadm version\nI0705 16:54:35.282308     743 checks.go:171] validating if the firewall is enabled and active\nI0705 16:54:35.294642     743 checks.go:208] validating availability of port 6443\nI0705 16:54:35.294825     743 checks.go:208] validating availability of port 10251\nI0705 16:54:35.294850     743 checks.go:208] validating availability of port 10252\nI0705 16:54:35.294875     743 checks.go:283] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml\nI0705 16:54:35.294887     743 checks.go:283] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml\nI0705 16:54:35.294894     743 checks.go:283] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml\nI0705 16:54:35.294899     743 checks.go:283] validating the existence of file /etc/kubernetes/manifests/etcd.yaml\nI0705 16:54:35.294907     743 checks.go:430] validating 
if the connectivity type is via proxy or direct\nI0705 16:54:35.295031     743 checks.go:466] validating http connectivity to first IP address in the CIDR\nI0705 16:54:35.295051     743 checks.go:466] validating http connectivity to first IP address in the CIDR\nI0705 16:54:35.295057     743 checks.go:104] validating the container runtime\nI0705 16:54:35.375573     743 checks.go:130] validating if the service is enabled and active\nI0705 16:54:35.396399     743 checks.go:332] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables\n\t[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist\nI0705 16:54:35.396510     743 checks.go:332] validating the contents of file /proc/sys/net/ipv4/ip_forward\nI0705 16:54:35.396559     743 checks.go:644] validating whether swap is enabled or not\nI0705 16:54:35.396599     743 checks.go:373] validating the presence of executable ip\nI0705 16:54:35.396694     743 checks.go:373] validating the presence of executable iptables\nI0705 16:54:35.396785     743 checks.go:373] validating the presence of executable mount\nI0705 16:54:35.396825     743 checks.go:373] validating the presence of executable nsenter\nI0705 16:54:35.396892     743 checks.go:373] validating the presence of executable ebtables\nI0705 16:54:35.396934     743 checks.go:373] validating the presence of executable ethtool\nI0705 16:54:35.396979     743 checks.go:373] validating the presence of executable socat\nI0705 16:54:35.397016     743 checks.go:373] validating the presence of executable tc\nI0705 16:54:35.397053     743 checks.go:373] validating the presence of executable touch\nI0705 16:54:35.397088     743 checks.go:515] running all checks\nI0705 16:54:35.427079     743 checks.go:403] checking whether the given node name is reachable using net.LookupHost\nI0705 16:54:35.427437     743 checks.go:613] validating kubelet version\nI0705 16:54:35.501524     743 checks.go:130] validating if the service is enabled and active\nI0705 16:54:35.516790     743 checks.go:208] validating availability of port 10250\nI0705 16:54:35.516887     743 checks.go:208] validating availability of port 2379\nI0705 16:54:35.516916     743 checks.go:208] validating availability of port 2380\nI0705 16:54:35.516943     743 checks.go:245] validating the existence and emptiness of directory /var/lib/etcd\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'\nI0705 16:54:35.584220     743 checks.go:833] image exists: k8s.gcr.io/kube-apiserver:v1.13.8-beta.0.35_0c6d31a99f8147\nI0705 16:54:35.660826     743 checks.go:833] image exists: k8s.gcr.io/kube-controller-manager:v1.13.8-beta.0.35_0c6d31a99f8147\nI0705 16:54:35.729982     743 checks.go:833] image exists: k8s.gcr.io/kube-scheduler:v1.13.8-beta.0.35_0c6d31a99f8147\nI0705 16:54:35.799625     743 checks.go:833] image exists: k8s.gcr.io/kube-proxy:v1.13.8-beta.0.35_0c6d31a99f8147\nI0705 16:54:35.866377     743 checks.go:833] image exists: k8s.gcr.io/pause:3.1\nI0705 16:54:35.943651     743 checks.go:833] image exists: k8s.gcr.io/etcd:3.2.24\nI0705 16:54:36.013332     743 checks.go:833] image exists: k8s.gcr.io/coredns:1.2.6\nI0705 16:54:36.013390     743 kubelet.go:71] Stopping the kubelet\n[kubelet-start] Writing kubelet environment file with flags to file 
\"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\nI0705 16:54:36.116929     743 kubelet.go:89] Starting the kubelet\n[kubelet-start] Activating the kubelet service\n[certs] Using certificateDir folder \"/etc/kubernetes/pki\"\nI0705 16:54:36.186505     743 certs.go:113] creating a new certificate authority for ca\n[certs] Generating \"ca\" certificate and key\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[certs] Generating \"apiserver\" certificate and key\n[certs] apiserver serving cert is signed for DNS names [kind-kubetest-control-plane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 172.17.0.2]\nI0705 16:54:37.412354     743 certs.go:113] creating a new certificate authority for front-proxy-ca\n[certs] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\nI0705 16:54:38.278454     743 certs.go:113] creating a new certificate authority for etcd-ca\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [kind-kubetest-control-plane localhost] and IPs [172.17.0.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [kind-kubetest-control-plane localhost] and IPs [172.17.0.2 127.0.0.1 ::1]\nI0705 16:54:39.253428     743 certs.go:72] creating a new public/private key files for signing service account users\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\nI0705 16:54:39.450189     743 kubeconfig.go:92] creating kubeconfig file for admin.conf\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\nI0705 16:54:39.617590     743 kubeconfig.go:92] creating kubeconfig file for kubelet.conf\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\nI0705 16:54:39.802410     743 kubeconfig.go:92] creating kubeconfig file for controller-manager.conf\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\nI0705 16:54:39.899256     743 kubeconfig.go:92] creating kubeconfig file for scheduler.conf\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\nI0705 16:54:40.190037     743 manifests.go:97] [control-plane] getting StaticPodSpecs\nI0705 16:54:40.202359     743 manifests.go:113] [control-plane] wrote static Pod manifest for component \"kube-apiserver\" to \"/etc/kubernetes/manifests/kube-apiserver.yaml\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\nI0705 16:54:40.202418     743 manifests.go:97] [control-plane] getting StaticPodSpecs\nI0705 16:54:40.203805     743 manifests.go:113] [control-plane] wrote static Pod manifest for component \"kube-controller-manager\" to \"/etc/kubernetes/manifests/kube-controller-manager.yaml\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\nI0705 16:54:40.203853     743 manifests.go:97] [control-plane] getting StaticPodSpecs\nI0705 16:54:40.204857     743 manifests.go:113] [control-plane] wrote static Pod manifest for component \"kube-scheduler\" to 
\"/etc/kubernetes/manifests/kube-scheduler.yaml\"\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\nI0705 16:54:40.205770     743 local.go:60] [etcd] wrote Static Pod manifest for a local etcd member to \"/etc/kubernetes/manifests/etcd.yaml\"\nI0705 16:54:40.205797     743 waitcontrolplane.go:89] [wait-control-plane] Waiting for the API server to be healthy\nI0705 16:54:40.206852     743 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". This can take up to 4m0s\nI0705 16:54:40.214184     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 2 milliseconds\nI0705 16:54:40.714913     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0705 16:54:41.215019     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0705 16:54:41.715050     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0705 16:54:42.215390     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0705 16:54:42.714984     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0705 16:54:43.214988     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0705 16:54:43.715142     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0705 16:54:44.214989     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0705 16:54:44.715049     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0705 16:54:45.214961     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0705 16:54:45.714857     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0705 16:54:46.215152     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0705 16:54:46.715155     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0705 16:54:47.214981     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0705 16:54:47.714851     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0705 16:54:48.215006     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0705 16:54:48.714972     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0705 16:54:49.215543     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0705 16:54:49.715019     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0705 16:54:50.215035     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0705 16:54:50.715069     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0705 16:54:51.214849     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0705 16:54:51.715079     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 
milliseconds\nI0705 16:54:52.214998     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0705 16:54:52.714951     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0705 16:54:53.215116     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0705 16:54:53.715242     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0705 16:54:54.215132     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0705 16:54:54.715083     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0705 16:54:55.215038     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0705 16:54:55.714876     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0705 16:54:56.215069     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0705 16:54:56.715261     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0705 16:54:57.215366     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0705 16:54:57.714895     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0705 16:54:58.214967     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0705 16:55:05.558720     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s 500 Internal Server Error in 6844 milliseconds\nI0705 16:55:05.716862     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s 500 Internal Server Error in 2 milliseconds\nI0705 16:55:06.216950     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s 500 Internal Server Error in 2 milliseconds\nI0705 16:55:06.716002     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s 500 Internal Server Error in 1 milliseconds\nI0705 16:55:07.216659     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s 500 Internal Server Error in 2 milliseconds\nI0705 16:55:07.716874     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s 500 Internal Server Error in 2 milliseconds\nI0705 16:55:08.216785     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s 500 Internal Server Error in 2 milliseconds\nI0705 16:55:08.716782     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s 500 Internal Server Error in 2 milliseconds\nI0705 16:55:09.216646     743 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s 200 OK in 2 milliseconds\n[apiclient] All control plane components are healthy after 29.005492 seconds\nI0705 16:55:09.217599     743 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf\nI0705 16:55:09.218085     743 uploadconfig.go:114] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap\n[uploadconfig] storing the configuration used in ConfigMap \"kubeadm-config\" in the \"kube-system\" Namespace\nI0705 16:55:09.220859     743 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 404 Not Found in 1 milliseconds\nI0705 16:55:09.225574     743 round_trippers.go:438] POST 
https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 3 milliseconds\nI0705 16:55:09.231491     743 round_trippers.go:438] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 201 Created in 3 milliseconds\nI0705 16:55:09.235500     743 round_trippers.go:438] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 201 Created in 3 milliseconds\nI0705 16:55:09.238375     743 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf\nI0705 16:55:09.239089     743 uploadconfig.go:128] [upload-config] Uploading the kubelet component config to a ConfigMap\n[kubelet] Creating a ConfigMap \"kubelet-config-1.13\" in namespace kube-system with the configuration for the kubelets in the cluster\nI0705 16:55:09.243001     743 round_trippers.go:438] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 3 milliseconds\nI0705 16:55:09.245838     743 round_trippers.go:438] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 201 Created in 2 milliseconds\nI0705 16:55:09.247983     743 round_trippers.go:438] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 201 Created in 1 milliseconds\nI0705 16:55:09.248140     743 uploadconfig.go:133] [upload-config] Preserving the CRISocket information for the control-plane node\n[patchnode] Uploading the CRI Socket information \"/var/run/dockershim.sock\" to the Node API object \"kind-kubetest-control-plane\" as an annotation\nI0705 16:55:09.751550     743 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/nodes/kind-kubetest-control-plane 200 OK in 3 milliseconds\nI0705 16:55:09.758019     743 round_trippers.go:438] PATCH https://172.17.0.2:6443/api/v1/nodes/kind-kubetest-control-plane 200 OK in 3 milliseconds\nI0705 16:55:09.759265     743 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf\n[mark-control-plane] Marking the node kind-kubetest-control-plane as control-plane by adding the label \"node-role.kubernetes.io/master=''\"\n[mark-control-plane] Marking the node kind-kubetest-control-plane as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]\nI0705 16:55:10.263287     743 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/nodes/kind-kubetest-control-plane 200 OK in 3 milliseconds\nI0705 16:55:10.268332     743 round_trippers.go:438] PATCH https://172.17.0.2:6443/api/v1/nodes/kind-kubetest-control-plane 200 OK in 4 milliseconds\nI0705 16:55:10.269300     743 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf\n[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles\nI0705 16:55:10.271721     743 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/secrets/bootstrap-token-abcdef 404 Not Found in 1 milliseconds\nI0705 16:55:10.275388     743 round_trippers.go:438] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/secrets 201 Created in 3 milliseconds\n[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials\nI0705 16:55:10.279511     743 round_trippers.go:438] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 3 milliseconds\n[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token\nI0705 
16:55:10.283203     743 round_trippers.go:438] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 2 milliseconds\n[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster\nI0705 16:55:10.285454     743 round_trippers.go:438] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 2 milliseconds\n[bootstraptoken] creating the \"cluster-info\" ConfigMap in the \"kube-public\" namespace\nI0705 16:55:10.285561     743 clusterinfo.go:46] [bootstraptoken] loading admin kubeconfig\nI0705 16:55:10.287579     743 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf\nI0705 16:55:10.287603     743 clusterinfo.go:54] [bootstraptoken] copying the cluster from admin.conf to the bootstrap kubeconfig\nI0705 16:55:10.288122     743 clusterinfo.go:66] [bootstraptoken] creating/updating ConfigMap in kube-public namespace\nI0705 16:55:10.292037     743 round_trippers.go:438] POST https://172.17.0.2:6443/api/v1/namespaces/kube-public/configmaps 201 Created in 3 milliseconds\nI0705 16:55:10.292182     743 clusterinfo.go:80] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace\nI0705 16:55:10.297925     743 round_trippers.go:438] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles 201 Created in 5 milliseconds\nI0705 16:55:10.300252     743 round_trippers.go:438] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings 201 Created in 2 milliseconds\nI0705 16:55:10.301120     743 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf\nI0705 16:55:10.303703     743 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kube-dns 404 Not Found in 1 milliseconds\nI0705 16:55:10.305740     743 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/coredns 404 Not Found in 1 milliseconds\nI0705 16:55:10.308058     743 round_trippers.go:438] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 2 milliseconds\nI0705 16:55:10.312741     743 round_trippers.go:438] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/clusterroles 201 Created in 3 milliseconds\nI0705 16:55:10.314880     743 round_trippers.go:438] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 1 milliseconds\nI0705 16:55:10.319183     743 round_trippers.go:438] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/serviceaccounts 201 Created in 3 milliseconds\nI0705 16:55:10.337839     743 round_trippers.go:438] POST https://172.17.0.2:6443/apis/apps/v1/namespaces/kube-system/deployments 201 Created in 10 milliseconds\nI0705 16:55:10.345141     743 round_trippers.go:438] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/services 201 Created in 5 milliseconds\n[addons] Applied essential addon: CoreDNS\nI0705 16:55:10.346023     743 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf\nI0705 16:55:10.350765     743 round_trippers.go:438] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/serviceaccounts 201 Created in 2 milliseconds\nI0705 16:55:10.354670     743 round_trippers.go:438] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 2 milliseconds\nI0705 16:55:10.368252     743 round_trippers.go:438] POST 
https://172.17.0.2:6443/apis/apps/v1/namespaces/kube-system/daemonsets 201 Created in 6 milliseconds\nI0705 16:55:10.370640     743 round_trippers.go:438] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 1 milliseconds\nI0705 16:55:10.373210     743 round_trippers.go:438] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 201 Created in 2 milliseconds\nI0705 16:55:10.375509     743 round_trippers.go:438] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 201 Created in 2 milliseconds\n[addons] Applied essential addon: kube-proxy\nI0705 16:55:10.376663     743 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf\n\nYour Kubernetes master has initialized successfully!\n\nTo start using your cluster, you need to run the following as a regular user:\n\n  mkdir -p $HOME/.kube\n  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\n  sudo chown $(id -u):$(id -g) $HOME/.kube/config\n\nYou should now deploy a pod network to the cluster.\nRun \"kubectl apply -f [podnetwork].yaml\" with one of the options listed at:\n  https://kubernetes.io/docs/concepts/cluster-administration/addons/\n\nYou can now join any number of machines by running the following on each node\nas root:\n\n  kubeadm join 172.17.0.2:6443 --token <value withheld> --discovery-token-ca-cert-hash sha256:d48c50e6538940b1fb482b1167bf7b4626a9585630623719ff733d2c8c5eaa9c\n"
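The long wait-control-plane stretch in the init output above is kubeadm polling the apiserver's /healthz until it moves from refused connections through 500 to 200 OK. The same probe can be issued by hand; a sketch using the control-plane address from this log:

    # -k skips TLS verification, which is fine for a quick manual probe
    curl -k https://172.17.0.2:6443/healthz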
time="16:55:10" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{(index (index .NetworkSettings.Ports \"6443/tcp\") 0).HostPort}} kind-kubetest-control-plane]"
time="16:55:10" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t kind-kubetest-control-plane cat /etc/kubernetes/admin.conf]"
time="16:55:10" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-kubetest-control-plane test -f /kind/manifests/default-cni.yaml]"
time="16:55:10" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-kubetest-control-plane kubectl create --kubeconfig=/etc/kubernetes/admin.conf -f /kind/manifests/default-cni.yaml]"
time="16:55:11" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -i kind-kubetest-control-plane kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f -]"
 ✓ Starting control-plane 🕹️
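The last two docker exec lines before this checkmark show kind installing its default CNI by piping a manifest into kubectl on the node. Applying any other manifest to this cluster follows the same shape; the manifest filename below is illustrative:

    docker exec --privileged -i kind-kubetest-control-plane \
      kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f - < my-manifest.yaml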
 • Joining worker nodes 🚜  ...
time="16:55:12" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}} kind-kubetest-control-plane]"
time="16:55:12" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}} kind-kubetest-control-plane]"
time="16:55:12" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}} kind-kubetest-control-plane]"
time="16:55:12" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t kind-kubetest-worker2 kubeadm join 172.17.0.2:6443 --token abcdef.0123456789abcdef --discovery-token-unsafe-skip-ca-verification --ignore-preflight-errors=all --v=6]"
time="16:55:12" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t kind-kubetest-worker3 kubeadm join 172.17.0.2:6443 --token abcdef.0123456789abcdef --discovery-token-unsafe-skip-ca-verification --ignore-preflight-errors=all --v=6]"
time="16:55:12" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t kind-kubetest-worker kubeadm join 172.17.0.2:6443 --token abcdef.0123456789abcdef --discovery-token-unsafe-skip-ca-verification --ignore-preflight-errors=all --v=6]"
time="16:55:19" level=debug msg="I0705 16:55:12.527541     798 join.go:299] [join] found NodeName empty; using OS hostname as NodeName\n[preflight] Running pre-flight checks\nI0705 16:55:12.528117     798 join.go:328] [preflight] Running general checks\nI0705 16:55:12.528317     798 checks.go:245] validating the existence and emptiness of directory /etc/kubernetes/manifests\nI0705 16:55:12.528392     798 checks.go:283] validating the existence of file /etc/kubernetes/kubelet.conf\nI0705 16:55:12.528438     798 checks.go:283] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf\nI0705 16:55:12.528493     798 checks.go:104] validating the container runtime\nI0705 16:55:12.640815     798 checks.go:130] validating if the service is enabled and active\nI0705 16:55:12.663635     798 checks.go:332] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables\n\t[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist\nI0705 16:55:12.663784     798 checks.go:332] validating the contents of file /proc/sys/net/ipv4/ip_forward\nI0705 16:55:12.664033     798 checks.go:644] validating whether swap is enabled or not\nI0705 16:55:12.664251     798 checks.go:373] validating the presence of executable ip\nI0705 16:55:12.664358     798 checks.go:373] validating the presence of executable iptables\nI0705 16:55:12.664393     798 checks.go:373] validating the presence of executable mount\nI0705 16:55:12.664459     798 checks.go:373] validating the presence of executable nsenter\nI0705 16:55:12.664507     798 checks.go:373] validating the presence of executable ebtables\nI0705 16:55:12.664553     798 checks.go:373] validating the presence of executable ethtool\nI0705 16:55:12.664599     798 checks.go:373] validating the presence of executable socat\nI0705 16:55:12.664643     798 checks.go:373] validating the presence of executable tc\nI0705 16:55:12.664736     798 checks.go:373] validating the presence of executable touch\nI0705 16:55:12.664797     798 checks.go:515] running all checks\nI0705 16:55:12.691863     798 checks.go:403] checking whether the given node name is reachable using net.LookupHost\nI0705 16:55:12.692102     798 checks.go:613] validating kubelet version\nI0705 16:55:12.774634     798 checks.go:130] validating if the service is enabled and active\nI0705 16:55:12.799006     798 checks.go:208] validating availability of port 10250\nI0705 16:55:12.799654     798 checks.go:283] validating the existence of file /etc/kubernetes/pki/ca.crt\nI0705 16:55:12.799720     798 checks.go:430] validating if the connectivity type is via proxy or direct\nI0705 16:55:12.799759     798 join.go:334] [preflight] Fetching init configuration\nI0705 16:55:12.799800     798 join.go:603] [join] Discovering cluster-info\n[discovery] Trying to connect to API Server \"172.17.0.2:6443\"\n[discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.2:6443\"\nI0705 16:55:12.808252     798 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 7 milliseconds\n[discovery] Failed to connect to API Server \"172.17.0.2:6443\": token id \"abcdef\" is invalid for this cluster or it has expired. 
Use \"kubeadm token create\" on the master node to creating a new valid token\n[discovery] Trying to connect to API Server \"172.17.0.2:6443\"\n[discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.2:6443\"\nI0705 16:55:17.814894     798 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 4 milliseconds\n[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server \"172.17.0.2:6443\"\n[discovery] Successfully established connection with API Server \"172.17.0.2:6443\"\nI0705 16:55:17.816220     798 join.go:610] [join] Retrieving KubeConfig objects\n[join] Reading configuration from the cluster...\n[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'\nI0705 16:55:17.831941     798 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 200 OK in 14 milliseconds\nI0705 16:55:17.834651     798 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kube-proxy 200 OK in 1 milliseconds\nI0705 16:55:17.837612     798 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.13 200 OK in 1 milliseconds\nI0705 16:55:17.839016     798 interface.go:384] Looking for default routes with IPv4 addresses\nI0705 16:55:17.839059     798 interface.go:389] Default route transits interface \"eth0\"\nI0705 16:55:17.839222     798 interface.go:196] Interface eth0 is up\nI0705 16:55:17.839380     798 interface.go:244] Interface \"eth0\" has 1 addresses :[172.17.0.5/16].\nI0705 16:55:17.839437     798 interface.go:211] Checking addr  172.17.0.5/16.\nI0705 16:55:17.839486     798 interface.go:218] IP found 172.17.0.5\nI0705 16:55:17.839516     798 interface.go:250] Found valid IPv4 address 172.17.0.5 for interface \"eth0\".\nI0705 16:55:17.839559     798 interface.go:395] Found active IP 172.17.0.5 \nI0705 16:55:17.839660     798 join.go:341] [preflight] Running configuration dependant checks\nI0705 16:55:17.839728     798 join.go:478] [join] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf\nI0705 16:55:17.922929     798 loader.go:359] Config loaded from file /etc/kubernetes/bootstrap-kubelet.conf\nI0705 16:55:17.924042     798 join.go:503] Stopping the kubelet\n[kubelet] Downloading configuration for the kubelet from the \"kubelet-config-1.13\" ConfigMap in the kube-system namespace\nI0705 16:55:17.944296     798 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.13 200 OK in 2 milliseconds\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\nI0705 16:55:18.053673     798 join.go:520] Starting the kubelet\n[kubelet-start] Activating the kubelet service\n[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...\nI0705 16:55:18.642483     798 loader.go:359] Config loaded from file /etc/kubernetes/kubelet.conf\nI0705 16:55:18.657205     798 loader.go:359] Config loaded from file /etc/kubernetes/kubelet.conf\nI0705 16:55:18.659958     798 join.go:538] [join] preserving the crisocket information for the node\n[patchnode] Uploading the CRI Socket information \"/var/run/dockershim.sock\" to the Node API object \"kind-kubetest-worker\" as an 
annotation\nI0705 16:55:19.172335     798 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/nodes/kind-kubetest-worker 200 OK in 11 milliseconds\nI0705 16:55:19.182789     798 round_trippers.go:438] PATCH https://172.17.0.2:6443/api/v1/nodes/kind-kubetest-worker 200 OK in 6 milliseconds\n\nThis node has joined the cluster:\n* Certificate signing request was sent to apiserver and a response was received.\n* The Kubelet was informed of the new secure connection details.\n\nRun 'kubectl get nodes' on the master to see this node join the cluster.\n"
time="16:55:19" level=debug msg="I0705 16:55:12.513447     803 join.go:299] [join] found NodeName empty; using OS hostname as NodeName\n[preflight] Running pre-flight checks\nI0705 16:55:12.513673     803 join.go:328] [preflight] Running general checks\nI0705 16:55:12.513776     803 checks.go:245] validating the existence and emptiness of directory /etc/kubernetes/manifests\nI0705 16:55:12.514064     803 checks.go:283] validating the existence of file /etc/kubernetes/kubelet.conf\nI0705 16:55:12.514209     803 checks.go:283] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf\nI0705 16:55:12.514253     803 checks.go:104] validating the container runtime\nI0705 16:55:12.638141     803 checks.go:130] validating if the service is enabled and active\nI0705 16:55:12.663741     803 checks.go:332] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables\n\t[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist\nI0705 16:55:12.663981     803 checks.go:332] validating the contents of file /proc/sys/net/ipv4/ip_forward\nI0705 16:55:12.664113     803 checks.go:644] validating whether swap is enabled or not\nI0705 16:55:12.664323     803 checks.go:373] validating the presence of executable ip\nI0705 16:55:12.664892     803 checks.go:373] validating the presence of executable iptables\nI0705 16:55:12.665016     803 checks.go:373] validating the presence of executable mount\nI0705 16:55:12.665113     803 checks.go:373] validating the presence of executable nsenter\nI0705 16:55:12.665222     803 checks.go:373] validating the presence of executable ebtables\nI0705 16:55:12.665331     803 checks.go:373] validating the presence of executable ethtool\nI0705 16:55:12.665441     803 checks.go:373] validating the presence of executable socat\nI0705 16:55:12.665569     803 checks.go:373] validating the presence of executable tc\nI0705 16:55:12.665696     803 checks.go:373] validating the presence of executable touch\nI0705 16:55:12.667704     803 checks.go:515] running all checks\nI0705 16:55:12.692411     803 checks.go:403] checking whether the given node name is reachable using net.LookupHost\nI0705 16:55:12.692796     803 checks.go:613] validating kubelet version\nI0705 16:55:12.774153     803 checks.go:130] validating if the service is enabled and active\nI0705 16:55:12.789522     803 checks.go:208] validating availability of port 10250\nI0705 16:55:12.789640     803 checks.go:283] validating the existence of file /etc/kubernetes/pki/ca.crt\nI0705 16:55:12.789657     803 checks.go:430] validating if the connectivity type is via proxy or direct\nI0705 16:55:12.789697     803 join.go:334] [preflight] Fetching init configuration\nI0705 16:55:12.789725     803 join.go:603] [join] Discovering cluster-info\n[discovery] Trying to connect to API Server \"172.17.0.2:6443\"\n[discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.2:6443\"\nI0705 16:55:12.798254     803 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 7 milliseconds\n[discovery] Failed to connect to API Server \"172.17.0.2:6443\": token id \"abcdef\" is invalid for this cluster or it has expired. 
Use \"kubeadm token create\" on the master node to creating a new valid token\n[discovery] Trying to connect to API Server \"172.17.0.2:6443\"\n[discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.2:6443\"\nI0705 16:55:17.804438     803 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 3 milliseconds\n[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server \"172.17.0.2:6443\"\n[discovery] Successfully established connection with API Server \"172.17.0.2:6443\"\nI0705 16:55:17.806345     803 join.go:610] [join] Retrieving KubeConfig objects\n[join] Reading configuration from the cluster...\n[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'\nI0705 16:55:17.813217     803 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 200 OK in 6 milliseconds\nI0705 16:55:17.821144     803 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kube-proxy 200 OK in 6 milliseconds\nI0705 16:55:17.824614     803 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.13 200 OK in 2 milliseconds\nI0705 16:55:17.826184     803 interface.go:384] Looking for default routes with IPv4 addresses\nI0705 16:55:17.826266     803 interface.go:389] Default route transits interface \"eth0\"\nI0705 16:55:17.826463     803 interface.go:196] Interface eth0 is up\nI0705 16:55:17.826576     803 interface.go:244] Interface \"eth0\" has 1 addresses :[172.17.0.3/16].\nI0705 16:55:17.826648     803 interface.go:211] Checking addr  172.17.0.3/16.\nI0705 16:55:17.826710     803 interface.go:218] IP found 172.17.0.3\nI0705 16:55:17.826768     803 interface.go:250] Found valid IPv4 address 172.17.0.3 for interface \"eth0\".\nI0705 16:55:17.826819     803 interface.go:395] Found active IP 172.17.0.3 \nI0705 16:55:17.829136     803 join.go:341] [preflight] Running configuration dependant checks\nI0705 16:55:17.829242     803 join.go:478] [join] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf\nI0705 16:55:17.915647     803 loader.go:359] Config loaded from file /etc/kubernetes/bootstrap-kubelet.conf\nI0705 16:55:17.916415     803 join.go:503] Stopping the kubelet\n[kubelet] Downloading configuration for the kubelet from the \"kubelet-config-1.13\" ConfigMap in the kube-system namespace\nI0705 16:55:17.935499     803 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.13 200 OK in 2 milliseconds\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\nI0705 16:55:18.060930     803 join.go:520] Starting the kubelet\n[kubelet-start] Activating the kubelet service\n[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...\nI0705 16:55:19.141110     803 loader.go:359] Config loaded from file /etc/kubernetes/kubelet.conf\nI0705 16:55:19.156389     803 loader.go:359] Config loaded from file /etc/kubernetes/kubelet.conf\nI0705 16:55:19.159306     803 join.go:538] [join] preserving the crisocket information for the node\n[patchnode] Uploading the CRI Socket information \"/var/run/dockershim.sock\" to the Node API object \"kind-kubetest-worker2\" as an 
annotation\nI0705 16:55:19.673377     803 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/nodes/kind-kubetest-worker2 200 OK in 12 milliseconds\nI0705 16:55:19.682216     803 round_trippers.go:438] PATCH https://172.17.0.2:6443/api/v1/nodes/kind-kubetest-worker2 200 OK in 6 milliseconds\n\nThis node has joined the cluster:\n* Certificate signing request was sent to apiserver and a response was received.\n* The Kubelet was informed of the new secure connection details.\n\nRun 'kubectl get nodes' on the master to see this node join the cluster.\n"
time="16:55:19" level=debug msg="I0705 16:55:12.522245     785 join.go:299] [join] found NodeName empty; using OS hostname as NodeName\n[preflight] Running pre-flight checks\nI0705 16:55:12.522406     785 join.go:328] [preflight] Running general checks\nI0705 16:55:12.522552     785 checks.go:245] validating the existence and emptiness of directory /etc/kubernetes/manifests\nI0705 16:55:12.522658     785 checks.go:283] validating the existence of file /etc/kubernetes/kubelet.conf\nI0705 16:55:12.522706     785 checks.go:283] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf\nI0705 16:55:12.522723     785 checks.go:104] validating the container runtime\nI0705 16:55:12.628351     785 checks.go:130] validating if the service is enabled and active\nI0705 16:55:12.653412     785 checks.go:332] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables\n\t[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist\nI0705 16:55:12.653563     785 checks.go:332] validating the contents of file /proc/sys/net/ipv4/ip_forward\nI0705 16:55:12.653628     785 checks.go:644] validating whether swap is enabled or not\nI0705 16:55:12.653698     785 checks.go:373] validating the presence of executable ip\nI0705 16:55:12.653810     785 checks.go:373] validating the presence of executable iptables\nI0705 16:55:12.653846     785 checks.go:373] validating the presence of executable mount\nI0705 16:55:12.653879     785 checks.go:373] validating the presence of executable nsenter\nI0705 16:55:12.653921     785 checks.go:373] validating the presence of executable ebtables\nI0705 16:55:12.653964     785 checks.go:373] validating the presence of executable ethtool\nI0705 16:55:12.654006     785 checks.go:373] validating the presence of executable socat\nI0705 16:55:12.654046     785 checks.go:373] validating the presence of executable tc\nI0705 16:55:12.654093     785 checks.go:373] validating the presence of executable touch\nI0705 16:55:12.654141     785 checks.go:515] running all checks\nI0705 16:55:12.686021     785 checks.go:403] checking whether the given node name is reachable using net.LookupHost\nI0705 16:55:12.686624     785 checks.go:613] validating kubelet version\nI0705 16:55:12.774245     785 checks.go:130] validating if the service is enabled and active\nI0705 16:55:12.789611     785 checks.go:208] validating availability of port 10250\nI0705 16:55:12.789848     785 checks.go:283] validating the existence of file /etc/kubernetes/pki/ca.crt\nI0705 16:55:12.789879     785 checks.go:430] validating if the connectivity type is via proxy or direct\nI0705 16:55:12.789932     785 join.go:334] [preflight] Fetching init configuration\nI0705 16:55:12.789942     785 join.go:603] [join] Discovering cluster-info\n[discovery] Trying to connect to API Server \"172.17.0.2:6443\"\n[discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.2:6443\"\nI0705 16:55:12.799708     785 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 8 milliseconds\n[discovery] Failed to connect to API Server \"172.17.0.2:6443\": token id \"abcdef\" is invalid for this cluster or it has expired. 
Use \"kubeadm token create\" on the master node to creating a new valid token\n[discovery] Trying to connect to API Server \"172.17.0.2:6443\"\n[discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.2:6443\"\nI0705 16:55:17.804376     785 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 2 milliseconds\n[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server \"172.17.0.2:6443\"\n[discovery] Successfully established connection with API Server \"172.17.0.2:6443\"\nI0705 16:55:17.805737     785 join.go:610] [join] Retrieving KubeConfig objects\n[join] Reading configuration from the cluster...\n[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'\nI0705 16:55:17.813458     785 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 200 OK in 7 milliseconds\nI0705 16:55:17.821336     785 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kube-proxy 200 OK in 6 milliseconds\nI0705 16:55:17.824532     785 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.13 200 OK in 2 milliseconds\nI0705 16:55:17.827323     785 interface.go:384] Looking for default routes with IPv4 addresses\nI0705 16:55:17.828084     785 interface.go:389] Default route transits interface \"eth0\"\nI0705 16:55:17.828533     785 interface.go:196] Interface eth0 is up\nI0705 16:55:17.828618     785 interface.go:244] Interface \"eth0\" has 1 addresses :[172.17.0.4/16].\nI0705 16:55:17.828651     785 interface.go:211] Checking addr  172.17.0.4/16.\nI0705 16:55:17.828665     785 interface.go:218] IP found 172.17.0.4\nI0705 16:55:17.828704     785 interface.go:250] Found valid IPv4 address 172.17.0.4 for interface \"eth0\".\nI0705 16:55:17.828762     785 interface.go:395] Found active IP 172.17.0.4 \nI0705 16:55:17.829093     785 join.go:341] [preflight] Running configuration dependant checks\nI0705 16:55:17.829187     785 join.go:478] [join] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf\nI0705 16:55:17.919346     785 loader.go:359] Config loaded from file /etc/kubernetes/bootstrap-kubelet.conf\nI0705 16:55:17.920467     785 join.go:503] Stopping the kubelet\n[kubelet] Downloading configuration for the kubelet from the \"kubelet-config-1.13\" ConfigMap in the kube-system namespace\nI0705 16:55:17.942375     785 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.13 200 OK in 3 milliseconds\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\nI0705 16:55:18.050959     785 join.go:520] Starting the kubelet\n[kubelet-start] Activating the kubelet service\n[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...\nI0705 16:55:19.141096     785 loader.go:359] Config loaded from file /etc/kubernetes/kubelet.conf\nI0705 16:55:19.157491     785 loader.go:359] Config loaded from file /etc/kubernetes/kubelet.conf\nI0705 16:55:19.160052     785 join.go:538] [join] preserving the crisocket information for the node\n[patchnode] Uploading the CRI Socket information \"/var/run/dockershim.sock\" to the Node API object \"kind-kubetest-worker3\" as an 
annotation\nI0705 16:55:19.674625     785 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/nodes/kind-kubetest-worker3 200 OK in 14 milliseconds\nI0705 16:55:19.694394     785 round_trippers.go:438] PATCH https://172.17.0.2:6443/api/v1/nodes/kind-kubetest-worker3 200 OK in 13 milliseconds\n\nThis node has joined the cluster:\n* Certificate signing request was sent to apiserver and a response was received.\n* The Kubelet was informed of the new secure connection details.\n\nRun 'kubectl get nodes' on the master to see this node join the cluster.\n"
 ✓ Joining worker nodes 🚜
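Each join above failed once with token id "abcdef" is invalid and succeeded on retry, most plausibly the bootstrap token secret still propagating. Had the error persisted, the log's own advice would apply; a sketch run on the control-plane node, which also shows the CA-pinned join these CI runs skip via --discovery-token-unsafe-skip-ca-verification:

    kubeadm token list                          # inspect existing bootstrap tokens
    kubeadm token create --print-join-command   # mint a fresh token and print the full join line
    # a pinned join, reusing the discovery hash kubeadm printed at init time:
    kubeadm join 172.17.0.2:6443 --token abcdef.0123456789abcdef \
      --discovery-token-ca-cert-hash sha256:d48c50e6538940b1fb482b1167bf7b4626a9585630623719ff733d2c8c5eaa9c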
 • Waiting ≤ 1m0s for control-plane = Ready ⏳  ...
time="16:55:19" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t kind-kubetest-control-plane kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes --selector=node-role.kubernetes.io/master -o=jsonpath='{.items..status.conditions[-1:].status}']"
time="16:55:20" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t kind-kubetest-control-plane kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes --selector=node-role.kubernetes.io/master -o=jsonpath='{.items..status.conditions[-1:].status}']"
time="16:55:20" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t kind-kubetest-control-plane kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes --selector=node-role.kubernetes.io/master -o=jsonpath='{.items..status.conditions[-1:].status}']"
time="16:55:20" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t kind-kubetest-control-plane kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes --selector=node-role.kubernetes.io/master -o=jsonpath='{.items..status.conditions[-1:].status}']"
... skipping 2576 lines ...
Jul  5 16:56:36.025: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-g8sd2/dns-test-cfa9eddc-9f45-11e9-a2f8-32ed06d97aec: the server could not find the requested resource (get pods dns-test-cfa9eddc-9f45-11e9-a2f8-32ed06d97aec)
Jul  5 16:56:36.049: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-g8sd2/dns-test-cfa9eddc-9f45-11e9-a2f8-32ed06d97aec: the server could not find the requested resource (get pods dns-test-cfa9eddc-9f45-11e9-a2f8-32ed06d97aec)
Jul  5 16:56:36.065: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-g8sd2.svc.cluster.local from pod e2e-tests-dns-g8sd2/dns-test-cfa9eddc-9f45-11e9-a2f8-32ed06d97aec: the server could not find the requested resource (get pods dns-test-cfa9eddc-9f45-11e9-a2f8-32ed06d97aec)
Jul  5 16:56:36.086: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-g8sd2/dns-test-cfa9eddc-9f45-11e9-a2f8-32ed06d97aec: the server could not find the requested resource (get pods dns-test-cfa9eddc-9f45-11e9-a2f8-32ed06d97aec)
Jul  5 16:56:36.095: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-g8sd2/dns-test-cfa9eddc-9f45-11e9-a2f8-32ed06d97aec: the server could not find the requested resource (get pods dns-test-cfa9eddc-9f45-11e9-a2f8-32ed06d97aec)
Jul  5 16:56:36.107: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-g8sd2/dns-test-cfa9eddc-9f45-11e9-a2f8-32ed06d97aec: the server could not find the requested resource (get pods dns-test-cfa9eddc-9f45-11e9-a2f8-32ed06d97aec)
Jul  5 16:56:36.107: INFO: Lookups using e2e-tests-dns-g8sd2/dns-test-cfa9eddc-9f45-11e9-a2f8-32ed06d97aec failed for: [wheezy_udp@kubernetes.default wheezy_tcp@kubernetes.default wheezy_udp@kubernetes.default.svc wheezy_tcp@kubernetes.default.svc wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-g8sd2.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-g8sd2.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jul  5 16:56:41.112: INFO: Unable to read wheezy_udp@kubernetes.default from pod e2e-tests-dns-g8sd2/dns-test-cfa9eddc-9f45-11e9-a2f8-32ed06d97aec: the server could not find the requested resource (get pods dns-test-cfa9eddc-9f45-11e9-a2f8-32ed06d97aec)
Jul  5 16:56:41.237: INFO: Lookups using e2e-tests-dns-g8sd2/dns-test-cfa9eddc-9f45-11e9-a2f8-32ed06d97aec failed for: [wheezy_udp@kubernetes.default]

Jul  5 16:56:46.222: INFO: DNS probes using e2e-tests-dns-g8sd2/dns-test-cfa9eddc-9f45-11e9-a2f8-32ed06d97aec succeeded
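The probe pod resolves the full name matrix from two resolver images (wheezy and jessie), and the suite retries every 5s until every lookup succeeds, so the transient failures above are expected while DNS endpoints converge. A quick manual spot check of cluster DNS, assuming only a working kubeconfig (pod name and image are arbitrary):

    # Throwaway pod that exercises the cluster resolver once, then is removed.
    kubectl run dns-spot-check --image=busybox:1.29 --restart=Never --rm -i -- \
      nslookup kubernetes.default.svc.cluster.local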

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:154
... skipping 325 lines ...
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jul  5 16:56:48.553: INFO: Successfully updated pod "pod-update-activedeadlineseconds-df58758b-9f45-11e9-9b8d-32ed06d97aec"
Jul  5 16:56:48.553: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-df58758b-9f45-11e9-9b8d-32ed06d97aec" in namespace "e2e-tests-pods-9rfc7" to be "terminated due to deadline exceeded"
Jul  5 16:56:48.559: INFO: Pod "pod-update-activedeadlineseconds-df58758b-9f45-11e9-9b8d-32ed06d97aec": Phase="Running", Reason="", readiness=true. Elapsed: 5.441333ms
Jul  5 16:56:50.563: INFO: Pod "pod-update-activedeadlineseconds-df58758b-9f45-11e9-9b8d-32ed06d97aec": Phase="Running", Reason="", readiness=true. Elapsed: 2.009751609s
Jul  5 16:56:52.566: INFO: Pod "pod-update-activedeadlineseconds-df58758b-9f45-11e9-9b8d-32ed06d97aec": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.013100852s
Jul  5 16:56:52.566: INFO: Pod "pod-update-activedeadlineseconds-df58758b-9f45-11e9-9b8d-32ed06d97aec" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:154
Jul  5 16:56:52.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-9rfc7" for this suite.
Jul  5 16:56:58.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 866 lines ...
STEP: Creating a kubernetes client
Jul  5 16:56:17.766: INFO: >>> kubeConfig: /root/.kube/kind-config-kind-kubetest
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:699
STEP: creating the pod
Jul  5 16:56:17.869: INFO: PodSpec: initContainers in spec.initContainers
Jul  5 16:57:04.853: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-ce913dcf-9f45-11e9-bae5-32ed06d97aec", GenerateName:"", Namespace:"e2e-tests-init-container-gw5pd", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-gw5pd/pods/pod-init-ce913dcf-9f45-11e9-bae5-32ed06d97aec", UID:"ce93ecd0-9f45-11e9-81ec-024230ce147e", ResourceVersion:"4396", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63697942577, loc:(*time.Location)(0x7929da0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"869569613"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-kr5m7", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc000a3efc0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-kr5m7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-kr5m7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-kr5m7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000f1a558), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kind-kubetest-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000d8c720), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000f1a5d0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000f1a5f0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000f1a5f8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000f1a5fc)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697942577, loc:(*time.Location)(0x7929da0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697942577, loc:(*time.Location)(0x7929da0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697942577, loc:(*time.Location)(0x7929da0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697942577, loc:(*time.Location)(0x7929da0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.5", PodIP:"10.40.0.1", StartTime:(*v1.Time)(0xc000d8b1e0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0003aa700)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0003aa770)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://5d7d92411e4b3f18455465a103db5107f0a414d98dd79287d798bb6a93e4f9d4"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000d8b220), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000d8b200), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
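The dump above is the steady state the test asserts: init1 (/bin/false) has terminated and been restarted three times, init2 (/bin/true) is still Waiting because init containers run strictly in order, and the app container run1 never starts. Under restartPolicy Always the kubelet retries a failing init container with backoff indefinitely instead of failing the pod. A trimmed manifest reproducing the shape of that pod (names and images mirror the spec above):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: init-fail-demo
    spec:
      restartPolicy: Always
      initContainers:
      - name: init1
        image: busybox:1.29
        command: ["/bin/false"]   # exits 1 every time; retried with backoff
      - name: init2
        image: busybox:1.29
        command: ["/bin/true"]    # never reached while init1 keeps failing
      containers:
      - name: run1
        image: k8s.gcr.io/pause:3.1   # stays Waiting behind the init containers
    EOF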
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:154
Jul  5 16:57:04.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-gw5pd" for this suite.
Jul  5 16:57:26.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 16:57:26.903: INFO: namespace: e2e-tests-init-container-gw5pd, resource: bindings, ignored listing per whitelist
Jul  5 16:57:26.975: INFO: namespace e2e-tests-init-container-gw5pd deletion completed in 22.101154795s


• [SLOW TEST:69.209 seconds]
[k8s.io] InitContainer [NodeConformance]
test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:153
... skipping 547 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-92ngr
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-92ngr
STEP: Waiting until pod test-pod starts running in namespace e2e-tests-statefulset-92ngr
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace e2e-tests-statefulset-92ngr
Jul  5 16:57:11.525: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-92ngr, name: ss-0, uid: ee648118-9f45-11e9-81ec-024230ce147e, status phase: Pending. Waiting for statefulset controller to delete.
Jul  5 16:57:12.648: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-92ngr, name: ss-0, uid: ee648118-9f45-11e9-81ec-024230ce147e, status phase: Failed. Waiting for statefulset controller to delete.
Jul  5 16:57:12.654: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-92ngr, name: ss-0, uid: ee648118-9f45-11e9-81ec-024230ce147e, status phase: Failed. Waiting for statefulset controller to delete.
Jul  5 16:57:12.659: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-92ngr
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-92ngr
STEP: Waiting until stateful pod ss-0 is recreated in namespace e2e-tests-statefulset-92ngr and reaches the Running state
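The conflict being exercised is a host port: the standalone test-pod and the stateful pod are pinned to the same node, so the kubelet rejects whichever pod claims the port second (phase Failed), and the StatefulSet controller keeps deleting and recreating ss-0 until the port frees up. A sketch of the colliding spec, with the port and node name treated as illustrative for this run:

    # Create one holder; a second pod with the same nodeName and hostPort is
    # then rejected by the kubelet (phase Failed) because nodeName bypasses
    # the scheduler's host-port filtering.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: port-holder
    spec:
      nodeName: kind-kubetest-worker
      containers:
      - name: web
        image: k8s.gcr.io/pause:3.1
        ports:
        - containerPort: 21017
          hostPort: 21017   # only one pod per node can bind this
    EOF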
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:85
Jul  5 16:57:20.702: INFO: Deleting all statefulset in ns e2e-tests-statefulset-92ngr
... skipping 1242 lines ...
Jul  5 16:57:43.019: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jul  5 16:57:43.019: INFO: Running '/root/.kubetest/kind/kubectl --server=https://localhost:42167 --kubeconfig=/root/.kube/kind-config-kind-kubetest describe pod redis-master-fvznv --namespace=e2e-tests-kubectl-sfpb6'
Jul  5 16:57:43.132: INFO: stderr: ""
Jul  5 16:57:43.132: INFO: stdout: "Name:               redis-master-fvznv\nNamespace:          e2e-tests-kubectl-sfpb6\nPriority:           0\nPriorityClassName:  <none>\nNode:               kind-kubetest-worker2/172.17.0.3\nStart Time:         Fri, 05 Jul 2019 16:57:39 +0000\nLabels:             app=redis\n                    role=master\nAnnotations:        <none>\nStatus:             Running\nIP:                 10.32.0.6\nControlled By:      ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://56699ab66acae6a6e1e190c45afcbe3057041eacc15d1042829a241de8314f8a\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Fri, 05 Jul 2019 16:57:41 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xtkt8 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-xtkt8:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-xtkt8\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                            Message\n  ----    ------     ----  ----                            -------\n  Normal  Scheduled  4s    default-scheduler               Successfully assigned e2e-tests-kubectl-sfpb6/redis-master-fvznv to kind-kubetest-worker2\n  Normal  Pulling    2s    kubelet, kind-kubetest-worker2  pulling image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\"\n  Normal  Pulled     2s    kubelet, kind-kubetest-worker2  Successfully pulled image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\"\n  Normal  Created    2s    kubelet, kind-kubetest-worker2  Created container\n  Normal  Started    2s    kubelet, kind-kubetest-worker2  Started container\n"
Jul  5 16:57:43.132: INFO: Running '/root/.kubetest/kind/kubectl --server=https://localhost:42167 --kubeconfig=/root/.kube/kind-config-kind-kubetest describe rc redis-master --namespace=e2e-tests-kubectl-sfpb6'
Jul  5 16:57:43.266: INFO: stderr: ""
Jul  5 16:57:43.266: INFO: stdout: "Name:         redis-master\nNamespace:    e2e-tests-kubectl-sfpb6\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  4s    replication-controller  Created pod: redis-master-fvznv\n"
Jul  5 16:57:43.266: INFO: Running '/root/.kubetest/kind/kubectl --server=https://localhost:42167 --kubeconfig=/root/.kube/kind-config-kind-kubetest describe service redis-master --namespace=e2e-tests-kubectl-sfpb6'
Jul  5 16:57:43.402: INFO: stderr: ""
Jul  5 16:57:43.402: INFO: stdout: "Name:              redis-master\nNamespace:         e2e-tests-kubectl-sfpb6\nLabels:            app=redis\n                   role=master\nAnnotations:       <none>\nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.101.40.74\nPort:              <unset>  6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.32.0.6:6379\nSession Affinity:  None\nEvents:            <none>\n"
Jul  5 16:57:43.406: INFO: Running '/root/.kubetest/kind/kubectl --server=https://localhost:42167 --kubeconfig=/root/.kube/kind-config-kind-kubetest describe node kind-kubetest-control-plane'
Jul  5 16:57:43.554: INFO: stderr: ""
Jul  5 16:57:43.554: INFO: stdout: "Name:               kind-kubetest-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/hostname=kind-kubetest-control-plane\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Fri, 05 Jul 2019 16:55:05 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Fri, 05 Jul 2019 16:55:52 +0000   Fri, 05 Jul 2019 16:55:52 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Fri, 05 Jul 2019 16:57:35 +0000   Fri, 05 Jul 2019 16:55:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Fri, 05 Jul 2019 16:57:35 +0000   Fri, 05 Jul 2019 16:55:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Fri, 05 Jul 2019 16:57:35 +0000   Fri, 05 Jul 2019 16:55:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Fri, 05 Jul 2019 16:57:35 +0000   Fri, 05 Jul 2019 16:55:55 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.17.0.2\n  Hostname:    kind-kubetest-control-plane\nCapacity:\n cpu:                8\n ephemeral-storage:  253696108Ki\n hugepages-2Mi:      0\n memory:             53588960Ki\n pods:               110\nAllocatable:\n cpu:                8\n ephemeral-storage:  253696108Ki\n hugepages-2Mi:      0\n memory:             53588960Ki\n pods:               110\nSystem Info:\n Machine ID:                 c4c9bb0d92b9418aa96a73237e2e173b\n System UUID:                E4715DD0-F66A-27EF-B12E-3715743F2273\n Boot ID:                    87175292-4da9-4045-a4a9-f31a34da368d\n Kernel Version:             4.14.127+\n OS Image:                   Ubuntu 18.04.1 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.6.3\n Kubelet Version:            v1.13.8-beta.0.35+0c6d31a99f8147\n Kube-Proxy Version:         v1.13.8-beta.0.35+0c6d31a99f8147\nNon-terminated Pods:         (8 in total)\n  Namespace                  Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                                                   ------------  ----------  ---------------  -------------  ---\n  kube-system                coredns-54ff9cd656-cs89m                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m28s\n  kube-system                coredns-54ff9cd656-llwr5                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m28s\n  kube-system                etcd-kind-kubetest-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s\n  kube-system                
kube-apiserver-kind-kubetest-control-plane             250m (3%)     0 (0%)      0 (0%)           0 (0%)         87s\n  kube-system                kube-controller-manager-kind-kubetest-control-plane    200m (2%)     0 (0%)      0 (0%)           0 (0%)         90s\n  kube-system                kube-proxy-2pb9m                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s\n  kube-system                kube-scheduler-kind-kubetest-control-plane             100m (1%)     0 (0%)      0 (0%)           0 (0%)         76s\n  kube-system                weave-net-lsx7n                                        20m (0%)      0 (0%)      0 (0%)           0 (0%)         2m28s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                770m (9%)   0 (0%)\n  memory             140Mi (0%)  340Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\nEvents:\n  Type     Reason                    Age                    From                                     Message\n  ----     ------                    ----                   ----                                     -------\n  Normal   Starting                  2m56s                  kubelet, kind-kubetest-control-plane     Starting kubelet.\n  Normal   NodeAllocatableEnforced   2m56s                  kubelet, kind-kubetest-control-plane     Updated Node Allocatable limit across pods\n  Normal   NodeHasSufficientMemory   2m55s (x7 over 2m56s)  kubelet, kind-kubetest-control-plane     Node kind-kubetest-control-plane status is now: NodeHasSufficientMemory\n  Normal   NodeHasNoDiskPressure     2m55s (x7 over 2m56s)  kubelet, kind-kubetest-control-plane     Node kind-kubetest-control-plane status is now: NodeHasNoDiskPressure\n  Normal   NodeHasSufficientPID      2m55s (x7 over 2m56s)  kubelet, kind-kubetest-control-plane     Node kind-kubetest-control-plane status is now: NodeHasSufficientPID\n  Warning  CheckLimitsForResolvConf  2m55s (x2 over 2m56s)  kubelet, kind-kubetest-control-plane     Resolv.conf file '/etc/resolv.conf' contains search line consisting of more than 3 domains!\n  Normal   Starting                  2m27s                  kube-proxy, kind-kubetest-control-plane  Starting kube-proxy.\n"
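describe node flattens conditions, capacity/allocatable, and the non-terminated pod table into one report; the same figures are available as structured fields when a script needs them. For example (standard Node API paths, node name from this run):

    # Ready status plus allocatable CPU and memory, with no describe parsing.
    kubectl get node kind-kubetest-control-plane -o \
      jsonpath='{.status.conditions[?(@.type=="Ready")].status} {.status.allocatable.cpu} {.status.allocatable.memory}'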
... skipping 428 lines ...
STEP: Creating a kubernetes client
Jul  5 16:58:05.894: INFO: >>> kubeConfig: /root/.kube/kind-config-kind-kubetest
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:699
STEP: creating the pod
Jul  5 16:58:06.019: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:154
Jul  5 16:58:12.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 3 lines ...
Jul  5 16:58:21.088: INFO: namespace e2e-tests-init-container-9r9rp deletion completed in 8.145568478s


• [SLOW TEST:15.195 seconds]
[k8s.io] InitContainer [NodeConformance]
test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:699
------------------------------
[BeforeEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  5 16:57:54.599: INFO: >>> kubeConfig: /root/.kube/kind-config-kind-kubetest
... skipping 734 lines ...
[BeforeEach] [sig-network] Services
  test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-ncvc6
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-ncvc6 to expose endpoints map[]
Jul  5 16:58:11.603: INFO: Get endpoints failed (10.907523ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jul  5 16:58:12.607: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-ncvc6 exposes endpoints map[] (1.014486621s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-ncvc6
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-ncvc6 to expose endpoints map[pod1:[80]]
Jul  5 16:58:16.722: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.108011815s elapsed, will retry)
Jul  5 16:58:19.954: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-ncvc6 exposes endpoints map[pod1:[80]] (7.340351336s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-ncvc6
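Each of these waits reduces to watching the service's Endpoints object until its ready addresses match the expected pod-to-port map, which is why the first check passes with an empty map before any pod exists. The manual equivalent for this service (names from this run):

    # Ready backend IPs and ports currently behind endpoint-test2.
    kubectl -n e2e-tests-services-ncvc6 get endpoints endpoint-test2 \
      -o jsonpath='{.subsets[*].addresses[*].ip} {.subsets[*].ports[*].port}'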
... skipping 369 lines ...
Jul  5 16:58:46.918: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-hs9jz.svc from pod e2e-tests-dns-hs9jz/dns-test-24daf086-9f46-11e9-9cc9-32ed06d97aec: the server could not find the requested resource (get pods dns-test-24daf086-9f46-11e9-9cc9-32ed06d97aec)
Jul  5 16:58:46.922: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-hs9jz.svc from pod e2e-tests-dns-hs9jz/dns-test-24daf086-9f46-11e9-9cc9-32ed06d97aec: the server could not find the requested resource (get pods dns-test-24daf086-9f46-11e9-9cc9-32ed06d97aec)
Jul  5 16:58:46.932: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-hs9jz/dns-test-24daf086-9f46-11e9-9cc9-32ed06d97aec: the server could not find the requested resource (get pods dns-test-24daf086-9f46-11e9-9cc9-32ed06d97aec)
Jul  5 16:58:46.935: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-hs9jz/dns-test-24daf086-9f46-11e9-9cc9-32ed06d97aec: the server could not find the requested resource (get pods dns-test-24daf086-9f46-11e9-9cc9-32ed06d97aec)
Jul  5 16:58:46.939: INFO: Unable to read 10.101.240.205_udp@PTR from pod e2e-tests-dns-hs9jz/dns-test-24daf086-9f46-11e9-9cc9-32ed06d97aec: the server could not find the requested resource (get pods dns-test-24daf086-9f46-11e9-9cc9-32ed06d97aec)
Jul  5 16:58:46.947: INFO: Unable to read 10.101.240.205_tcp@PTR from pod e2e-tests-dns-hs9jz/dns-test-24daf086-9f46-11e9-9cc9-32ed06d97aec: the server could not find the requested resource (get pods dns-test-24daf086-9f46-11e9-9cc9-32ed06d97aec)
Jul  5 16:58:46.947: INFO: Lookups using e2e-tests-dns-hs9jz/dns-test-24daf086-9f46-11e9-9cc9-32ed06d97aec failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-hs9jz wheezy_tcp@dns-test-service.e2e-tests-dns-hs9jz wheezy_udp@dns-test-service.e2e-tests-dns-hs9jz.svc wheezy_tcp@dns-test-service.e2e-tests-dns-hs9jz.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-hs9jz.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-hs9jz.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-hs9jz.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-hs9jz.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.101.240.205_udp@PTR 10.101.240.205_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-hs9jz jessie_tcp@dns-test-service.e2e-tests-dns-hs9jz jessie_udp@dns-test-service.e2e-tests-dns-hs9jz.svc jessie_tcp@dns-test-service.e2e-tests-dns-hs9jz.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-hs9jz.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-hs9jz.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-hs9jz.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-hs9jz.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.101.240.205_udp@PTR 10.101.240.205_tcp@PTR]

Jul  5 16:58:52.083: INFO: DNS probes using e2e-tests-dns-hs9jz/dns-test-24daf086-9f46-11e9-9cc9-32ed06d97aec succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 1886 lines ...
Jul  5 16:59:12.077: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Jul  5 16:59:12.077: INFO: Waiting for all frontend pods to be Running.
Jul  5 16:59:42.130: INFO: Waiting for frontend to serve content.
Jul  5 16:59:45.562: INFO: Trying to add a new entry to the guestbook.
Jul  5 16:59:45.586: INFO: Verifying that added entry can be retrieved.
Jul  5 16:59:45.602: INFO: Failed to get response from guestbook. err: <nil>, response: {"data": ""}
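The guestbook check drives the frontend's PHP API: a set request writes an entry, a get request reads it back, and an empty {"data": ""} body like the one above means the write has not yet propagated through the redis master/slave chain, so the suite retries for a few seconds before passing. A manual probe of the same flow, assuming the standard e2e guestbook frontend service (namespace from this run):

    # Write an entry, then read it back through the frontend Service name.
    kubectl -n e2e-tests-kubectl-2nqjs run gb-probe --image=busybox:1.29 --restart=Never --rm -i -- \
      wget -qO- 'http://frontend/guestbook.php?cmd=set&key=messages&value=hello'
    kubectl -n e2e-tests-kubectl-2nqjs run gb-probe --image=busybox:1.29 --restart=Never --rm -i -- \
      wget -qO- 'http://frontend/guestbook.php?cmd=get&key=messages'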
STEP: using delete to clean up resources
Jul  5 16:59:50.627: INFO: Running '/root/.kubetest/kind/kubectl --server=https://localhost:42167 --kubeconfig=/root/.kube/kind-config-kind-kubetest delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-2nqjs'
Jul  5 16:59:50.750: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  5 16:59:50.750: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jul  5 16:59:50.751: INFO: Running '/root/.kubetest/kind/kubectl --server=https://localhost:42167 --kubeconfig=/root/.kube/kind-config-kind-kubetest delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-2nqjs'
... skipping 408 lines ...
Jul  5 16:59:23.965: INFO: Running AfterSuite actions on all nodes
Jul  5 17:02:23.427: INFO: Running AfterSuite actions on node 1
Jul  5 17:02:23.427: INFO: Skipping dumping logs from cluster


Ran 190 of 2162 Specs in 379.269 seconds
SUCCESS! -- 190 Passed | 0 Failed | 0 Pending | 1972 Skipped 

Ginkgo ran 1 suite in 6m24.535527572s
Test Suite Passed
2019/07/05 17:02:23 process.go:155: Step './hack/ginkgo-e2e.sh --ginkgo.focus=\[Conformance\] --ginkgo.skip=\[Serial\] --num-nodes=3 --report-dir=/logs/artifacts --disable-log-dump=true' finished in 6m26.505451656s
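The conformance pass is a ginkgo focus/skip selection over the full 2162-spec suite, so the same slice can be replayed against any running cluster from a kubernetes checkout with the e2e binaries built (KUBECONFIG must point at the target cluster):

    # Same regexes kubetest passed above; everything else is left at defaults.
    ./hack/ginkgo-e2e.sh '--ginkgo.focus=\[Conformance\]' '--ginkgo.skip=\[Serial\]'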
2019/07/05 17:02:23 kind.go:369: kind.go:DumpClusterLogs()
2019/07/05 17:02:23 process.go:153: Running: kind export logs /logs/artifacts --loglevel=debug --name=kind-kubetest
... skipping 67 lines ...
time="17:02:25" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t kind-kubetest-worker2 cat /var/log/containers/weave-net-m4cwj_kube-system_weave-31404ae37913ffd02443f1bc6da5ac0368304b1af8d63036bddf8c7751199141.log]"
time="17:02:25" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t kind-kubetest-worker2 cat /var/log/containers/liveness-http_e2e-tests-container-probe-xn4h2_liveness-95b02284f04f8ace2fc99fc7599af8a7c3b25e947ede561e7e3344047e87635c.log]"
time="17:02:25" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t kind-kubetest-worker2 cat /var/log/pods/ab7e4e5f-9f45-11e9-81ec-024230ce147e/weave/1.log]"
time="17:02:25" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t kind-kubetest-worker2 cat /var/log/pods/ab7e4e5f-9f45-11e9-81ec-024230ce147e/weave-npc/0.log]"
time="17:02:25" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t kind-kubetest-worker2 cat /var/log/pods/ab7e4e5f-9f45-11e9-81ec-024230ce147e/weave/0.log]"
time="17:02:25" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t kind-kubetest-worker2 cat /var/log/pods/12ca6b07-9f46-11e9-81ec-024230ce147e/liveness/0.log]"
Error: exit status 1
exit status 1

2019/07/05 17:02:30 process.go:155: Step 'kind export logs /logs/artifacts --loglevel=debug --name=kind-kubetest' finished in 7.017474197s
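kind export logs shells into every node container and copies out kubelet, container, and pod logs, and here a single failing collection command (one of the docker exec ... cat calls above) was enough to make the whole step exit 1, the only failure this run reports. While the cluster is still up, the export can be retried by hand or individual logs pulled directly (node name from this run):

    # Re-run the full export, or grab one node's kubelet journal directly.
    kind export logs /tmp/kind-logs --loglevel=debug --name=kind-kubetest
    docker exec kind-kubetest-worker2 journalctl -u kubelet --no-pager | tail -n 50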
2019/07/05 17:02:30 kind.go:432: kind.go:Down()
2019/07/05 17:02:30 kind.go:411: kind.go:clusterExists()
2019/07/05 17:02:30 process.go:153: Running: kind get clusters
... skipping 7 lines ...
$KUBECONFIG is still set to use /root/.kube/kind-config-kind-kubetest even though that file has been deleted, remember to unset it
time="17:02:30" level=debug msg="Running: /usr/bin/docker [docker rm -f -v kind-kubetest-worker kind-kubetest-worker3 kind-kubetest-worker2 kind-kubetest-control-plane]"
2019/07/05 17:02:36 process.go:155: Step 'kind delete cluster --loglevel=debug --name=kind-kubetest' finished in 6.285972955s
2019/07/05 17:02:36 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2019/07/05 17:02:36 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"
2019/07/05 17:02:37 process.go:155: Step 'bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"' finished in 1.021876497s
2019/07/05 17:02:37 main.go:316: Something went wrong: encountered 1 errors: [error during kind export logs /logs/artifacts --loglevel=debug --name=kind-kubetest: exit status 1]
+ EXIT_VALUE=1
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
[Barnacle] 2019/07/05 17:02:37 Cleaning up Docker data root...
[Barnacle] 2019/07/05 17:02:37 Removing all containers.
... skipping 28 lines ...