PR: mgdevstack: Promote e2e "verifying service's sessionAffinity for ClusterIP and NodePort services" to Conformance
Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2019-08-22 15:10
Elapsed: 1h6m
Revision:
Builder: gke-prow-ssd-pool-1a225945-c9hz
pod: e8cafb57-c4ee-11e9-aa27-62c9a4cbf4a1
infra-commit: 7fa9a9c37
repo: k8s.io/test-infra
repo-commit: 7fa9a9c372cfaf3b0e05396a5459326047d4bf46
repos: {u'k8s.io/kubernetes': u'master:d54c5163e041030bd3bc63dcab8876d5f8c51983,76443:fc84ff19464f8fb45653d491acb2e10db0dbacf9', u'k8s.io/test-infra': u'master'}

No Test Failures!


Error lines from build-log.txt

... skipping 661 lines ...
I0822 15:17:07.893] time="15:17:07" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-control-plane cat /kind/version]"
I0822 15:17:08.312] time="15:17:08" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kind-control-plane]"
I0822 15:17:08.400] time="15:17:08" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kind-worker]"
I0822 15:17:08.400] time="15:17:08" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kind-control-plane]"
I0822 15:17:08.401] time="15:17:08" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kind-worker2]"
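The inspect calls above use a Go template to read each node container's bridge IPs. A minimal sketch of the same lookup, runnable by hand against any of the node containers named in this log:

  # Print the IPv4,IPv6 address pair for a kind node container, as kind does above.
  docker inspect -f \
    '{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}' \
    kind-control-plane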
I0822 15:17:08.492] time="15:17:08" level=debug msg="Configuration Input data: {kind v1.17.0-alpha.0.437+5bb1de1dcee5d6 172.17.0.2:6443 6443 127.0.0.1 false 172.17.0.3 abcdef.0123456789abcdef 10.244.0.0/16 10.96.0.0/12 false {}}"
I0822 15:17:08.496] time="15:17:08" level=debug msg="Configuration generated:\n # config generated by kind\napiVersion: kubeadm.k8s.io/v1beta2\nkind: ClusterConfiguration\nmetadata:\n  name: config\nkubernetesVersion: v1.17.0-alpha.0.437+5bb1de1dcee5d6\nclusterName: \"kind\"\ncontrolPlaneEndpoint: \"172.17.0.2:6443\"\n# on docker for mac we have to expose the api server via port forward,\n# so we need to ensure the cert is valid for localhost so we can talk\n# to the cluster after rewriting the kubeconfig to point to localhost\napiServer:\n  certSANs: [localhost, \"127.0.0.1\"]\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\n    # configure ipv6 default addresses for IPv6 clusters\n    \nscheduler:\n  extraArgs:\n    # configure ipv6 default addresses for IPv6 clusters\n    \nnetworking:\n  podSubnet: \"10.244.0.0/16\"\n  serviceSubnet: \"10.96.0.0/12\"\n---\napiVersion: kubeadm.k8s.io/v1beta2\nkind: InitConfiguration\nmetadata:\n  name: config\n# we use a well know token for TLS bootstrap\nbootstrapTokens:\n- token: \"abcdef.0123456789abcdef\"\n# we use a well know port for making the API server discoverable inside docker network. \n# from the host machine such port will be accessible via a random local port instead.\nlocalAPIEndpoint:\n  advertiseAddress: \"172.17.0.3\"\n  bindPort: 6443\nnodeRegistration:\n  criSocket: \"/run/containerd/containerd.sock\"\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: \"172.17.0.3\"\n---\n# no-op entry that exists solely so it can be patched\napiVersion: kubeadm.k8s.io/v1beta2\nkind: JoinConfiguration\nmetadata:\n  name: config\n\nnodeRegistration:\n  criSocket: \"/run/containerd/containerd.sock\"\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: \"172.17.0.3\"\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: \"172.17.0.2:6443\"\n    token: \"abcdef.0123456789abcdef\"\n    unsafeSkipCAVerification: true\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nmetadata:\n  name: config\n# configure ipv6 addresses in IPv6 mode\n\n# disable disk resource management by default\n# kubelet will see the host disk that the inner container runtime\n# is ultimately backed by and attempt to recover disk space. we don't want that.\nimageGCHighThresholdPercent: 100\nevictionHard:\n  nodefs.available: \"0%\"\n  nodefs.inodesFree: \"0%\"\n  imagefs.available: \"0%\"\n---\n# no-op entry that exists solely so it can be patched\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\nmetadata:\n  name: config\n"
I0822 15:17:08.498] time="15:17:08" level=debug msg="Configuration Input data: {kind v1.17.0-alpha.0.437+5bb1de1dcee5d6 172.17.0.2:6443 6443 127.0.0.1 true 172.17.0.2 abcdef.0123456789abcdef 10.244.0.0/16 10.96.0.0/12 false {}}"
I0822 15:17:08.501] time="15:17:08" level=debug msg="Configuration generated:\n # config generated by kind\napiVersion: kubeadm.k8s.io/v1beta2\nkind: ClusterConfiguration\nmetadata:\n  name: config\nkubernetesVersion: v1.17.0-alpha.0.437+5bb1de1dcee5d6\nclusterName: \"kind\"\ncontrolPlaneEndpoint: \"172.17.0.2:6443\"\n# on docker for mac we have to expose the api server via port forward,\n# so we need to ensure the cert is valid for localhost so we can talk\n# to the cluster after rewriting the kubeconfig to point to localhost\napiServer:\n  certSANs: [localhost, \"127.0.0.1\"]\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\n    # configure ipv6 default addresses for IPv6 clusters\n    \nscheduler:\n  extraArgs:\n    # configure ipv6 default addresses for IPv6 clusters\n    \nnetworking:\n  podSubnet: \"10.244.0.0/16\"\n  serviceSubnet: \"10.96.0.0/12\"\n---\napiVersion: kubeadm.k8s.io/v1beta2\nkind: InitConfiguration\nmetadata:\n  name: config\n# we use a well know token for TLS bootstrap\nbootstrapTokens:\n- token: \"abcdef.0123456789abcdef\"\n# we use a well know port for making the API server discoverable inside docker network. \n# from the host machine such port will be accessible via a random local port instead.\nlocalAPIEndpoint:\n  advertiseAddress: \"172.17.0.2\"\n  bindPort: 6443\nnodeRegistration:\n  criSocket: \"/run/containerd/containerd.sock\"\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: \"172.17.0.2\"\n---\n# no-op entry that exists solely so it can be patched\napiVersion: kubeadm.k8s.io/v1beta2\nkind: JoinConfiguration\nmetadata:\n  name: config\ncontrolPlane:\n  localAPIEndpoint:\n    advertiseAddress: \"172.17.0.2\"\n    bindPort: 6443\nnodeRegistration:\n  criSocket: \"/run/containerd/containerd.sock\"\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: \"172.17.0.2\"\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: \"172.17.0.2:6443\"\n    token: \"abcdef.0123456789abcdef\"\n    unsafeSkipCAVerification: true\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nmetadata:\n  name: config\n# configure ipv6 addresses in IPv6 mode\n\n# disable disk resource management by default\n# kubelet will see the host disk that the inner container runtime\n# is ultimately backed by and attempt to recover disk space. we don't want that.\nimageGCHighThresholdPercent: 100\nevictionHard:\n  nodefs.available: \"0%\"\n  nodefs.inodesFree: \"0%\"\n  imagefs.available: \"0%\"\n---\n# no-op entry that exists solely so it can be patched\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\nmetadata:\n  name: config\n"
I0822 15:17:08.514] time="15:17:08" level=debug msg="Configuration Input data: {kind v1.17.0-alpha.0.437+5bb1de1dcee5d6 172.17.0.2:6443 6443 127.0.0.1 false 172.17.0.4 abcdef.0123456789abcdef 10.244.0.0/16 10.96.0.0/12 false {}}"
I0822 15:17:08.517] time="15:17:08" level=debug msg="Configuration generated:\n # config generated by kind\napiVersion: kubeadm.k8s.io/v1beta2\nkind: ClusterConfiguration\nmetadata:\n  name: config\nkubernetesVersion: v1.17.0-alpha.0.437+5bb1de1dcee5d6\nclusterName: \"kind\"\ncontrolPlaneEndpoint: \"172.17.0.2:6443\"\n# on docker for mac we have to expose the api server via port forward,\n# so we need to ensure the cert is valid for localhost so we can talk\n# to the cluster after rewriting the kubeconfig to point to localhost\napiServer:\n  certSANs: [localhost, \"127.0.0.1\"]\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\n    # configure ipv6 default addresses for IPv6 clusters\n    \nscheduler:\n  extraArgs:\n    # configure ipv6 default addresses for IPv6 clusters\n    \nnetworking:\n  podSubnet: \"10.244.0.0/16\"\n  serviceSubnet: \"10.96.0.0/12\"\n---\napiVersion: kubeadm.k8s.io/v1beta2\nkind: InitConfiguration\nmetadata:\n  name: config\n# we use a well know token for TLS bootstrap\nbootstrapTokens:\n- token: \"abcdef.0123456789abcdef\"\n# we use a well know port for making the API server discoverable inside docker network. \n# from the host machine such port will be accessible via a random local port instead.\nlocalAPIEndpoint:\n  advertiseAddress: \"172.17.0.4\"\n  bindPort: 6443\nnodeRegistration:\n  criSocket: \"/run/containerd/containerd.sock\"\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: \"172.17.0.4\"\n---\n# no-op entry that exists solely so it can be patched\napiVersion: kubeadm.k8s.io/v1beta2\nkind: JoinConfiguration\nmetadata:\n  name: config\n\nnodeRegistration:\n  criSocket: \"/run/containerd/containerd.sock\"\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: \"172.17.0.4\"\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: \"172.17.0.2:6443\"\n    token: \"abcdef.0123456789abcdef\"\n    unsafeSkipCAVerification: true\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nmetadata:\n  name: config\n# configure ipv6 addresses in IPv6 mode\n\n# disable disk resource management by default\n# kubelet will see the host disk that the inner container runtime\n# is ultimately backed by and attempt to recover disk space. we don't want that.\nimageGCHighThresholdPercent: 100\nevictionHard:\n  nodefs.available: \"0%\"\n  nodefs.inodesFree: \"0%\"\n  imagefs.available: \"0%\"\n---\n# no-op entry that exists solely so it can be patched\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\nmetadata:\n  name: config\n"
I0822 15:17:08.533] time="15:17:08" level=debug msg="Using kubeadm config:\napiServer:\n  certSANs:\n  - localhost\n  - 127.0.0.1\napiVersion: kubeadm.k8s.io/v1beta2\nclusterName: kind\ncontrolPlaneEndpoint: 172.17.0.2:6443\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\nkind: ClusterConfiguration\nkubernetesVersion: v1.17.0-alpha.0.437+5bb1de1dcee5d6\nnetworking:\n  podSubnet: 10.244.0.0/16\n  serviceSubnet: 10.96.0.0/12\nscheduler:\n  extraArgs: null\n---\napiVersion: kubeadm.k8s.io/v1beta2\nbootstrapTokens:\n- token: abcdef.0123456789abcdef\nkind: InitConfiguration\nlocalAPIEndpoint:\n  advertiseAddress: 172.17.0.4\n  bindPort: 6443\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.4\n---\napiVersion: kubeadm.k8s.io/v1beta2\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.2:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.4\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nevictionHard:\n  imagefs.available: 0%\n  nodefs.available: 0%\n  nodefs.inodesFree: 0%\nimageGCHighThresholdPercent: 100\nkind: KubeletConfiguration\n---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
I0822 15:17:08.534] time="15:17:08" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-worker mkdir -p /kind]"
I0822 15:17:08.535] time="15:17:08" level=debug msg="Using kubeadm config:\napiServer:\n  certSANs:\n  - localhost\n  - 127.0.0.1\napiVersion: kubeadm.k8s.io/v1beta2\nclusterName: kind\ncontrolPlaneEndpoint: 172.17.0.2:6443\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\nkind: ClusterConfiguration\nkubernetesVersion: v1.17.0-alpha.0.437+5bb1de1dcee5d6\nnetworking:\n  podSubnet: 10.244.0.0/16\n  serviceSubnet: 10.96.0.0/12\nscheduler:\n  extraArgs: null\n---\napiVersion: kubeadm.k8s.io/v1beta2\nbootstrapTokens:\n- token: abcdef.0123456789abcdef\nkind: InitConfiguration\nlocalAPIEndpoint:\n  advertiseAddress: 172.17.0.2\n  bindPort: 6443\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.2\n---\napiVersion: kubeadm.k8s.io/v1beta2\ncontrolPlane:\n  localAPIEndpoint:\n    advertiseAddress: 172.17.0.2\n    bindPort: 6443\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.2:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.2\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nevictionHard:\n  imagefs.available: 0%\n  nodefs.available: 0%\n  nodefs.inodesFree: 0%\nimageGCHighThresholdPercent: 100\nkind: KubeletConfiguration\n---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
I0822 15:17:08.535] time="15:17:08" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-control-plane mkdir -p /kind]"
I0822 15:17:08.537] time="15:17:08" level=debug msg="Using kubeadm config:\napiServer:\n  certSANs:\n  - localhost\n  - 127.0.0.1\napiVersion: kubeadm.k8s.io/v1beta2\nclusterName: kind\ncontrolPlaneEndpoint: 172.17.0.2:6443\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\nkind: ClusterConfiguration\nkubernetesVersion: v1.17.0-alpha.0.437+5bb1de1dcee5d6\nnetworking:\n  podSubnet: 10.244.0.0/16\n  serviceSubnet: 10.96.0.0/12\nscheduler:\n  extraArgs: null\n---\napiVersion: kubeadm.k8s.io/v1beta2\nbootstrapTokens:\n- token: abcdef.0123456789abcdef\nkind: InitConfiguration\nlocalAPIEndpoint:\n  advertiseAddress: 172.17.0.3\n  bindPort: 6443\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.3\n---\napiVersion: kubeadm.k8s.io/v1beta2\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.2:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.3\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nevictionHard:\n  imagefs.available: 0%\n  nodefs.available: 0%\n  nodefs.inodesFree: 0%\nimageGCHighThresholdPercent: 100\nkind: KubeletConfiguration\n---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
I0822 15:17:08.537] time="15:17:08" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-worker2 mkdir -p /kind]"
I0822 15:17:08.890] time="15:17:08" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -i kind-worker2 cp /dev/stdin /kind/kubeadm.conf]"
I0822 15:17:08.933] time="15:17:08" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -i kind-worker cp /dev/stdin /kind/kubeadm.conf]"
I0822 15:17:08.945] time="15:17:08" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -i kind-control-plane cp /dev/stdin /kind/kubeadm.conf]"
I0822 15:17:09.422]  ✓ Creating kubeadm config 📜
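kind renders a per-node kubeadm.conf locally and streams it into each node over docker exec, which is what the three cp /dev/stdin calls above do. A sketch of that step, assuming a rendered kubeadm.conf in the current directory:

  # Stream a locally rendered config into the node's /kind/kubeadm.conf.
  docker exec --privileged -i kind-control-plane \
    cp /dev/stdin /kind/kubeadm.conf < kubeadm.conf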
I0822 15:17:09.423]  â€ĸ Starting control-plane 🕹ī¸  ...
I0822 15:17:09.423] time="15:17:09" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-control-plane kubeadm init --ignore-preflight-errors=all --config=/kind/kubeadm.conf --skip-token-print --v=6]"
I0822 15:17:43.101] time="15:17:43" level=debug msg="I0822 15:17:10.078946      81 initconfiguration.go:186] loading configuration from \"/kind/kubeadm.conf\"\n[config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta2, Kind=JoinConfiguration\nI0822 15:17:10.089277      81 feature_gate.go:216] feature gates: &{map[]}\n[init] Using Kubernetes version: v1.17.0-alpha.0.437+5bb1de1dcee5d6\n[preflight] Running pre-flight checks\nI0822 15:17:10.092527      81 checks.go:576] validating Kubernetes and kubeadm version\nI0822 15:17:10.092572      81 checks.go:168] validating if the firewall is enabled and active\nI0822 15:17:10.122393      81 checks.go:203] validating availability of port 6443\nI0822 15:17:10.122589      81 checks.go:203] validating availability of port 10251\nI0822 15:17:10.122617      81 checks.go:203] validating availability of port 10252\nI0822 15:17:10.122647      81 checks.go:288] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml\nI0822 15:17:10.122659      81 checks.go:288] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml\nI0822 15:17:10.122667      81 checks.go:288] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml\nI0822 15:17:10.122674      81 checks.go:288] validating the existence of file /etc/kubernetes/manifests/etcd.yaml\nI0822 15:17:10.122683      81 checks.go:434] validating if the connectivity type is via proxy or direct\nI0822 15:17:10.124316      81 checks.go:470] validating http connectivity to first IP address in the CIDR\nI0822 15:17:10.124365      81 checks.go:470] validating http connectivity to first IP address in the CIDR\nI0822 15:17:10.124377      81 checks.go:104] validating the container runtime\nI0822 15:17:10.278901      81 checks.go:378] validating the presence of executable crictl\nI0822 15:17:10.280105      81 checks.go:337] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables\n\t[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist\nI0822 15:17:10.280215      81 checks.go:337] validating the contents of file /proc/sys/net/ipv4/ip_forward\nI0822 15:17:10.280315      81 checks.go:648] validating whether swap is enabled or not\nI0822 15:17:10.280361      81 checks.go:378] validating the presence of executable ip\nI0822 15:17:10.280467      81 checks.go:378] validating the presence of executable iptables\nI0822 15:17:10.280540      81 checks.go:378] validating the presence of executable mount\nI0822 15:17:10.280564      81 checks.go:378] validating the presence of executable nsenter\nI0822 15:17:10.280631      81 checks.go:378] validating the presence of executable ebtables\nI0822 15:17:10.280717      81 checks.go:378] validating the presence of executable ethtool\nI0822 15:17:10.280752      81 checks.go:378] validating the presence of executable socat\nI0822 15:17:10.280795      81 checks.go:378] validating the presence of executable tc\nI0822 15:17:10.280828      81 checks.go:378] validating the presence of executable touch\nI0822 15:17:10.280888      81 checks.go:519] running all checks\nI0822 15:17:10.298687      81 checks.go:408] checking whether the given node name is reachable using net.LookupHost\nI0822 15:17:10.300287      81 checks.go:617] validating kubelet version\nI0822 15:17:10.430459      81 checks.go:130] validating if the service is enabled and active\nI0822 15:17:10.449368      81 checks.go:203] validating 
availability of port 10250\nI0822 15:17:10.449493      81 checks.go:203] validating availability of port 2379\nI0822 15:17:10.449532      81 checks.go:203] validating availability of port 2380\nI0822 15:17:10.449569      81 checks.go:251] validating the existence and emptiness of directory /var/lib/etcd\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'\nI0822 15:17:10.469388      81 checks.go:837] image exists: k8s.gcr.io/kube-apiserver:v1.17.0-alpha.0.437_5bb1de1dcee5d6\nI0822 15:17:10.489841      81 checks.go:837] image exists: k8s.gcr.io/kube-controller-manager:v1.17.0-alpha.0.437_5bb1de1dcee5d6\nI0822 15:17:10.502612      81 checks.go:837] image exists: k8s.gcr.io/kube-scheduler:v1.17.0-alpha.0.437_5bb1de1dcee5d6\nI0822 15:17:10.518631      81 checks.go:837] image exists: k8s.gcr.io/kube-proxy:v1.17.0-alpha.0.437_5bb1de1dcee5d6\nI0822 15:17:10.531336      81 checks.go:843] pulling k8s.gcr.io/pause:3.1\nI0822 15:17:11.209912      81 checks.go:843] pulling k8s.gcr.io/etcd:3.3.10\nI0822 15:17:19.582280      81 checks.go:843] pulling k8s.gcr.io/coredns:1.5.0\nI0822 15:17:21.296991      81 kubelet.go:61] Stopping the kubelet\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\nI0822 15:17:21.344170      81 kubelet.go:79] Starting the kubelet\n[kubelet-start] Activating the kubelet service\nI0822 15:17:21.440317      81 certs.go:104] creating a new certificate authority for ca\n[certs] Using certificateDir folder \"/etc/kubernetes/pki\"\n[certs] Generating \"ca\" certificate and key\n[certs] Generating \"apiserver\" certificate and key\n[certs] apiserver serving cert is signed for DNS names [kind-control-plane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 172.17.0.2 172.17.0.2 127.0.0.1]\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\nI0822 15:17:22.475216      81 certs.go:104] creating a new certificate authority for front-proxy-ca\n[certs] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\nI0822 15:17:23.398054      81 certs.go:104] creating a new certificate authority for etcd-ca\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.17.0.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.17.0.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"apiserver-etcd-client\" certificate and key\nI0822 15:17:25.408609      81 certs.go:70] creating a new public/private key files for signing service account users\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\nI0822 15:17:25.959711      81 kubeconfig.go:79] creating kubeconfig file for admin.conf\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\nI0822 15:17:26.345097      81 kubeconfig.go:79] creating kubeconfig file for kubelet.conf\n[kubeconfig] Writing 
\"kubelet.conf\" kubeconfig file\nI0822 15:17:26.756689      81 kubeconfig.go:79] creating kubeconfig file for controller-manager.conf\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\nI0822 15:17:27.284879      81 kubeconfig.go:79] creating kubeconfig file for scheduler.conf\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\nI0822 15:17:27.407183      81 manifests.go:91] [control-plane] getting StaticPodSpecs\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\nI0822 15:17:27.421495      81 manifests.go:116] [control-plane] wrote static Pod manifest for component \"kube-apiserver\" to \"/etc/kubernetes/manifests/kube-apiserver.yaml\"\nI0822 15:17:27.421546      81 manifests.go:91] [control-plane] getting StaticPodSpecs\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\nI0822 15:17:27.423429      81 manifests.go:116] [control-plane] wrote static Pod manifest for component \"kube-controller-manager\" to \"/etc/kubernetes/manifests/kube-controller-manager.yaml\"\nI0822 15:17:27.423476      81 manifests.go:91] [control-plane] getting StaticPodSpecs\nI0822 15:17:27.429643      81 manifests.go:116] [control-plane] wrote static Pod manifest for component \"kube-scheduler\" to \"/etc/kubernetes/manifests/kube-scheduler.yaml\"\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\nI0822 15:17:27.430603      81 local.go:69] [etcd] wrote Static Pod manifest for a local etcd member to \"/etc/kubernetes/manifests/etcd.yaml\"\nI0822 15:17:27.430630      81 waitcontrolplane.go:80] [wait-control-plane] Waiting for the API server to be healthy\nI0822 15:17:27.431645      81 loader.go:375] Config loaded from file:  /etc/kubernetes/admin.conf\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". 
This can take up to 4m0s\nI0822 15:17:27.441608      81 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 4 milliseconds\nI0822 15:17:27.942228      81 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 15:17:28.442361      81 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 15:17:28.942863      81 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 15:17:29.442390      81 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 15:17:29.942344      81 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 15:17:30.442370      81 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 15:17:30.942339      81 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 15:17:31.442804      81 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 15:17:31.942789      81 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 15:17:32.442333      81 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 15:17:37.965040      81 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s 500 Internal Server Error in 5022 milliseconds\nI0822 15:17:38.445183      81 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s 500 Internal Server Error in 3 milliseconds\nI0822 15:17:38.944433      81 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s 500 Internal Server Error in 2 milliseconds\nI0822 15:17:39.449716      81 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s 500 Internal Server Error in 7 milliseconds\nI0822 15:17:39.944493      81 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s 500 Internal Server Error in 2 milliseconds\nI0822 15:17:40.449107      81 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s 500 Internal Server Error in 7 milliseconds\nI0822 15:17:40.944459      81 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s 500 Internal Server Error in 2 milliseconds\nI0822 15:17:41.448462      81 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s 200 OK in 5 milliseconds\nI0822 15:17:41.448837      81 uploadconfig.go:108] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap\n[apiclient] All control plane components are healthy after 14.012810 seconds\n[upload-config] Storing the configuration used in ConfigMap \"kubeadm-config\" in the \"kube-system\" Namespace\nI0822 15:17:41.457265      81 round_trippers.go:443] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 6 milliseconds\nI0822 15:17:41.465316      81 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 201 Created in 6 milliseconds\nI0822 15:17:41.486866      81 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 201 Created in 20 milliseconds\nI0822 15:17:41.487961      81 uploadconfig.go:122] [upload-config] Uploading the kubelet component config to a ConfigMap\n[kubelet] Creating a ConfigMap \"kubelet-config-1.17\" in namespace kube-system with the 
configuration for the kubelets in the cluster\nI0822 15:17:41.497397      81 round_trippers.go:443] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 7 milliseconds\nI0822 15:17:41.514432      81 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 201 Created in 6 milliseconds\nI0822 15:17:41.523443      81 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 201 Created in 8 milliseconds\nI0822 15:17:41.523656      81 uploadconfig.go:127] [upload-config] Preserving the CRISocket information for the control-plane node\nI0822 15:17:41.527704      81 patchnode.go:30] [patchnode] Uploading the CRI Socket information \"/run/containerd/containerd.sock\" to the Node API object \"kind-control-plane\" as an annotation\nI0822 15:17:42.032005      81 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane 200 OK in 3 milliseconds\nI0822 15:17:42.042744      81 round_trippers.go:443] PATCH https://172.17.0.2:6443/api/v1/nodes/kind-control-plane 200 OK in 5 milliseconds\n[upload-certs] Skipping phase. Please see --upload-certs\n[mark-control-plane] Marking the node kind-control-plane as control-plane by adding the label \"node-role.kubernetes.io/master=''\"\n[mark-control-plane] Marking the node kind-control-plane as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]\nI0822 15:17:42.546261      81 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane 200 OK in 2 milliseconds\nI0822 15:17:42.566385      81 round_trippers.go:443] PATCH https://172.17.0.2:6443/api/v1/nodes/kind-control-plane 200 OK in 17 milliseconds\n[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles\nI0822 15:17:42.570978      81 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/secrets/bootstrap-token-abcdef 404 Not Found in 4 milliseconds\nI0822 15:17:42.576829      81 round_trippers.go:443] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/secrets 201 Created in 5 milliseconds\n[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials\nI0822 15:17:42.581660      81 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 3 milliseconds\n[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token\nI0822 15:17:42.585910      81 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 3 milliseconds\n[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster\nI0822 15:17:42.589956      81 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 3 milliseconds\nI0822 15:17:42.590475      81 clusterinfo.go:45] [bootstrap-token] loading admin kubeconfig\n[bootstrap-token] Creating the \"cluster-info\" ConfigMap in the \"kube-public\" namespace\nI0822 15:17:42.591419      81 loader.go:375] Config loaded from file:  /etc/kubernetes/admin.conf\nI0822 15:17:42.591443      81 clusterinfo.go:53] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfig\nI0822 15:17:42.591888      81 clusterinfo.go:65] 
[bootstrap-token] creating/updating ConfigMap in kube-public namespace\nI0822 15:17:42.600535      81 round_trippers.go:443] POST https://172.17.0.2:6443/api/v1/namespaces/kube-public/configmaps 201 Created in 8 milliseconds\nI0822 15:17:42.601167      81 clusterinfo.go:79] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace\nI0822 15:17:42.605370      81 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles 201 Created in 3 milliseconds\nI0822 15:17:42.610392      81 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings 201 Created in 4 milliseconds\nI0822 15:17:42.615067      81 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kube-dns 404 Not Found in 3 milliseconds\nI0822 15:17:42.619409      81 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/coredns 404 Not Found in 3 milliseconds\nI0822 15:17:42.626301      81 round_trippers.go:443] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 6 milliseconds\nI0822 15:17:42.639891      81 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/clusterroles 201 Created in 12 milliseconds\nI0822 15:17:42.648776      81 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 8 milliseconds\nI0822 15:17:42.655259      81 round_trippers.go:443] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/serviceaccounts 201 Created in 5 milliseconds\nI0822 15:17:42.698204      81 round_trippers.go:443] POST https://172.17.0.2:6443/apis/apps/v1/namespaces/kube-system/deployments 201 Created in 29 milliseconds\n[addons] Applied essential addon: CoreDNS\nI0822 15:17:42.748375      81 round_trippers.go:443] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/services 201 Created in 47 milliseconds\nI0822 15:17:42.771068      81 round_trippers.go:443] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/serviceaccounts 201 Created in 22 milliseconds\nI0822 15:17:42.945000      81 request.go:538] Throttling request took 171.188887ms, request: POST:https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps\nI0822 15:17:42.949653      81 round_trippers.go:443] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 4 milliseconds\nI0822 15:17:42.971458      81 round_trippers.go:443] POST https://172.17.0.2:6443/apis/apps/v1/namespaces/kube-system/daemonsets 201 Created in 13 milliseconds\nI0822 15:17:42.974795      81 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 2 milliseconds\nI0822 15:17:42.978488      81 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 201 Created in 3 milliseconds\nI0822 15:17:42.982497      81 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 201 Created in 3 milliseconds\nI0822 15:17:42.983551      81 loader.go:375] Config loaded from file:  /etc/kubernetes/admin.conf\nI0822 15:17:42.984630      81 loader.go:375] Config loaded from file:  /etc/kubernetes/admin.conf\n[addons] Applied essential addon: kube-proxy\n\nYour Kubernetes control-plane has initialized successfully!\n\nTo start using your 
cluster, you need to run the following as a regular user:\n\n  mkdir -p $HOME/.kube\n  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\n  sudo chown $(id -u):$(id -g) $HOME/.kube/config\n\nYou should now deploy a pod network to the cluster.\nRun \"kubectl apply -f [podnetwork].yaml\" with one of the options listed at:\n  https://kubernetes.io/docs/concepts/cluster-administration/addons/\n\nYou can now join any number of control-plane nodes by copying certificate authorities \nand service account keys on each node and then running the following as root:\n\n  kubeadm join 172.17.0.2:6443 --token <value withheld> \\\n    --discovery-token-ca-cert-hash sha256:39cbe7446fad9b9db2fbfc34a9ead701b9b50b33ab26f7a55ffdd3b8f96ad6f3 \\\n    --control-plane \t  \n\nThen you can join any number of worker nodes by running the following on each as root:\n\nkubeadm join 172.17.0.2:6443 --token <value withheld> \\\n    --discovery-token-ca-cert-hash sha256:39cbe7446fad9b9db2fbfc34a9ead701b9b50b33ab26f7a55ffdd3b8f96ad6f3 "
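The init above is an ordinary kubeadm invocation inside the control-plane container; kubeadm then polls /healthz until the API server answers 200 OK (the 500s during the first seconds are expected while components come up). A sketch of reproducing both by hand, assuming curl is available in the node image:

  # Re-run init the way kind does, then probe the endpoint kubeadm polls.
  docker exec --privileged kind-control-plane \
    kubeadm init --ignore-preflight-errors=all \
    --config=/kind/kubeadm.conf --skip-token-print --v=6
  # 172.17.0.2:6443 is the controlPlaneEndpoint from this log.
  docker exec kind-control-plane curl -sk https://172.17.0.2:6443/healthz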
I0822 15:17:43.102] time="15:17:43" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{(index (index .NetworkSettings.Ports \"6443/tcp\") 0).HostPort}} kind-control-plane]"
I0822 15:17:43.198] time="15:17:43" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-control-plane cat /etc/kubernetes/admin.conf]"
I0822 15:17:43.567]  ✓ Starting control-plane 🕹ī¸
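The two commands above recover the host-side port that Docker mapped to the API server's 6443/tcp and the admin kubeconfig; per the comment in the generated config, kind then rewrites the kubeconfig to point at localhost. A sketch under those assumptions (the sed rewrite is illustrative, not kind's exact code):

  # Host port that Docker forwarded to the API server's 6443/tcp.
  HOST_PORT=$(docker inspect -f \
    '{{(index (index .NetworkSettings.Ports "6443/tcp") 0).HostPort}}' \
    kind-control-plane)
  # Copy out the admin kubeconfig and point it at the forwarded port.
  docker exec --privileged kind-control-plane \
    cat /etc/kubernetes/admin.conf > kubeconfig
  sed -i "s#https://.*:6443#https://127.0.0.1:${HOST_PORT}#" kubeconfig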
I0822 15:17:43.568]  â€ĸ Installing CNI 🔌  ...
I0822 15:17:43.568] time="15:17:43" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-control-plane cat /kind/manifests/default-cni.yaml]"
I0822 15:17:44.024] time="15:17:44" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -i kind-control-plane kubectl create --kubeconfig=/etc/kubernetes/admin.conf -f -]"
I0822 15:17:45.606]  ✓ Installing CNI 🔌
I0822 15:17:45.606]  â€ĸ Installing StorageClass 💾  ...
I0822 15:17:45.606] time="15:17:45" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -i kind-control-plane kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f -]"
I0822 15:17:46.317]  ✓ Installing StorageClass 💾
I0822 15:17:46.319]  â€ĸ Joining worker nodes 🚜  ...
I0822 15:17:46.320] time="15:17:46" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-worker kubeadm join --config /kind/kubeadm.conf --ignore-preflight-errors=all --v=6]"
I0822 15:17:46.321] time="15:17:46" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-worker2 kubeadm join --config /kind/kubeadm.conf --ignore-preflight-errors=all --v=6]"
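Both joins below retry discovery until the control plane publishes the bootstrap token (hence the repeated 'token id "abcdef" is invalid' lines), then bootstrap the kubelet and poll until the Node object appears. A sketch of one join, mirroring the command above:

  # Join a worker using the config kind copied to the node earlier.
  docker exec --privileged kind-worker \
    kubeadm join --config /kind/kubeadm.conf \
    --ignore-preflight-errors=all --v=6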
I0822 15:18:16.283] time="15:18:16" level=debug msg="I0822 15:17:46.731035     369 join.go:363] [preflight] found NodeName empty; using OS hostname as NodeName\nI0822 15:17:46.731084     369 joinconfiguration.go:75] loading configuration from \"/kind/kubeadm.conf\"\nI0822 15:17:46.735920     369 preflight.go:90] [preflight] Running general checks\n[preflight] Running pre-flight checks\nI0822 15:17:46.736027     369 checks.go:251] validating the existence and emptiness of directory /etc/kubernetes/manifests\nI0822 15:17:46.736050     369 checks.go:288] validating the existence of file /etc/kubernetes/kubelet.conf\nI0822 15:17:46.736059     369 checks.go:288] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf\nI0822 15:17:46.736069     369 checks.go:104] validating the container runtime\nI0822 15:17:46.749680     369 checks.go:378] validating the presence of executable crictl\nI0822 15:17:46.749744     369 checks.go:337] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables\n\t[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist\nI0822 15:17:46.749831     369 checks.go:337] validating the contents of file /proc/sys/net/ipv4/ip_forward\nI0822 15:17:46.749886     369 checks.go:648] validating whether swap is enabled or not\nI0822 15:17:46.749928     369 checks.go:378] validating the presence of executable ip\nI0822 15:17:46.749996     369 checks.go:378] validating the presence of executable iptables\nI0822 15:17:46.750034     369 checks.go:378] validating the presence of executable mount\nI0822 15:17:46.750059     369 checks.go:378] validating the presence of executable nsenter\nI0822 15:17:46.750101     369 checks.go:378] validating the presence of executable ebtables\nI0822 15:17:46.750166     369 checks.go:378] validating the presence of executable ethtool\nI0822 15:17:46.750197     369 checks.go:378] validating the presence of executable socat\nI0822 15:17:46.750247     369 checks.go:378] validating the presence of executable tc\nI0822 15:17:46.750274     369 checks.go:378] validating the presence of executable touch\nI0822 15:17:46.750308     369 checks.go:519] running all checks\nI0822 15:17:46.759955     369 checks.go:408] checking whether the given node name is reachable using net.LookupHost\nI0822 15:17:46.760417     369 checks.go:617] validating kubelet version\nI0822 15:17:46.883472     369 checks.go:130] validating if the service is enabled and active\nI0822 15:17:46.901325     369 checks.go:203] validating availability of port 10250\nI0822 15:17:46.901564     369 checks.go:288] validating the existence of file /etc/kubernetes/pki/ca.crt\nI0822 15:17:46.901580     369 checks.go:434] validating if the connectivity type is via proxy or direct\nI0822 15:17:46.901622     369 join.go:433] [preflight] Discovering cluster-info\nI0822 15:17:46.901725     369 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.2:6443\"\nI0822 15:17:46.902365     369 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.2:6443\"\nI0822 15:17:46.911704     369 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 9 milliseconds\nI0822 15:17:46.912575     369 token.go:202] [discovery] Failed to connect to API Server \"172.17.0.2:6443\": token id \"abcdef\" is invalid for this cluster or it has expired. 
Use \"kubeadm token create\" on the control-plane node to create a new valid token\nI0822 15:17:51.912774     369 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.2:6443\"\nI0822 15:17:51.914721     369 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.2:6443\"\nI0822 15:17:51.921577     369 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 6 milliseconds\nI0822 15:17:51.922041     369 token.go:202] [discovery] Failed to connect to API Server \"172.17.0.2:6443\": token id \"abcdef\" is invalid for this cluster or it has expired. Use \"kubeadm token create\" on the control-plane node to create a new valid token\nI0822 15:17:56.922223     369 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.2:6443\"\nI0822 15:17:56.922996     369 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.2:6443\"\nI0822 15:17:56.925866     369 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 2 milliseconds\nI0822 15:17:56.926104     369 token.go:202] [discovery] Failed to connect to API Server \"172.17.0.2:6443\": token id \"abcdef\" is invalid for this cluster or it has expired. Use \"kubeadm token create\" on the control-plane node to create a new valid token\nI0822 15:18:01.926332     369 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.2:6443\"\nI0822 15:18:01.927045     369 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.2:6443\"\nI0822 15:18:01.931909     369 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 4 milliseconds\nI0822 15:18:01.933944     369 token.go:109] [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server \"172.17.0.2:6443\"\nI0822 15:18:01.933969     369 token.go:205] [discovery] Successfully established connection with API Server \"172.17.0.2:6443\"\nI0822 15:18:01.934008     369 discovery.go:51] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process\nI0822 15:18:01.934022     369 join.go:447] [preflight] Fetching init configuration\nI0822 15:18:01.934030     369 join.go:485] [preflight] Retrieving KubeConfig objects\n[preflight] Reading configuration from the cluster...\n[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'\nI0822 15:18:01.951559     369 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 200 OK in 16 milliseconds\nI0822 15:18:01.957114     369 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kube-proxy 200 OK in 4 milliseconds\nI0822 15:18:01.961178     369 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.17 200 OK in 2 milliseconds\nI0822 15:18:01.964433     369 interface.go:384] Looking for default routes with IPv4 addresses\nI0822 15:18:01.964452     369 interface.go:389] Default route transits interface \"eth0\"\nI0822 15:18:01.964549     369 interface.go:196] Interface eth0 is up\nI0822 15:18:01.964591     369 interface.go:244] Interface \"eth0\" has 1 addresses :[172.17.0.3/16].\nI0822 15:18:01.964610     369 interface.go:211] Checking addr  
172.17.0.3/16.\nI0822 15:18:01.964620     369 interface.go:218] IP found 172.17.0.3\nI0822 15:18:01.964629     369 interface.go:250] Found valid IPv4 address 172.17.0.3 for interface \"eth0\".\nI0822 15:18:01.964636     369 interface.go:395] Found active IP 172.17.0.3 \nI0822 15:18:01.964719     369 preflight.go:101] [preflight] Running configuration dependant checks\nI0822 15:18:01.964734     369 controlplaneprepare.go:211] [download-certs] Skipping certs download\nI0822 15:18:01.964746     369 kubelet.go:107] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf\nI0822 15:18:01.966346     369 kubelet.go:115] [kubelet-start] writing CA certificate at /etc/kubernetes/pki/ca.crt\nI0822 15:18:01.966988     369 loader.go:375] Config loaded from file:  /etc/kubernetes/bootstrap-kubelet.conf\nI0822 15:18:01.967700     369 kubelet.go:133] [kubelet-start] Stopping the kubelet\n[kubelet-start] Downloading configuration for the kubelet from the \"kubelet-config-1.17\" ConfigMap in the kube-system namespace\nI0822 15:18:01.993363     369 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.17 200 OK in 7 milliseconds\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\nI0822 15:18:02.007792     369 kubelet.go:150] [kubelet-start] Starting the kubelet\n[kubelet-start] Activating the kubelet service\n[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...\nI0822 15:18:03.157322     369 loader.go:375] Config loaded from file:  /etc/kubernetes/kubelet.conf\nI0822 15:18:03.657052     369 loader.go:375] Config loaded from file:  /etc/kubernetes/kubelet.conf\nI0822 15:18:03.676422     369 loader.go:375] Config loaded from file:  /etc/kubernetes/kubelet.conf\nI0822 15:18:03.678571     369 kubelet.go:168] [kubelet-start] preserving the crisocket information for the node\nI0822 15:18:03.678609     369 patchnode.go:30] [patchnode] Uploading the CRI Socket information \"/run/containerd/containerd.sock\" to the Node API object \"kind-worker2\" as an annotation\nI0822 15:18:04.189469     369 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 10 milliseconds\nI0822 15:18:04.681878     369 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0822 15:18:05.182838     369 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds\nI0822 15:18:05.682377     369 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds\nI0822 15:18:06.181839     369 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0822 15:18:06.682864     369 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds\nI0822 15:18:07.192952     369 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 13 milliseconds\nI0822 15:18:07.682176     369 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds\nI0822 15:18:08.184430     369 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 5 milliseconds\nI0822 15:18:08.681783     369 round_trippers.go:443] GET 
https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0822 15:18:09.181749     369 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0822 15:18:09.682440     369 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds\nI0822 15:18:10.183128     369 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 4 milliseconds\nI0822 15:18:10.682107     369 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds\nI0822 15:18:11.182533     369 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds\nI0822 15:18:11.746071     369 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 67 milliseconds\nI0822 15:18:12.181903     369 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0822 15:18:12.683711     369 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 4 milliseconds\nI0822 15:18:13.185808     369 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds\nI0822 15:18:13.682268     369 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds\nI0822 15:18:14.181931     369 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0822 15:18:14.681905     369 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0822 15:18:15.182118     369 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds\nI0822 15:18:15.683529     369 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 4 milliseconds\nI0822 15:18:16.182441     369 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 200 OK in 3 milliseconds\n\nThis node has joined the cluster:\n* Certificate signing request was sent to apiserver and a response was received.\n* The Kubelet was informed of the new secure connection details.\n\nRun 'kubectl get nodes' on the control-plane to see this node join the cluster.\n\nI0822 15:18:16.189399     369 round_trippers.go:443] PATCH https://172.17.0.2:6443/api/v1/nodes/kind-worker2 200 OK in 4 milliseconds"
I0822 15:18:16.295] time="15:18:16" level=debug msg="I0822 15:17:46.713021     365 join.go:363] [preflight] found NodeName empty; using OS hostname as NodeName\nI0822 15:17:46.713080     365 joinconfiguration.go:75] loading configuration from \"/kind/kubeadm.conf\"\nI0822 15:17:46.715476     365 preflight.go:90] [preflight] Running general checks\n[preflight] Running pre-flight checks\nI0822 15:17:46.715565     365 checks.go:251] validating the existence and emptiness of directory /etc/kubernetes/manifests\nI0822 15:17:46.715580     365 checks.go:288] validating the existence of file /etc/kubernetes/kubelet.conf\nI0822 15:17:46.715589     365 checks.go:288] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf\nI0822 15:17:46.715597     365 checks.go:104] validating the container runtime\nI0822 15:17:46.730408     365 checks.go:378] validating the presence of executable crictl\nI0822 15:17:46.730477     365 checks.go:337] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables\n\t[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist\nI0822 15:17:46.730544     365 checks.go:337] validating the contents of file /proc/sys/net/ipv4/ip_forward\nI0822 15:17:46.730611     365 checks.go:648] validating whether swap is enabled or not\nI0822 15:17:46.730673     365 checks.go:378] validating the presence of executable ip\nI0822 15:17:46.730746     365 checks.go:378] validating the presence of executable iptables\nI0822 15:17:46.730808     365 checks.go:378] validating the presence of executable mount\nI0822 15:17:46.730827     365 checks.go:378] validating the presence of executable nsenter\nI0822 15:17:46.730860     365 checks.go:378] validating the presence of executable ebtables\nI0822 15:17:46.730919     365 checks.go:378] validating the presence of executable ethtool\nI0822 15:17:46.730951     365 checks.go:378] validating the presence of executable socat\nI0822 15:17:46.731003     365 checks.go:378] validating the presence of executable tc\nI0822 15:17:46.731029     365 checks.go:378] validating the presence of executable touch\nI0822 15:17:46.731077     365 checks.go:519] running all checks\nI0822 15:17:46.743938     365 checks.go:408] checking whether the given node name is reachable using net.LookupHost\nI0822 15:17:46.744361     365 checks.go:617] validating kubelet version\nI0822 15:17:46.890412     365 checks.go:130] validating if the service is enabled and active\nI0822 15:17:46.905851     365 checks.go:203] validating availability of port 10250\nI0822 15:17:46.906108     365 checks.go:288] validating the existence of file /etc/kubernetes/pki/ca.crt\nI0822 15:17:46.906127     365 checks.go:434] validating if the connectivity type is via proxy or direct\nI0822 15:17:46.906172     365 join.go:433] [preflight] Discovering cluster-info\nI0822 15:17:46.906303     365 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.2:6443\"\nI0822 15:17:46.907073     365 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.2:6443\"\nI0822 15:17:46.916816     365 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 9 milliseconds\nI0822 15:17:46.917775     365 token.go:202] [discovery] Failed to connect to API Server \"172.17.0.2:6443\": token id \"abcdef\" is invalid for this cluster or it has expired. 
Use \"kubeadm token create\" on the control-plane node to create a new valid token\nI0822 15:17:51.918012     365 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.2:6443\"\nI0822 15:17:51.918765     365 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.2:6443\"\nI0822 15:17:51.924062     365 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 5 milliseconds\nI0822 15:17:51.924347     365 token.go:202] [discovery] Failed to connect to API Server \"172.17.0.2:6443\": token id \"abcdef\" is invalid for this cluster or it has expired. Use \"kubeadm token create\" on the control-plane node to create a new valid token\nI0822 15:17:56.924557     365 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.2:6443\"\nI0822 15:17:56.925166     365 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.2:6443\"\nI0822 15:17:56.927733     365 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 2 milliseconds\nI0822 15:17:56.927948     365 token.go:202] [discovery] Failed to connect to API Server \"172.17.0.2:6443\": token id \"abcdef\" is invalid for this cluster or it has expired. Use \"kubeadm token create\" on the control-plane node to create a new valid token\nI0822 15:18:01.928124     365 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.2:6443\"\nI0822 15:18:01.928850     365 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.2:6443\"\nI0822 15:18:01.933795     365 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 4 milliseconds\nI0822 15:18:01.935747     365 token.go:109] [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server \"172.17.0.2:6443\"\nI0822 15:18:01.935769     365 token.go:205] [discovery] Successfully established connection with API Server \"172.17.0.2:6443\"\nI0822 15:18:01.935797     365 discovery.go:51] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process\nI0822 15:18:01.935821     365 join.go:447] [preflight] Fetching init configuration\nI0822 15:18:01.935827     365 join.go:485] [preflight] Retrieving KubeConfig objects\n[preflight] Reading configuration from the cluster...\n[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'\nI0822 15:18:01.947609     365 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 200 OK in 10 milliseconds\nI0822 15:18:01.954343     365 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kube-proxy 200 OK in 4 milliseconds\nI0822 15:18:01.958290     365 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.17 200 OK in 2 milliseconds\nI0822 15:18:01.960624     365 interface.go:384] Looking for default routes with IPv4 addresses\nI0822 15:18:01.960657     365 interface.go:389] Default route transits interface \"eth0\"\nI0822 15:18:01.960950     365 interface.go:196] Interface eth0 is up\nI0822 15:18:01.961038     365 interface.go:244] Interface \"eth0\" has 1 addresses :[172.17.0.4/16].\nI0822 15:18:01.961070     365 interface.go:211] Checking addr  
172.17.0.4/16.\nI0822 15:18:01.961081     365 interface.go:218] IP found 172.17.0.4\nI0822 15:18:01.961096     365 interface.go:250] Found valid IPv4 address 172.17.0.4 for interface \"eth0\".\nI0822 15:18:01.961104     365 interface.go:395] Found active IP 172.17.0.4 \nI0822 15:18:01.961198     365 preflight.go:101] [preflight] Running configuration dependant checks\nI0822 15:18:01.961221     365 controlplaneprepare.go:211] [download-certs] Skipping certs download\nI0822 15:18:01.961254     365 kubelet.go:107] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf\nI0822 15:18:01.966445     365 kubelet.go:115] [kubelet-start] writing CA certificate at /etc/kubernetes/pki/ca.crt\nI0822 15:18:01.972888     365 loader.go:375] Config loaded from file:  /etc/kubernetes/bootstrap-kubelet.conf\nI0822 15:18:01.984889     365 kubelet.go:133] [kubelet-start] Stopping the kubelet\n[kubelet-start] Downloading configuration for the kubelet from the \"kubelet-config-1.17\" ConfigMap in the kube-system namespace\nI0822 15:18:02.023533     365 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.17 200 OK in 3 milliseconds\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\nI0822 15:18:02.038473     365 kubelet.go:150] [kubelet-start] Starting the kubelet\n[kubelet-start] Activating the kubelet service\n[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...\nI0822 15:18:03.164170     365 loader.go:375] Config loaded from file:  /etc/kubernetes/kubelet.conf\nI0822 15:18:03.185370     365 loader.go:375] Config loaded from file:  /etc/kubernetes/kubelet.conf\nI0822 15:18:03.187538     365 kubelet.go:168] [kubelet-start] preserving the crisocket information for the node\nI0822 15:18:03.187579     365 patchnode.go:30] [patchnode] Uploading the CRI Socket information \"/run/containerd/containerd.sock\" to the Node API object \"kind-worker\" as an annotation\nI0822 15:18:03.698802     365 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 10 milliseconds\nI0822 15:18:04.189959     365 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0822 15:18:04.690706     365 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0822 15:18:05.191214     365 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds\nI0822 15:18:05.690883     365 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0822 15:18:06.190928     365 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0822 15:18:06.691162     365 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds\nI0822 15:18:07.193366     365 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 5 milliseconds\nI0822 15:18:07.690733     365 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0822 15:18:08.190819     365 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0822 15:18:08.691185     365 
round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds\nI0822 15:18:09.190716     365 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0822 15:18:09.690646     365 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0822 15:18:10.190637     365 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0822 15:18:10.691316     365 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds\nI0822 15:18:11.191106     365 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds\nI0822 15:18:11.745388     365 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 55 milliseconds\nI0822 15:18:12.191885     365 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds\nI0822 15:18:12.690937     365 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds\nI0822 15:18:13.191006     365 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds\nI0822 15:18:13.691296     365 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds\nI0822 15:18:14.190796     365 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0822 15:18:14.691065     365 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds\nI0822 15:18:15.190953     365 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds\nI0822 15:18:15.690080     365 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0822 15:18:16.191520     365 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 200 OK in 3 milliseconds\nI0822 15:18:16.199863     365 round_trippers.go:443] PATCH https://172.17.0.2:6443/api/v1/nodes/kind-worker 200 OK in 5 milliseconds\n\nThis node has joined the cluster:\n* Certificate signing request was sent to apiserver and a response was received.\n* The Kubelet was informed of the new secure connection details.\n\nRun 'kubectl get nodes' on the control-plane to see this node join the cluster.\n"
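
The join log above contains two fixed-interval retry loops: kubeadm re-requests the kube-public/cluster-info ConfigMap every 5 seconds until the bootstrap token is accepted, then GETs /api/v1/nodes/kind-worker roughly every half second (404 until the kubelet finishes TLS bootstrap and registers) before PATCHing the CRI socket annotation onto the Node. A minimal Python sketch of that retry pattern, where fetch() is a hypothetical stand-in for the HTTP calls kubeadm actually issues:

import time

def retry_until(fetch, interval=5.0, timeout=300.0):
    # fetch() is a hypothetical callable standing in for kubeadm's
    # HTTP requests (GET cluster-info, GET the Node object): it
    # returns a value on success and raises on failure (rejected
    # token, 404 Not Found, ...).
    deadline = time.time() + timeout
    while True:
        try:
            return fetch()
        except Exception as exc:
            if time.time() >= deadline:
                raise
            print("retrying in %.0fs: %s" % (interval, exc))
            time.sleep(interval)

The real implementation is Go (token.go and patchnode.go per the log); the sketch only illustrates the polling cadence visible above.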
I0822 15:18:16.296]  ✓ Joining worker nodes 🚜
I0822 15:18:16.296]  • Waiting ≤ 1m0s for control-plane = Ready ⏳  ...
I0822 15:18:16.296] time="15:18:16" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-control-plane kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes --selector=node-role.kubernetes.io/master -o=jsonpath='{.items..status.conditions[-1:].status}']"
I0822 15:18:16.614]  ✓ Waiting ≤ 1m0s for control-plane = Ready ⏳
I0822 15:18:16.615]  • Ready after 0s 💚
I0822 15:18:16.615] Cluster creation complete. You can now use the cluster with:
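
The readiness probe kind ran at 15:18:16 is a single kubectl call inside the control-plane container, reading the last status condition of nodes labelled node-role.kubernetes.io/master. A sketch of an equivalent check, reusing the container name and kubeconfig path from the command logged above:

import subprocess

def control_plane_ready():
    # Same probe kind logged above: read the last status condition
    # of the master node inside the kind-control-plane container.
    out = subprocess.check_output([
        "docker", "exec", "--privileged", "kind-control-plane",
        "kubectl", "--kubeconfig=/etc/kubernetes/admin.conf",
        "get", "nodes", "--selector=node-role.kubernetes.io/master",
        "-o=jsonpath={.items..status.conditions[-1:].status}",
    ])
    return out.strip() == b"True"

kind repeats this probe until it reports True; in this run it passed on the first pass ("Ready after 0s").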
... skipping 775 lines ...
I0822 16:16:17.792] [16:16:17] Pod status is: Running
I0822 16:16:22.896] [16:16:22] Pod status is: Running
I0822 16:16:28.007] [16:16:28] Pod status is: Running
I0822 16:16:33.111] [16:16:33] Pod status is: Running
I0822 16:16:38.216] [16:16:38] Pod status is: Running
I0822 16:16:43.326] [16:16:43] Pod status is: Running
W0822 16:16:48.431] Error from server (NotFound): pods "e2e-conformance-test" not found
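
The "[HH:MM:SS] Pod status is: ..." lines come from a polling loop in kind-conformance-image-e2e.sh, and the NotFound error is the pass after the e2e-conformance-test pod went away. The script itself is bash; a hedged Python rendering of the same loop, with the pod name and ~5s cadence taken from the log:

import subprocess
import time

def wait_for_pod(name="e2e-conformance-test", interval=5):
    while True:
        proc = subprocess.Popen(
            ["kubectl", "get", "pod", name,
             "-o=jsonpath={.status.phase}"],
            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, _ = proc.communicate()
        if proc.returncode != 0:
            # kubectl exits non-zero once the pod is NotFound,
            # which is what ends the loop in the log above.
            break
        print("[%s] Pod status is: %s"
              % (time.strftime("%H:%M:%S"), out.decode().strip()))
        time.sleep(interval)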
W0822 16:16:48.437] + cleanup
W0822 16:16:48.437] + kind export logs /workspace/_artifacts/logs
I0822 16:16:50.773] Exported logs to: /workspace/_artifacts/logs
W0822 16:16:50.874] + [[ true = true ]]
W0822 16:16:50.874] + kind delete cluster
I0822 16:16:50.974] Deleting cluster "kind" ...
... skipping 8 lines ...
W0822 16:16:59.015]     check(*cmd)
W0822 16:16:59.015]   File "/workspace/./test-infra/jenkins/../scenarios/execute.py", line 30, in check
W0822 16:16:59.015]     subprocess.check_call(cmd)
W0822 16:16:59.016]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0822 16:16:59.016]     raise CalledProcessError(retcode, cmd)
W0822 16:16:59.016] subprocess.CalledProcessError: Command '('bash', '-c', 'cd ./../../k8s.io/kubernetes && source ./../test-infra/experiment/kind-conformance-image-e2e.sh')' returned non-zero exit status 1
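
The traceback shows the failure path in scenarios/execute.py: check() is a thin wrapper that forwards to subprocess.check_call, so the non-zero exit of the bash -c command becomes a CalledProcessError that fails the job. A minimal sketch consistent with the frames above (not the complete file):

import subprocess

def check(*cmd):
    # scenarios/execute.py line 30, per the traceback: any non-zero
    # exit status raises subprocess.CalledProcessError.
    subprocess.check_call(cmd)

# The invocation that failed in this run, per the log:
# check('bash', '-c',
#       'cd ./../../k8s.io/kubernetes && '
#       'source ./../test-infra/experiment/kind-conformance-image-e2e.sh')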
E0822 16:16:59.023] Command failed
I0822 16:16:59.023] process 667 exited with code 1 after 64.7m
E0822 16:16:59.023] FAIL: pull-kubernetes-conformance-image-test
I0822 16:16:59.024] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0822 16:16:59.702] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0822 16:16:59.762] process 154331 exited with code 0 after 0.0m
I0822 16:16:59.763] Call:  gcloud config get-value account
I0822 16:17:00.072] process 154343 exited with code 0 after 0.0m
I0822 16:17:00.073] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0822 16:17:00.073] Upload result and artifacts...
I0822 16:17:00.073] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/76443/pull-kubernetes-conformance-image-test/1164555458573242368
I0822 16:17:00.073] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/76443/pull-kubernetes-conformance-image-test/1164555458573242368/artifacts
W0822 16:17:01.416] CommandException: One or more URLs matched no objects.
E0822 16:17:01.555] Command failed
I0822 16:17:01.555] process 154355 exited with code 1 after 0.0m
W0822 16:17:01.555] Remote dir gs://kubernetes-jenkins/pr-logs/pull/76443/pull-kubernetes-conformance-image-test/1164555458573242368/artifacts not exist yet
I0822 16:17:01.555] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/76443/pull-kubernetes-conformance-image-test/1164555458573242368/artifacts
I0822 16:17:03.972] process 154497 exited with code 0 after 0.0m
W0822 16:17:03.972] metadata path /workspace/_artifacts/metadata.json does not exist
W0822 16:17:03.972] metadata not found or invalid, init with empty metadata
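
The upload sequence in the last lines is: probe the artifacts prefix with gsutil ls (the CommandException above just means nothing is there yet), recursively copy /workspace/_artifacts with on-the-fly compression for log/txt/xml objects, then fall back to empty metadata because metadata.json was never written. A sketch of that flow under those assumptions (the gsutil flags are copied from the logged command; the function name is illustrative):

import json
import os
import subprocess

def upload_artifacts(local, remote):
    # Probe the prefix first; a failed listing just means "not
    # created yet" (the CommandException above), not a fatal error.
    if subprocess.call(["gsutil", "ls", remote]) != 0:
        print("Remote dir %s does not exist yet" % remote)
    # Flags copied from the logged command: parallel (-m), quiet
    # (-q), magic content types, recursive compressed copy (-z) of
    # log/txt/xml files, continuing past per-file errors (-c).
    subprocess.check_call([
        "gsutil", "-m", "-q", "-o", "GSUtil:use_magicfile=True",
        "cp", "-r", "-c", "-z", "log,txt,xml", local, remote])
    meta = os.path.join(local, "metadata.json")
    if not os.path.isfile(meta):
        return {}  # init with empty metadata, as the log notes
    with open(meta) as fp:
        return json.load(fp)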
... skipping 23 lines ...