PR mgdevstack: Promote e2e "verifying service's sessionAffinity for ClusterIP and NodePort services" to Conformance
Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2019-08-14 10:27
Elapsed: 26m14s
Revision:
Builder: gke-prow-ssd-pool-1a225945-2x74
pod: 035aba35-be7e-11e9-bc02-ae225b01b9ea
infra-commit: 381773791
repo: k8s.io/test-infra
repo-commit: 3817737918dd4df33a74dee105079196c49a4722
repos: {u'k8s.io/kubernetes': u'master:2ad2795136be1edd0d1920e7b3e8c25e1e66f6a4,76443:a43439255e522af448976464f48150cb8344f24a', u'k8s.io/test-infra': u'master'}
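The repos field above pins the checkout: k8s.io/kubernetes at master commit 2ad2795... with PR #76443 applied at a434392.... A rough local reproduction (illustrative only; pull/76443/head is GitHub's standard ref for a PR head, but Prow's bootstrap may merge differently):

    # Check out the base commit, then merge the PR head named in the repos field.
    git clone https://github.com/kubernetes/kubernetes && cd kubernetes
    git checkout 2ad2795136be1edd0d1920e7b3e8c25e1e66f6a4      # base master commit
    git fetch origin pull/76443/head                           # PR 76443's head ref
    git merge --no-ff a43439255e522af448976464f48150cb8344f24a # PR head commit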

No Test Failures!


Error lines from build-log.txt

... skipping 655 lines ...
I0814 10:32:33.065] time="10:32:33" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-control-plane cat /kind/version]"
I0814 10:32:33.379] time="10:32:33" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kind-control-plane]"
I0814 10:32:33.445] time="10:32:33" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kind-worker2]"
I0814 10:32:33.445] time="10:32:33" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kind-worker]"
I0814 10:32:33.445] time="10:32:33" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kind-control-plane]"
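The three inspect calls above use a Go template to read each node container's addresses; kind parses the "<IPv4>,<IPv6>" output to build the per-node kubeadm configs that follow. Run standalone it looks like:

    # Same Go template as above: prints "<IPv4>,<IPv6>" per attached network.
    # For this job's control plane it would print "172.17.0.4," (no IPv6 assigned).
    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}' kind-control-plane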
I0814 10:32:33.523] time="10:32:33" level=debug msg="Configuration Input data: {kind v1.17.0-alpha.0.92+cfffc111e0dbd6 172.17.0.4:6443 6443 127.0.0.1 true 172.17.0.4 abcdef.0123456789abcdef 10.244.0.0/16 10.96.0.0/12 false {}}"
I0814 10:32:33.530] time="10:32:33" level=debug msg="Configuration generated:\n # config generated by kind\napiVersion: kubeadm.k8s.io/v1beta2\nkind: ClusterConfiguration\nmetadata:\n  name: config\nkubernetesVersion: v1.17.0-alpha.0.92+cfffc111e0dbd6\nclusterName: \"kind\"\ncontrolPlaneEndpoint: \"172.17.0.4:6443\"\n# on docker for mac we have to expose the api server via port forward,\n# so we need to ensure the cert is valid for localhost so we can talk\n# to the cluster after rewriting the kubeconfig to point to localhost\napiServer:\n  certSANs: [localhost, \"127.0.0.1\"]\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\n    # configure ipv6 default addresses for IPv6 clusters\n    \nscheduler:\n  extraArgs:\n    # configure ipv6 default addresses for IPv6 clusters\n    \nnetworking:\n  podSubnet: \"10.244.0.0/16\"\n  serviceSubnet: \"10.96.0.0/12\"\n---\napiVersion: kubeadm.k8s.io/v1beta2\nkind: InitConfiguration\nmetadata:\n  name: config\n# we use a well know token for TLS bootstrap\nbootstrapTokens:\n- token: \"abcdef.0123456789abcdef\"\n# we use a well know port for making the API server discoverable inside docker network. \n# from the host machine such port will be accessible via a random local port instead.\nlocalAPIEndpoint:\n  advertiseAddress: \"172.17.0.4\"\n  bindPort: 6443\nnodeRegistration:\n  criSocket: \"/run/containerd/containerd.sock\"\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: \"172.17.0.4\"\n---\n# no-op entry that exists solely so it can be patched\napiVersion: kubeadm.k8s.io/v1beta2\nkind: JoinConfiguration\nmetadata:\n  name: config\ncontrolPlane:\n  localAPIEndpoint:\n    advertiseAddress: \"172.17.0.4\"\n    bindPort: 6443\nnodeRegistration:\n  criSocket: \"/run/containerd/containerd.sock\"\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: \"172.17.0.4\"\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: \"172.17.0.4:6443\"\n    token: \"abcdef.0123456789abcdef\"\n    unsafeSkipCAVerification: true\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nmetadata:\n  name: config\n# configure ipv6 addresses in IPv6 mode\n\n# disable disk resource management by default\n# kubelet will see the host disk that the inner container runtime\n# is ultimately backed by and attempt to recover disk space. we don't want that.\nimageGCHighThresholdPercent: 100\nevictionHard:\n  nodefs.available: \"0%\"\n  nodefs.inodesFree: \"0%\"\n  imagefs.available: \"0%\"\n---\n# no-op entry that exists solely so it can be patched\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\nmetadata:\n  name: config\n"
I0814 10:32:33.530] time="10:32:33" level=debug msg="Configuration Input data: {kind v1.17.0-alpha.0.92+cfffc111e0dbd6 172.17.0.4:6443 6443 127.0.0.1 false 172.17.0.2 abcdef.0123456789abcdef 10.244.0.0/16 10.96.0.0/12 false {}}"
I0814 10:32:33.533] time="10:32:33" level=debug msg="Configuration generated:\n # config generated by kind\napiVersion: kubeadm.k8s.io/v1beta2\nkind: ClusterConfiguration\nmetadata:\n  name: config\nkubernetesVersion: v1.17.0-alpha.0.92+cfffc111e0dbd6\nclusterName: \"kind\"\ncontrolPlaneEndpoint: \"172.17.0.4:6443\"\n# on docker for mac we have to expose the api server via port forward,\n# so we need to ensure the cert is valid for localhost so we can talk\n# to the cluster after rewriting the kubeconfig to point to localhost\napiServer:\n  certSANs: [localhost, \"127.0.0.1\"]\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\n    # configure ipv6 default addresses for IPv6 clusters\n    \nscheduler:\n  extraArgs:\n    # configure ipv6 default addresses for IPv6 clusters\n    \nnetworking:\n  podSubnet: \"10.244.0.0/16\"\n  serviceSubnet: \"10.96.0.0/12\"\n---\napiVersion: kubeadm.k8s.io/v1beta2\nkind: InitConfiguration\nmetadata:\n  name: config\n# we use a well know token for TLS bootstrap\nbootstrapTokens:\n- token: \"abcdef.0123456789abcdef\"\n# we use a well know port for making the API server discoverable inside docker network. \n# from the host machine such port will be accessible via a random local port instead.\nlocalAPIEndpoint:\n  advertiseAddress: \"172.17.0.2\"\n  bindPort: 6443\nnodeRegistration:\n  criSocket: \"/run/containerd/containerd.sock\"\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: \"172.17.0.2\"\n---\n# no-op entry that exists solely so it can be patched\napiVersion: kubeadm.k8s.io/v1beta2\nkind: JoinConfiguration\nmetadata:\n  name: config\n\nnodeRegistration:\n  criSocket: \"/run/containerd/containerd.sock\"\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: \"172.17.0.2\"\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: \"172.17.0.4:6443\"\n    token: \"abcdef.0123456789abcdef\"\n    unsafeSkipCAVerification: true\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nmetadata:\n  name: config\n# configure ipv6 addresses in IPv6 mode\n\n# disable disk resource management by default\n# kubelet will see the host disk that the inner container runtime\n# is ultimately backed by and attempt to recover disk space. we don't want that.\nimageGCHighThresholdPercent: 100\nevictionHard:\n  nodefs.available: \"0%\"\n  nodefs.inodesFree: \"0%\"\n  imagefs.available: \"0%\"\n---\n# no-op entry that exists solely so it can be patched\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\nmetadata:\n  name: config\n"
I0814 10:32:33.533] time="10:32:33" level=debug msg="Configuration Input data: {kind v1.17.0-alpha.0.92+cfffc111e0dbd6 172.17.0.4:6443 6443 127.0.0.1 false 172.17.0.3 abcdef.0123456789abcdef 10.244.0.0/16 10.96.0.0/12 false {}}"
I0814 10:32:33.536] time="10:32:33" level=debug msg="Configuration generated:\n # config generated by kind\napiVersion: kubeadm.k8s.io/v1beta2\nkind: ClusterConfiguration\nmetadata:\n  name: config\nkubernetesVersion: v1.17.0-alpha.0.92+cfffc111e0dbd6\nclusterName: \"kind\"\ncontrolPlaneEndpoint: \"172.17.0.4:6443\"\n# on docker for mac we have to expose the api server via port forward,\n# so we need to ensure the cert is valid for localhost so we can talk\n# to the cluster after rewriting the kubeconfig to point to localhost\napiServer:\n  certSANs: [localhost, \"127.0.0.1\"]\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\n    # configure ipv6 default addresses for IPv6 clusters\n    \nscheduler:\n  extraArgs:\n    # configure ipv6 default addresses for IPv6 clusters\n    \nnetworking:\n  podSubnet: \"10.244.0.0/16\"\n  serviceSubnet: \"10.96.0.0/12\"\n---\napiVersion: kubeadm.k8s.io/v1beta2\nkind: InitConfiguration\nmetadata:\n  name: config\n# we use a well know token for TLS bootstrap\nbootstrapTokens:\n- token: \"abcdef.0123456789abcdef\"\n# we use a well know port for making the API server discoverable inside docker network. \n# from the host machine such port will be accessible via a random local port instead.\nlocalAPIEndpoint:\n  advertiseAddress: \"172.17.0.3\"\n  bindPort: 6443\nnodeRegistration:\n  criSocket: \"/run/containerd/containerd.sock\"\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: \"172.17.0.3\"\n---\n# no-op entry that exists solely so it can be patched\napiVersion: kubeadm.k8s.io/v1beta2\nkind: JoinConfiguration\nmetadata:\n  name: config\n\nnodeRegistration:\n  criSocket: \"/run/containerd/containerd.sock\"\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: \"172.17.0.3\"\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: \"172.17.0.4:6443\"\n    token: \"abcdef.0123456789abcdef\"\n    unsafeSkipCAVerification: true\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nmetadata:\n  name: config\n# configure ipv6 addresses in IPv6 mode\n\n# disable disk resource management by default\n# kubelet will see the host disk that the inner container runtime\n# is ultimately backed by and attempt to recover disk space. we don't want that.\nimageGCHighThresholdPercent: 100\nevictionHard:\n  nodefs.available: \"0%\"\n  nodefs.inodesFree: \"0%\"\n  imagefs.available: \"0%\"\n---\n# no-op entry that exists solely so it can be patched\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\nmetadata:\n  name: config\n"
I0814 10:32:33.554] time="10:32:33" level=debug msg="Using kubeadm config:\napiServer:\n  certSANs:\n  - localhost\n  - 127.0.0.1\napiVersion: kubeadm.k8s.io/v1beta2\nclusterName: kind\ncontrolPlaneEndpoint: 172.17.0.4:6443\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\nkind: ClusterConfiguration\nkubernetesVersion: v1.17.0-alpha.0.92+cfffc111e0dbd6\nnetworking:\n  podSubnet: 10.244.0.0/16\n  serviceSubnet: 10.96.0.0/12\nscheduler:\n  extraArgs: null\n---\napiVersion: kubeadm.k8s.io/v1beta2\nbootstrapTokens:\n- token: abcdef.0123456789abcdef\nkind: InitConfiguration\nlocalAPIEndpoint:\n  advertiseAddress: 172.17.0.4\n  bindPort: 6443\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.4\n---\napiVersion: kubeadm.k8s.io/v1beta2\ncontrolPlane:\n  localAPIEndpoint:\n    advertiseAddress: 172.17.0.4\n    bindPort: 6443\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.4:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.4\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nevictionHard:\n  imagefs.available: 0%\n  nodefs.available: 0%\n  nodefs.inodesFree: 0%\nimageGCHighThresholdPercent: 100\nkind: KubeletConfiguration\n---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
I0814 10:32:33.555] time="10:32:33" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-control-plane mkdir -p /kind]"
I0814 10:32:33.557] time="10:32:33" level=debug msg="Using kubeadm config:\napiServer:\n  certSANs:\n  - localhost\n  - 127.0.0.1\napiVersion: kubeadm.k8s.io/v1beta2\nclusterName: kind\ncontrolPlaneEndpoint: 172.17.0.4:6443\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\nkind: ClusterConfiguration\nkubernetesVersion: v1.17.0-alpha.0.92+cfffc111e0dbd6\nnetworking:\n  podSubnet: 10.244.0.0/16\n  serviceSubnet: 10.96.0.0/12\nscheduler:\n  extraArgs: null\n---\napiVersion: kubeadm.k8s.io/v1beta2\nbootstrapTokens:\n- token: abcdef.0123456789abcdef\nkind: InitConfiguration\nlocalAPIEndpoint:\n  advertiseAddress: 172.17.0.2\n  bindPort: 6443\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.2\n---\napiVersion: kubeadm.k8s.io/v1beta2\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.4:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.2\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nevictionHard:\n  imagefs.available: 0%\n  nodefs.available: 0%\n  nodefs.inodesFree: 0%\nimageGCHighThresholdPercent: 100\nkind: KubeletConfiguration\n---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
I0814 10:32:33.558] time="10:32:33" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-worker2 mkdir -p /kind]"
I0814 10:32:33.559] time="10:32:33" level=debug msg="Using kubeadm config:\napiServer:\n  certSANs:\n  - localhost\n  - 127.0.0.1\napiVersion: kubeadm.k8s.io/v1beta2\nclusterName: kind\ncontrolPlaneEndpoint: 172.17.0.4:6443\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\nkind: ClusterConfiguration\nkubernetesVersion: v1.17.0-alpha.0.92+cfffc111e0dbd6\nnetworking:\n  podSubnet: 10.244.0.0/16\n  serviceSubnet: 10.96.0.0/12\nscheduler:\n  extraArgs: null\n---\napiVersion: kubeadm.k8s.io/v1beta2\nbootstrapTokens:\n- token: abcdef.0123456789abcdef\nkind: InitConfiguration\nlocalAPIEndpoint:\n  advertiseAddress: 172.17.0.3\n  bindPort: 6443\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.3\n---\napiVersion: kubeadm.k8s.io/v1beta2\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.4:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.3\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nevictionHard:\n  imagefs.available: 0%\n  nodefs.available: 0%\n  nodefs.inodesFree: 0%\nimageGCHighThresholdPercent: 100\nkind: KubeletConfiguration\n---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
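Each "Configuration generated" / "Using kubeadm config" pair above is derived per node from one cluster definition. A minimal sketch of the equivalent three-node layout, written against the current kind.x-k8s.io/v1alpha4 config API (an assumption: this 2019 job used an older config version, but the node roles are the same):

    # kind-config.yaml: one control-plane and two workers, matching
    # kind-control-plane / kind-worker / kind-worker2 in the log.
    cat > kind-config.yaml <<'EOF'
    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    nodes:
    - role: control-plane
    - role: worker
    - role: worker
    EOF
    kind create cluster --config kind-config.yaml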
I0814 10:32:33.560] time="10:32:33" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-worker mkdir -p /kind]"
I0814 10:32:33.818] time="10:32:33" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -i kind-worker2 cp /dev/stdin /kind/kubeadm.conf]"
I0814 10:32:33.822] time="10:32:33" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -i kind-control-plane cp /dev/stdin /kind/kubeadm.conf]"
I0814 10:32:33.825] time="10:32:33" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -i kind-worker cp /dev/stdin /kind/kubeadm.conf]"
I0814 10:32:34.140]  ✓ Creating kubeadm config 📜
I0814 10:32:34.140]  • Starting control-plane 🕹️  ...
I0814 10:32:34.141] time="10:32:34" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-control-plane kubeadm init --ignore-preflight-errors=all --config=/kind/kubeadm.conf --skip-token-print --v=6]"
I0814 10:33:22.602] time="10:33:22" level=debug msg="I0814 10:32:34.534334      83 initconfiguration.go:186] loading configuration from \"/kind/kubeadm.conf\"\n[config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta2, Kind=JoinConfiguration\nI0814 10:32:34.543200      83 feature_gate.go:216] feature gates: &{map[]}\n[init] Using Kubernetes version: v1.17.0-alpha.0.92+cfffc111e0dbd6\n[preflight] Running pre-flight checks\nI0814 10:32:34.543548      83 checks.go:574] validating Kubernetes and kubeadm version\nI0814 10:32:34.543587      83 checks.go:169] validating if the firewall is enabled and active\nI0814 10:32:34.561240      83 checks.go:204] validating availability of port 6443\nI0814 10:32:34.561479      83 checks.go:204] validating availability of port 10251\nI0814 10:32:34.561512      83 checks.go:204] validating availability of port 10252\nI0814 10:32:34.561536      83 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml\nI0814 10:32:34.561548      83 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml\nI0814 10:32:34.561560      83 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml\nI0814 10:32:34.561566      83 checks.go:286] validating the existence of file /etc/kubernetes/manifests/etcd.yaml\nI0814 10:32:34.561574      83 checks.go:432] validating if the connectivity type is via proxy or direct\nI0814 10:32:34.562567      83 checks.go:468] validating http connectivity to first IP address in the CIDR\nI0814 10:32:34.562608      83 checks.go:468] validating http connectivity to first IP address in the CIDR\nI0814 10:32:34.562617      83 checks.go:105] validating the container runtime\nI0814 10:32:34.695845      83 checks.go:376] validating the presence of executable crictl\nI0814 10:32:34.695914      83 checks.go:335] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables\n\t[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist\nI0814 10:32:34.696003      83 checks.go:335] validating the contents of file /proc/sys/net/ipv4/ip_forward\nI0814 10:32:34.696106      83 checks.go:646] validating whether swap is enabled or not\nI0814 10:32:34.696162      83 checks.go:376] validating the presence of executable ip\nI0814 10:32:34.696275      83 checks.go:376] validating the presence of executable iptables\nI0814 10:32:34.696325      83 checks.go:376] validating the presence of executable mount\nI0814 10:32:34.696383      83 checks.go:376] validating the presence of executable nsenter\nI0814 10:32:34.696457      83 checks.go:376] validating the presence of executable ebtables\nI0814 10:32:34.696559      83 checks.go:376] validating the presence of executable ethtool\nI0814 10:32:34.696602      83 checks.go:376] validating the presence of executable socat\nI0814 10:32:34.696648      83 checks.go:376] validating the presence of executable tc\nI0814 10:32:34.696686      83 checks.go:376] validating the presence of executable touch\nI0814 10:32:34.696738      83 checks.go:517] running all checks\nI0814 10:32:34.703787      83 checks.go:406] checking whether the given node name is reachable using net.LookupHost\nI0814 10:32:34.706128      83 checks.go:615] validating kubelet version\nI0814 10:32:34.785214      83 checks.go:131] validating if the service is enabled and active\nI0814 10:32:34.797500      83 checks.go:204] validating 
availability of port 10250\nI0814 10:32:34.797821      83 checks.go:204] validating availability of port 2379\nI0814 10:32:34.797924      83 checks.go:204] validating availability of port 2380\nI0814 10:32:34.798107      83 checks.go:249] validating the existence and emptiness of directory /var/lib/etcd\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'\nI0814 10:32:34.811564      83 checks.go:835] image exists: k8s.gcr.io/kube-apiserver:v1.17.0-alpha.0.92_cfffc111e0dbd6\nI0814 10:32:34.821444      83 checks.go:835] image exists: k8s.gcr.io/kube-controller-manager:v1.17.0-alpha.0.92_cfffc111e0dbd6\nI0814 10:32:34.829429      83 checks.go:835] image exists: k8s.gcr.io/kube-scheduler:v1.17.0-alpha.0.92_cfffc111e0dbd6\nI0814 10:32:34.836989      83 checks.go:835] image exists: k8s.gcr.io/kube-proxy:v1.17.0-alpha.0.92_cfffc111e0dbd6\nI0814 10:32:34.844441      83 checks.go:841] pulling k8s.gcr.io/pause:3.1\nI0814 10:32:35.348422      83 checks.go:841] pulling k8s.gcr.io/etcd:3.3.10\nI0814 10:32:43.135169      83 checks.go:841] pulling k8s.gcr.io/coredns:1.3.1\nI0814 10:32:44.739648      83 kubelet.go:61] Stopping the kubelet\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\nI0814 10:32:44.768051      83 kubelet.go:79] Starting the kubelet\n[kubelet-start] Activating the kubelet service\n[certs] Using certificateDir folder \"/etc/kubernetes/pki\"\nI0814 10:32:44.843436      83 certs.go:104] creating a new certificate authority for ca\n[certs] Generating \"ca\" certificate and key\n[certs] Generating \"apiserver\" certificate and key\n[certs] apiserver serving cert is signed for DNS names [kind-control-plane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 172.17.0.4 172.17.0.4 127.0.0.1]\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\nI0814 10:32:45.650865      83 certs.go:104] creating a new certificate authority for front-proxy-ca\n[certs] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\nI0814 10:32:46.652796      83 certs.go:104] creating a new certificate authority for etcd-ca\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.17.0.4 127.0.0.1 ::1]\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.17.0.4 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"apiserver-etcd-client\" certificate and key\nI0814 10:32:48.309342      83 certs.go:70] creating a new public/private key files for signing service account users\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\nI0814 10:32:48.586207      83 kubeconfig.go:79] creating kubeconfig file for admin.conf\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\nI0814 10:32:48.925692      83 kubeconfig.go:79] creating kubeconfig file for kubelet.conf\n[kubeconfig] Writing 
\"kubelet.conf\" kubeconfig file\nI0814 10:32:49.206698      83 kubeconfig.go:79] creating kubeconfig file for controller-manager.conf\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\nI0814 10:32:49.351765      83 kubeconfig.go:79] creating kubeconfig file for scheduler.conf\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\nI0814 10:32:50.018951      83 manifests.go:108] [control-plane] getting StaticPodSpecs\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\nI0814 10:32:50.029148      83 manifests.go:133] [control-plane] wrote static Pod manifest for component \"kube-apiserver\" to \"/etc/kubernetes/manifests/kube-apiserver.yaml\"\nI0814 10:32:50.029179      83 manifests.go:108] [control-plane] getting StaticPodSpecs\nI0814 10:32:50.030497      83 manifests.go:133] [control-plane] wrote static Pod manifest for component \"kube-controller-manager\" to \"/etc/kubernetes/manifests/kube-controller-manager.yaml\"\nI0814 10:32:50.030532      83 manifests.go:108] [control-plane] getting StaticPodSpecs\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\nI0814 10:32:50.031601      83 manifests.go:133] [control-plane] wrote static Pod manifest for component \"kube-scheduler\" to \"/etc/kubernetes/manifests/kube-scheduler.yaml\"\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\nI0814 10:32:50.032472      83 local.go:69] [etcd] wrote Static Pod manifest for a local etcd member to \"/etc/kubernetes/manifests/etcd.yaml\"\nI0814 10:32:50.032498      83 waitcontrolplane.go:80] [wait-control-plane] Waiting for the API server to be healthy\nI0814 10:32:50.033648      83 loader.go:375] Config loaded from file:  /etc/kubernetes/admin.conf\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". 
This can take up to 4m0s\nI0814 10:32:50.046631      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 3 milliseconds\nI0814 10:32:50.547597      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:32:51.047314      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:32:51.547285      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:32:52.047281      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:32:52.547273      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:32:53.047410      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:32:53.547245      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:32:54.047248      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:32:54.547315      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:32:55.047406      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:32:55.547347      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:32:56.047336      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:32:56.547230      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:32:57.047281      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:32:57.547201      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:32:58.047305      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:32:58.547245      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:32:59.047239      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:32:59.547279      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:33:00.047224      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:33:00.547340      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:33:01.047270      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:33:01.547255      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:33:02.047242      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:33:02.547237      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:33:03.047268      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:33:03.547331      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:33:04.047260      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 
10:33:04.547281      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:33:05.047438      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:33:05.547304      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:33:06.047381      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:33:06.547467      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:33:07.047320      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:33:07.547245      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:33:08.047269      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:33:08.547307      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:33:09.047256      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:33:09.547267      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:33:10.047288      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:33:10.547323      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:33:11.047331      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:33:11.547232      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:33:12.047244      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:33:12.547500      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:33:13.047245      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0814 10:33:17.463535      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s 500 Internal Server Error in 3916 milliseconds\nI0814 10:33:17.551299      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s 500 Internal Server Error in 4 milliseconds\nI0814 10:33:18.049520      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s 500 Internal Server Error in 2 milliseconds\nI0814 10:33:18.548876      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s 500 Internal Server Error in 1 milliseconds\nI0814 10:33:19.049163      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s 500 Internal Server Error in 2 milliseconds\nI0814 10:33:19.548997      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s 500 Internal Server Error in 2 milliseconds\nI0814 10:33:20.048931      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s 500 Internal Server Error in 1 milliseconds\nI0814 10:33:20.549090      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s 500 Internal Server Error in 2 milliseconds\nI0814 10:33:21.049204      83 round_trippers.go:471] GET https://172.17.0.4:6443/healthz?timeout=32s 200 OK in 2 milliseconds\nI0814 10:33:21.049339      83 uploadconfig.go:108] [upload-config] Uploading the 
kubeadm ClusterConfiguration to a ConfigMap\n[apiclient] All control plane components are healthy after 31.008748 seconds\n[upload-config] Storing the configuration used in ConfigMap \"kubeadm-config\" in the \"kube-system\" Namespace\nI0814 10:33:21.054500      83 round_trippers.go:471] POST https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 3 milliseconds\nI0814 10:33:21.058764      83 round_trippers.go:471] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 201 Created in 3 milliseconds\nI0814 10:33:21.065429      83 round_trippers.go:471] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 201 Created in 3 milliseconds\nI0814 10:33:21.066109      83 uploadconfig.go:122] [upload-config] Uploading the kubelet component config to a ConfigMap\n[kubelet] Creating a ConfigMap \"kubelet-config-1.17\" in namespace kube-system with the configuration for the kubelets in the cluster\nI0814 10:33:21.069797      83 round_trippers.go:471] POST https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 2 milliseconds\nI0814 10:33:21.072335      83 round_trippers.go:471] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 201 Created in 2 milliseconds\nI0814 10:33:21.074685      83 round_trippers.go:471] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 201 Created in 2 milliseconds\nI0814 10:33:21.074780      83 uploadconfig.go:127] [upload-config] Preserving the CRISocket information for the control-plane node\nI0814 10:33:21.074791      83 patchnode.go:30] [patchnode] Uploading the CRI Socket information \"/run/containerd/containerd.sock\" to the Node API object \"kind-control-plane\" as an annotation\nI0814 10:33:21.578070      83 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-control-plane 200 OK in 2 milliseconds\nI0814 10:33:21.588145      83 round_trippers.go:471] PATCH https://172.17.0.4:6443/api/v1/nodes/kind-control-plane 200 OK in 3 milliseconds\n[upload-certs] Skipping phase. 
Please see --upload-certs\n[mark-control-plane] Marking the node kind-control-plane as control-plane by adding the label \"node-role.kubernetes.io/master=''\"\n[mark-control-plane] Marking the node kind-control-plane as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]\nI0814 10:33:22.090922      83 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-control-plane 200 OK in 2 milliseconds\nI0814 10:33:22.096148      83 round_trippers.go:471] PATCH https://172.17.0.4:6443/api/v1/nodes/kind-control-plane 200 OK in 4 milliseconds\n[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles\nI0814 10:33:22.098461      83 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/namespaces/kube-system/secrets/bootstrap-token-abcdef 404 Not Found in 1 milliseconds\nI0814 10:33:22.102089      83 round_trippers.go:471] POST https://172.17.0.4:6443/api/v1/namespaces/kube-system/secrets 201 Created in 2 milliseconds\n[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials\nI0814 10:33:22.105697      83 round_trippers.go:471] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 2 milliseconds\n[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token\nI0814 10:33:22.109029      83 round_trippers.go:471] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 2 milliseconds\n[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster\nI0814 10:33:22.111312      83 round_trippers.go:471] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 2 milliseconds\nI0814 10:33:22.111512      83 clusterinfo.go:45] [bootstrap-token] loading admin kubeconfig\n[bootstrap-token] Creating the \"cluster-info\" ConfigMap in the \"kube-public\" namespace\nI0814 10:33:22.112327      83 loader.go:375] Config loaded from file:  /etc/kubernetes/admin.conf\nI0814 10:33:22.112348      83 clusterinfo.go:53] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfig\nI0814 10:33:22.112851      83 clusterinfo.go:65] [bootstrap-token] creating/updating ConfigMap in kube-public namespace\nI0814 10:33:22.115588      83 round_trippers.go:471] POST https://172.17.0.4:6443/api/v1/namespaces/kube-public/configmaps 201 Created in 2 milliseconds\nI0814 10:33:22.115761      83 clusterinfo.go:79] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace\nI0814 10:33:22.118731      83 round_trippers.go:471] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles 201 Created in 2 milliseconds\nI0814 10:33:22.121186      83 round_trippers.go:471] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings 201 Created in 2 milliseconds\nI0814 10:33:22.123122      83 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps/kube-dns 404 Not Found in 1 milliseconds\nI0814 10:33:22.125334      83 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps/coredns 404 Not Found in 1 milliseconds\nI0814 10:33:22.127824      83 round_trippers.go:471] POST https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 2 
milliseconds\nI0814 10:33:22.131980      83 round_trippers.go:471] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/clusterroles 201 Created in 3 milliseconds\nI0814 10:33:22.135511      83 round_trippers.go:471] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 3 milliseconds\nI0814 10:33:22.139212      83 round_trippers.go:471] POST https://172.17.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts 201 Created in 2 milliseconds\nI0814 10:33:22.161501      83 round_trippers.go:471] POST https://172.17.0.4:6443/apis/apps/v1/namespaces/kube-system/deployments 201 Created in 12 milliseconds\nI0814 10:33:22.169773      83 round_trippers.go:471] POST https://172.17.0.4:6443/api/v1/namespaces/kube-system/services 201 Created in 5 milliseconds\n[addons] Applied essential addon: CoreDNS\nI0814 10:33:22.289014      83 request.go:538] Throttling request took 118.749193ms, request: POST:https://172.17.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts\nI0814 10:33:22.292649      83 round_trippers.go:471] POST https://172.17.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts 201 Created in 3 milliseconds\nI0814 10:33:22.488990      83 request.go:538] Throttling request took 193.285863ms, request: POST:https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps\nI0814 10:33:22.494423      83 round_trippers.go:471] POST https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 5 milliseconds\nI0814 10:33:22.509722      83 round_trippers.go:471] POST https://172.17.0.4:6443/apis/apps/v1/namespaces/kube-system/daemonsets 201 Created in 9 milliseconds\nI0814 10:33:22.513424      83 round_trippers.go:471] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 2 milliseconds\nI0814 10:33:22.516190      83 round_trippers.go:471] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 201 Created in 2 milliseconds\nI0814 10:33:22.518685      83 round_trippers.go:471] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 201 Created in 2 milliseconds\n[addons] Applied essential addon: kube-proxy\nI0814 10:33:22.519530      83 loader.go:375] Config loaded from file:  /etc/kubernetes/admin.conf\nI0814 10:33:22.520577      83 loader.go:375] Config loaded from file:  /etc/kubernetes/admin.conf\n\nYour Kubernetes control-plane has initialized successfully!\n\nTo start using your cluster, you need to run the following as a regular user:\n\n  mkdir -p $HOME/.kube\n  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\n  sudo chown $(id -u):$(id -g) $HOME/.kube/config\n\nYou should now deploy a pod network to the cluster.\nRun \"kubectl apply -f [podnetwork].yaml\" with one of the options listed at:\n  https://kubernetes.io/docs/concepts/cluster-administration/addons/\n\nYou can now join any number of control-plane nodes by copying certificate authorities \nand service account keys on each node and then running the following as root:\n\n  kubeadm join 172.17.0.4:6443 --token <value withheld> \\\n    --discovery-token-ca-cert-hash sha256:05126efea01bef2fe1df02f21dee468851da3bfa4d037a090d7b627e0d7e647e \\\n    --control-plane \t  \n\nThen you can join any number of worker nodes by running the following on each as root:\n\nkubeadm join 172.17.0.4:6443 --token <value withheld> \\\n    --discovery-token-ca-cert-hash sha256:05126efea01bef2fe1df02f21dee468851da3bfa4d037a090d7b627e0d7e647e "
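The long run of GET /healthz requests above is kubeadm polling the API server until the static pods come up: connection failures first, then 500s while etcd and the controllers settle, then 200 OK after roughly 31 seconds. The same probe can be run by hand once the admin kubeconfig is copied off the node (a sketch; assumes the API server address in admin.conf, 172.17.0.4:6443 here, is reachable from where you run it):

    # Copy the admin kubeconfig out of the control-plane container, then probe /healthz.
    docker exec kind-control-plane cat /etc/kubernetes/admin.conf > admin.conf
    kubectl --kubeconfig=admin.conf get --raw='/healthz'              # prints "ok" once healthy
    kubectl --kubeconfig=admin.conf get --raw='/healthz?verbose=true' # per-check detail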
I0814 10:33:22.602] time="10:33:22" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{(index (index .NetworkSettings.Ports \"6443/tcp\") 0).HostPort}} kind-control-plane]"
I0814 10:33:22.648] time="10:33:22" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-control-plane cat /etc/kubernetes/admin.conf]"
I0814 10:33:22.864]  ✓ Starting control-plane 🕹️
I0814 10:33:22.864]  • Installing CNI 🔌  ...
I0814 10:33:22.864] time="10:33:22" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-control-plane cat /kind/manifests/default-cni.yaml]"
I0814 10:33:23.064] time="10:33:23" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -i kind-control-plane kubectl create --kubeconfig=/etc/kubernetes/admin.conf -f -]"
I0814 10:33:23.878]  ✓ Installing CNI 🔌
I0814 10:33:23.878]  • Installing StorageClass 💾  ...
I0814 10:33:23.879] time="10:33:23" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -i kind-control-plane kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f -]"
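Both addon installs above follow one pattern: a manifest is read from the node image and piped into kubectl inside the control-plane container. Written out as a single pipeline (equivalent to the two exec calls in the CNI step):

    # Read the bundled CNI manifest and feed it to kubectl on the node, as kind does above.
    docker exec --privileged kind-control-plane cat /kind/manifests/default-cni.yaml \
      | docker exec --privileged -i kind-control-plane \
          kubectl create --kubeconfig=/etc/kubernetes/admin.conf -f -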
I0814 10:33:24.301]  ✓ Installing StorageClass 💾
I0814 10:33:24.301]  • Joining worker nodes 🚜  ...
I0814 10:33:24.302] time="10:33:24" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-worker2 kubeadm join --config /kind/kubeadm.conf --ignore-preflight-errors=all --v=6]"
I0814 10:33:24.302] time="10:33:24" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-worker kubeadm join --config /kind/kubeadm.conf --ignore-preflight-errors=all --v=6]"
I0814 10:33:53.408] time="10:33:53" level=debug msg="I0814 10:33:24.523289     515 join.go:360] [preflight] found NodeName empty; using OS hostname as NodeName\nI0814 10:33:24.523339     515 joinconfiguration.go:75] loading configuration from \"/kind/kubeadm.conf\"\n[preflight] Running pre-flight checks\nI0814 10:33:24.525309     515 preflight.go:90] [preflight] Running general checks\nI0814 10:33:24.525450     515 checks.go:249] validating the existence and emptiness of directory /etc/kubernetes/manifests\nI0814 10:33:24.525472     515 checks.go:286] validating the existence of file /etc/kubernetes/kubelet.conf\nI0814 10:33:24.525482     515 checks.go:286] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf\nI0814 10:33:24.525491     515 checks.go:105] validating the container runtime\nI0814 10:33:24.535474     515 checks.go:376] validating the presence of executable crictl\nI0814 10:33:24.535540     515 checks.go:335] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables\n\t[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist\nI0814 10:33:24.535758     515 checks.go:335] validating the contents of file /proc/sys/net/ipv4/ip_forward\nI0814 10:33:24.535846     515 checks.go:646] validating whether swap is enabled or not\nI0814 10:33:24.535994     515 checks.go:376] validating the presence of executable ip\nI0814 10:33:24.536160     515 checks.go:376] validating the presence of executable iptables\nI0814 10:33:24.536211     515 checks.go:376] validating the presence of executable mount\nI0814 10:33:24.536260     515 checks.go:376] validating the presence of executable nsenter\nI0814 10:33:24.536320     515 checks.go:376] validating the presence of executable ebtables\nI0814 10:33:24.536416     515 checks.go:376] validating the presence of executable ethtool\nI0814 10:33:24.536470     515 checks.go:376] validating the presence of executable socat\nI0814 10:33:24.536525     515 checks.go:376] validating the presence of executable tc\nI0814 10:33:24.536634     515 checks.go:376] validating the presence of executable touch\nI0814 10:33:24.536743     515 checks.go:517] running all checks\nI0814 10:33:24.542257     515 checks.go:406] checking whether the given node name is reachable using net.LookupHost\nI0814 10:33:24.542537     515 checks.go:615] validating kubelet version\nI0814 10:33:24.614935     515 checks.go:131] validating if the service is enabled and active\nI0814 10:33:24.628103     515 checks.go:204] validating availability of port 10250\nI0814 10:33:24.628312     515 checks.go:286] validating the existence of file /etc/kubernetes/pki/ca.crt\nI0814 10:33:24.628333     515 checks.go:432] validating if the connectivity type is via proxy or direct\nI0814 10:33:24.628509     515 join.go:429] [preflight] Discovering cluster-info\nI0814 10:33:24.628647     515 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.4:6443\"\nI0814 10:33:24.629186     515 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.4:6443\"\nI0814 10:33:24.639067     515 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 9 milliseconds\nI0814 10:33:24.640077     515 token.go:202] [discovery] Failed to connect to API Server \"172.17.0.4:6443\": token id \"abcdef\" is invalid for this cluster or it has expired. 
Use \"kubeadm token create\" on the control-plane node to create a new valid token\nI0814 10:33:29.640623     515 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.4:6443\"\nI0814 10:33:29.641287     515 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.4:6443\"\nI0814 10:33:29.644261     515 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 2 milliseconds\nI0814 10:33:29.644699     515 token.go:202] [discovery] Failed to connect to API Server \"172.17.0.4:6443\": token id \"abcdef\" is invalid for this cluster or it has expired. Use \"kubeadm token create\" on the control-plane node to create a new valid token\nI0814 10:33:34.644966     515 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.4:6443\"\nI0814 10:33:34.645988     515 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.4:6443\"\nI0814 10:33:34.648929     515 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 2 milliseconds\nI0814 10:33:34.649287     515 token.go:202] [discovery] Failed to connect to API Server \"172.17.0.4:6443\": token id \"abcdef\" is invalid for this cluster or it has expired. Use \"kubeadm token create\" on the control-plane node to create a new valid token\nI0814 10:33:39.649525     515 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.4:6443\"\nI0814 10:33:39.650086     515 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.4:6443\"\nI0814 10:33:39.653039     515 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 2 milliseconds\nI0814 10:33:39.654189     515 token.go:109] [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server \"172.17.0.4:6443\"\nI0814 10:33:39.654216     515 token.go:205] [discovery] Successfully established connection with API Server \"172.17.0.4:6443\"\nI0814 10:33:39.654259     515 discovery.go:51] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process\nI0814 10:33:39.654291     515 join.go:443] [preflight] Fetching init configuration\nI0814 10:33:39.654310     515 join.go:476] [preflight] Retrieving KubeConfig objects\n[preflight] Reading configuration from the cluster...\n[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'\nI0814 10:33:39.663223     515 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 200 OK in 8 milliseconds\nI0814 10:33:39.666455     515 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps/kube-proxy 200 OK in 1 milliseconds\nI0814 10:33:39.669505     515 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.17 200 OK in 1 milliseconds\nI0814 10:33:39.671032     515 interface.go:384] Looking for default routes with IPv4 addresses\nI0814 10:33:39.671053     515 interface.go:389] Default route transits interface \"eth0\"\nI0814 10:33:39.671127     515 interface.go:196] Interface eth0 is up\nI0814 10:33:39.671160     515 interface.go:244] Interface \"eth0\" has 1 addresses :[172.17.0.3/16].\nI0814 10:33:39.671179     515 interface.go:211] Checking addr  
172.17.0.3/16.\nI0814 10:33:39.671188     515 interface.go:218] IP found 172.17.0.3\nI0814 10:33:39.671197     515 interface.go:250] Found valid IPv4 address 172.17.0.3 for interface \"eth0\".\nI0814 10:33:39.671251     515 interface.go:395] Found active IP 172.17.0.3 \nI0814 10:33:39.671454     515 preflight.go:101] [preflight] Running configuration dependant checks\nI0814 10:33:39.671544     515 controlplaneprepare.go:213] [download-certs] Skipping certs download\nI0814 10:33:39.671555     515 kubelet.go:107] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf\nI0814 10:33:39.673334     515 kubelet.go:115] [kubelet-start] writing CA certificate at /etc/kubernetes/pki/ca.crt\nI0814 10:33:39.674491     515 loader.go:375] Config loaded from file:  /etc/kubernetes/bootstrap-kubelet.conf\nI0814 10:33:39.675388     515 kubelet.go:133] [kubelet-start] Stopping the kubelet\n[kubelet-start] Downloading configuration for the kubelet from the \"kubelet-config-1.17\" ConfigMap in the kube-system namespace\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\nI0814 10:33:39.692107     515 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.17 200 OK in 2 milliseconds\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\nI0814 10:33:39.704304     515 kubelet.go:150] [kubelet-start] Starting the kubelet\n[kubelet-start] Activating the kubelet service\n[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...\nI0814 10:33:40.805866     515 loader.go:375] Config loaded from file:  /etc/kubernetes/kubelet.conf\nI0814 10:33:40.819959     515 loader.go:375] Config loaded from file:  /etc/kubernetes/kubelet.conf\nI0814 10:33:40.821995     515 kubelet.go:168] [kubelet-start] preserving the crisocket information for the node\nI0814 10:33:40.822029     515 patchnode.go:30] [patchnode] Uploading the CRI Socket information \"/run/containerd/containerd.sock\" to the Node API object \"kind-worker\" as an annotation\nI0814 10:33:41.330801     515 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 8 milliseconds\nI0814 10:33:41.825217     515 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0814 10:33:42.325511     515 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds\nI0814 10:33:42.824948     515 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0814 10:33:43.325743     515 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds\nI0814 10:33:43.824979     515 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0814 10:33:44.325010     515 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0814 10:33:44.825737     515 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds\nI0814 10:33:45.324892     515 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0814 10:33:45.824948     515 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0814 10:33:46.325890     515 
round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds\nI0814 10:33:46.825327     515 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0814 10:33:47.325234     515 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0814 10:33:47.825082     515 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0814 10:33:48.324757     515 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0814 10:33:48.825302     515 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0814 10:33:49.325224     515 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0814 10:33:49.825136     515 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0814 10:33:50.325728     515 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds\nI0814 10:33:50.824559     515 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0814 10:33:51.325416     515 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0814 10:33:51.825890     515 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds\nI0814 10:33:52.325695     515 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds\nI0814 10:33:52.825204     515 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0814 10:33:53.325395     515 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 200 OK in 2 milliseconds\nI0814 10:33:53.332637     515 round_trippers.go:471] PATCH https://172.17.0.4:6443/api/v1/nodes/kind-worker 200 OK in 4 milliseconds\n\nThis node has joined the cluster:\n* Certificate signing request was sent to apiserver and a response was received.\n* The Kubelet was informed of the new secure connection details.\n\nRun 'kubectl get nodes' on the control-plane to see this node join the cluster.\n"
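The repeated "token id \"abcdef\" is invalid" messages above are a normal startup race, not the failure: init ran with --skip-token-print, and discovery only succeeds once the bootstrap-signer controller has signed the cluster-info ConfigMap with the new token (about 15 seconds here, 10:33:24 to 10:33:39). If a join never converges, the standard kubeadm commands to inspect or mint tokens on the control plane are:

    # List existing bootstrap tokens, or mint a fresh token plus a ready-made join command.
    docker exec kind-control-plane kubeadm token list
    docker exec kind-control-plane kubeadm token create --print-join-command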
I0814 10:33:53.902] time="10:33:53" level=debug msg="I0814 10:33:24.524380     529 join.go:360] [preflight] found NodeName empty; using OS hostname as NodeName\nI0814 10:33:24.524418     529 joinconfiguration.go:75] loading configuration from \"/kind/kubeadm.conf\"\nI0814 10:33:24.525912     529 preflight.go:90] [preflight] Running general checks\n[preflight] Running pre-flight checks\nI0814 10:33:24.526000     529 checks.go:249] validating the existence and emptiness of directory /etc/kubernetes/manifests\nI0814 10:33:24.526017     529 checks.go:286] validating the existence of file /etc/kubernetes/kubelet.conf\nI0814 10:33:24.526026     529 checks.go:286] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf\nI0814 10:33:24.526034     529 checks.go:105] validating the container runtime\nI0814 10:33:24.535500     529 checks.go:376] validating the presence of executable crictl\nI0814 10:33:24.535553     529 checks.go:335] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables\n\t[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist\nI0814 10:33:24.535628     529 checks.go:335] validating the contents of file /proc/sys/net/ipv4/ip_forward\nI0814 10:33:24.535692     529 checks.go:646] validating whether swap is enabled or not\nI0814 10:33:24.535762     529 checks.go:376] validating the presence of executable ip\nI0814 10:33:24.535892     529 checks.go:376] validating the presence of executable iptables\nI0814 10:33:24.536004     529 checks.go:376] validating the presence of executable mount\nI0814 10:33:24.536050     529 checks.go:376] validating the presence of executable nsenter\nI0814 10:33:24.536114     529 checks.go:376] validating the presence of executable ebtables\nI0814 10:33:24.536238     529 checks.go:376] validating the presence of executable ethtool\nI0814 10:33:24.536306     529 checks.go:376] validating the presence of executable socat\nI0814 10:33:24.536377     529 checks.go:376] validating the presence of executable tc\nI0814 10:33:24.536466     529 checks.go:376] validating the presence of executable touch\nI0814 10:33:24.536561     529 checks.go:517] running all checks\nI0814 10:33:24.542834     529 checks.go:406] checking whether the given node name is reachable using net.LookupHost\nI0814 10:33:24.543173     529 checks.go:615] validating kubelet version\nI0814 10:33:24.617869     529 checks.go:131] validating if the service is enabled and active\nI0814 10:33:24.630296     529 checks.go:204] validating availability of port 10250\nI0814 10:33:24.630614     529 checks.go:286] validating the existence of file /etc/kubernetes/pki/ca.crt\nI0814 10:33:24.630638     529 checks.go:432] validating if the connectivity type is via proxy or direct\nI0814 10:33:24.630680     529 join.go:429] [preflight] Discovering cluster-info\nI0814 10:33:24.630755     529 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.4:6443\"\nI0814 10:33:24.631452     529 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.4:6443\"\nI0814 10:33:24.641345     529 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 9 milliseconds\nI0814 10:33:24.642436     529 token.go:202] [discovery] Failed to connect to API Server \"172.17.0.4:6443\": token id \"abcdef\" is invalid for this cluster or it has expired. 
Use \"kubeadm token create\" on the control-plane node to create a new valid token\nI0814 10:33:29.642637     529 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.4:6443\"\nI0814 10:33:29.643178     529 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.4:6443\"\nI0814 10:33:29.645372     529 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 2 milliseconds\nI0814 10:33:29.645541     529 token.go:202] [discovery] Failed to connect to API Server \"172.17.0.4:6443\": token id \"abcdef\" is invalid for this cluster or it has expired. Use \"kubeadm token create\" on the control-plane node to create a new valid token\nI0814 10:33:34.645878     529 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.4:6443\"\nI0814 10:33:34.646633     529 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.4:6443\"\nI0814 10:33:34.650390     529 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 3 milliseconds\nI0814 10:33:34.650556     529 token.go:202] [discovery] Failed to connect to API Server \"172.17.0.4:6443\": token id \"abcdef\" is invalid for this cluster or it has expired. Use \"kubeadm token create\" on the control-plane node to create a new valid token\nI0814 10:33:39.650736     529 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.4:6443\"\nI0814 10:33:39.651404     529 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.4:6443\"\nI0814 10:33:39.653311     529 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 1 milliseconds\nI0814 10:33:39.654571     529 token.go:109] [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server \"172.17.0.4:6443\"\nI0814 10:33:39.654596     529 token.go:205] [discovery] Successfully established connection with API Server \"172.17.0.4:6443\"\nI0814 10:33:39.654633     529 discovery.go:51] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process\nI0814 10:33:39.654646     529 join.go:443] [preflight] Fetching init configuration\nI0814 10:33:39.654659     529 join.go:476] [preflight] Retrieving KubeConfig objects\n[preflight] Reading configuration from the cluster...\n[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'\nI0814 10:33:39.661657     529 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 200 OK in 6 milliseconds\nI0814 10:33:39.664804     529 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps/kube-proxy 200 OK in 1 milliseconds\nI0814 10:33:39.667389     529 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.17 200 OK in 1 milliseconds\nI0814 10:33:39.668937     529 interface.go:384] Looking for default routes with IPv4 addresses\nI0814 10:33:39.668955     529 interface.go:389] Default route transits interface \"eth0\"\nI0814 10:33:39.669049     529 interface.go:196] Interface eth0 is up\nI0814 10:33:39.669137     529 interface.go:244] Interface \"eth0\" has 1 addresses :[172.17.0.2/16].\nI0814 10:33:39.669192     529 interface.go:211] Checking addr  
172.17.0.2/16.\nI0814 10:33:39.669202     529 interface.go:218] IP found 172.17.0.2\nI0814 10:33:39.669210     529 interface.go:250] Found valid IPv4 address 172.17.0.2 for interface \"eth0\".\nI0814 10:33:39.669215     529 interface.go:395] Found active IP 172.17.0.2 \nI0814 10:33:39.669294     529 preflight.go:101] [preflight] Running configuration dependant checks\nI0814 10:33:39.669306     529 controlplaneprepare.go:213] [download-certs] Skipping certs download\nI0814 10:33:39.669315     529 kubelet.go:107] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf\nI0814 10:33:39.670582     529 kubelet.go:115] [kubelet-start] writing CA certificate at /etc/kubernetes/pki/ca.crt\nI0814 10:33:39.672232     529 loader.go:375] Config loaded from file:  /etc/kubernetes/bootstrap-kubelet.conf\nI0814 10:33:39.673812     529 kubelet.go:133] [kubelet-start] Stopping the kubelet\n[kubelet-start] Downloading configuration for the kubelet from the \"kubelet-config-1.17\" ConfigMap in the kube-system namespace\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\nI0814 10:33:39.692308     529 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.17 200 OK in 1 milliseconds\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\nI0814 10:33:39.704640     529 kubelet.go:150] [kubelet-start] Starting the kubelet\n[kubelet-start] Activating the kubelet service\n[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...\nI0814 10:33:40.805865     529 loader.go:375] Config loaded from file:  /etc/kubernetes/kubelet.conf\nI0814 10:33:40.821441     529 loader.go:375] Config loaded from file:  /etc/kubernetes/kubelet.conf\nI0814 10:33:40.823548     529 kubelet.go:168] [kubelet-start] preserving the crisocket information for the node\nI0814 10:33:40.823588     529 patchnode.go:30] [patchnode] Uploading the CRI Socket information \"/run/containerd/containerd.sock\" to the Node API object \"kind-worker2\" as an annotation\nI0814 10:33:41.331500     529 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 7 milliseconds\nI0814 10:33:41.825679     529 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 1 milliseconds\nI0814 10:33:42.326031     529 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0814 10:33:42.826033     529 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0814 10:33:43.325971     529 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0814 10:33:43.825587     529 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 1 milliseconds\nI0814 10:33:44.325698     529 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 1 milliseconds\nI0814 10:33:44.825730     529 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 1 milliseconds\nI0814 10:33:45.325571     529 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 1 milliseconds\nI0814 10:33:45.825444     529 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 1 milliseconds\nI0814 10:33:46.325764    
 529 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 1 milliseconds\nI0814 10:33:46.825773     529 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 1 milliseconds\nI0814 10:33:47.327284     529 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds\nI0814 10:33:47.826476     529 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0814 10:33:48.326030     529 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0814 10:33:48.825613     529 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 1 milliseconds\nI0814 10:33:49.325750     529 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 1 milliseconds\nI0814 10:33:49.826681     529 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0814 10:33:50.326262     529 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0814 10:33:50.825498     529 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 1 milliseconds\nI0814 10:33:51.325937     529 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0814 10:33:51.826689     529 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0814 10:33:52.325887     529 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0814 10:33:52.826124     529 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0814 10:33:53.325693     529 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 1 milliseconds\nI0814 10:33:53.827059     529 round_trippers.go:471] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 200 OK in 3 milliseconds\nI0814 10:33:53.832929     529 round_trippers.go:471] PATCH https://172.17.0.4:6443/api/v1/nodes/kind-worker2 200 OK in 3 milliseconds\n\nThis node has joined the cluster:\n* Certificate signing request was sent to apiserver and a response was received.\n* The Kubelet was informed of the new secure connection details.\n\nRun 'kubectl get nodes' on the control-plane to see this node join the cluster.\n"
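[editor's note] The worker joins above show two polling patterns: kubeadm retries cluster-info discovery every 5 s until the bootstrap token becomes valid on the control plane, then polls the Node object every 500 ms (404 Not Found until the kubelet registers) before PATCHing the CRI socket annotation onto it. A minimal Python sketch of the second poll-then-patch loop follows, assuming direct API-server access; the bearer token, timeout values, and annotation handling here are illustrative stand-ins, not kubeadm's actual code.

    # Sketch: poll GET /api/v1/nodes/<node> until it exists, then PATCH the
    # CRI socket annotation, mirroring the 404 -> 200 -> PATCH sequence above.
    import json
    import time

    import requests

    APISERVER = "https://172.17.0.4:6443"          # endpoint from the log
    NODE = "kind-worker2"                          # node being joined above
    HEADERS = {"Authorization": "Bearer <token>"}  # placeholder credentials

    def wait_for_node(timeout=120.0, interval=0.5):
        """Poll until the Node object returns 200 or the deadline passes."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            resp = requests.get("%s/api/v1/nodes/%s" % (APISERVER, NODE),
                                headers=HEADERS, verify=False)
            if resp.status_code == 200:
                return resp.json()
            time.sleep(interval)  # 404: the kubelet has not registered yet
        raise RuntimeError("node %s never appeared" % NODE)

    def patch_cri_socket():
        """Strategic-merge PATCH adding the (assumed) CRI socket annotation."""
        wait_for_node()
        body = {"metadata": {"annotations": {
            "kubeadm.alpha.kubernetes.io/cri-socket":
                "/run/containerd/containerd.sock"}}}
        headers = dict(HEADERS)
        headers["Content-Type"] = "application/strategic-merge-patch+json"
        resp = requests.patch("%s/api/v1/nodes/%s" % (APISERVER, NODE),
                              headers=headers, data=json.dumps(body),
                              verify=False)
        resp.raise_for_status()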
I0814 10:33:53.903]  ✓ Joining worker nodes 🚜
I0814 10:33:53.903]  • Waiting ≤ 1m0s for control-plane = Ready ⏳  ...
I0814 10:33:53.903] time="10:33:53" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-control-plane kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes --selector=node-role.kubernetes.io/master -o=jsonpath='{.items..status.conditions[-1:].status}']"
I0814 10:33:54.193]  ✓ Waiting ≤ 1m0s for control-plane = Ready ⏳
I0814 10:33:54.194]  • Ready after 0s 💚
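[editor's note] The readiness wait above is a retry loop around the kubectl command shown at 10:33:53: the jsonpath expression extracts the last status condition (Ready) of every node matching the master role selector, and kind retries until it prints "True". A Python sketch of the same loop, assuming the kind node container exists and is reachable via docker exec; the timeout and interval are illustrative:

    # Re-run kind's readiness probe until every master node reports Ready.
    import subprocess
    import time

    CMD = ["docker", "exec", "--privileged", "kind-control-plane",
           "kubectl", "--kubeconfig=/etc/kubernetes/admin.conf", "get", "nodes",
           "--selector=node-role.kubernetes.io/master",
           "-o=jsonpath='{.items..status.conditions[-1:].status}'"]

    def wait_control_plane_ready(timeout=60.0, interval=2.0):
        deadline = time.time() + timeout
        while time.time() < deadline:
            out = subprocess.check_output(CMD).decode()
            statuses = out.replace("'", "").split()
            if statuses and all(s == "True" for s in statuses):
                return  # every master node's last condition is Ready=True
            time.sleep(interval)
        raise RuntimeError("control-plane not Ready within %ss" % timeout)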
I0814 10:33:54.194] Cluster creation complete. You can now use the cluster with:
... skipping 335 lines ...
I0814 10:53:04.580] [10:53:04] Pod status is: Running
I0814 10:53:09.667] [10:53:09] Pod status is: Running
I0814 10:53:14.757] [10:53:14] Pod status is: Running
I0814 10:53:19.848] [10:53:19] Pod status is: Running
I0814 10:53:24.941] [10:53:24] Pod status is: Running
I0814 10:53:30.034] [10:53:30] Pod status is: Pending
W0814 10:53:35.124] Error from server (NotFound): pods "e2e-conformance-test" not found
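[editor's note] The loop above polls the test pod's status roughly every 5 s; once the pod is gone, kubectl exits non-zero with the NotFound error shown. A hedged Python sketch of such a poll, treating NotFound as the terminal state; the pod name is taken from the log, but the handling is an assumption, not the harness's actual code:

    # Poll the pod phase via kubectl; stop when the pod no longer exists.
    import subprocess
    import time

    def pod_phase(name="e2e-conformance-test"):
        try:
            out = subprocess.check_output(
                ["kubectl", "get", "pod", name, "-o", "jsonpath={.status.phase}"],
                stderr=subprocess.STDOUT)
            return out.decode().strip()          # e.g. "Running", "Pending"
        except subprocess.CalledProcessError as err:
            if b"NotFound" in err.output:
                return None                      # pod deleted or never created
            raise

    while True:
        phase = pod_phase()
        if phase is None:
            break                                # mirrors the NotFound error above
        print("[%s] Pod status is: %s" % (time.strftime("%H:%M:%S"), phase))
        time.sleep(5)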
W0814 10:53:35.128] + cleanup
W0814 10:53:35.128] + kind export logs /workspace/_artifacts/logs
I0814 10:53:37.245] Exported logs to: /workspace/_artifacts/logs
I0814 10:53:37.339] Deleting cluster "kind" ...
I0814 10:53:37.395] $KUBECONFIG is still set to use /root/.kube/kind-config-kind even though that file has been deleted, remember to unset it
W0814 10:53:37.495] + [[ true = true ]]
... skipping 8 lines ...
W0814 10:53:41.946]     check(*cmd)
W0814 10:53:41.946]   File "/workspace/./test-infra/jenkins/../scenarios/execute.py", line 30, in check
W0814 10:53:41.946]     subprocess.check_call(cmd)
W0814 10:53:41.946]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0814 10:53:41.952]     raise CalledProcessError(retcode, cmd)
W0814 10:53:41.952] subprocess.CalledProcessError: Command '('bash', '-c', 'cd ./../../k8s.io/kubernetes && source ./../test-infra/experiment/kind-conformance-image-e2e.sh')' returned non-zero exit status 1
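[editor's note] The traceback above shows how the failure propagates: the runner's check() helper calls subprocess.check_call, which raises CalledProcessError whenever the wrapped command exits non-zero, and the runner then logs "Command failed". A self-contained reproduction; the failing command here is an illustrative stand-in for the conformance script:

    # subprocess.check_call raises CalledProcessError on any non-zero exit.
    import subprocess

    def check(*cmd):
        """Minimal mirror of check() in scenarios/execute.py."""
        print(" ".join(cmd))
        subprocess.check_call(cmd)

    try:
        check("bash", "-c", "exit 1")   # stand-in for the failing e2e script
    except subprocess.CalledProcessError as err:
        print("Command failed: %s returned %s" % (err.cmd, err.returncode))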
E0814 10:53:41.954] Command failed
I0814 10:53:41.955] process 689 exited with code 1 after 25.0m
E0814 10:53:41.955] FAIL: pull-kubernetes-conformance-image-test
I0814 10:53:41.955] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0814 10:53:42.476] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0814 10:53:42.519] process 85393 exited with code 0 after 0.0m
I0814 10:53:42.519] Call:  gcloud config get-value account
I0814 10:53:42.773] process 85405 exited with code 0 after 0.0m
I0814 10:53:42.773] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0814 10:53:42.773] Upload result and artifacts...
I0814 10:53:42.773] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/76443/pull-kubernetes-conformance-image-test/1161585115072040960
I0814 10:53:42.774] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/76443/pull-kubernetes-conformance-image-test/1161585115072040960/artifacts
W0814 10:53:43.720] CommandException: One or more URLs matched no objects.
E0814 10:53:43.823] Command failed
I0814 10:53:43.824] process 85417 exited with code 1 after 0.0m
W0814 10:53:43.824] Remote dir gs://kubernetes-jenkins/pr-logs/pull/76443/pull-kubernetes-conformance-image-test/1161585115072040960/artifacts not exist yet
I0814 10:53:43.824] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/76443/pull-kubernetes-conformance-image-test/1161585115072040960/artifacts
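[editor's note] In the upload command above, -m parallelizes the copy, -q suppresses per-object output, -r recurses into the artifacts directory, and -z gzip-encodes files with the listed extensions (log, txt, xml). A sketch wrapping the same invocation; the flags and destination are taken verbatim from the log, while the helper itself is illustrative:

    # Upload a local artifacts tree to GCS the way the runner does above.
    import subprocess

    def upload_artifacts(local_dir, gcs_dir):
        subprocess.check_call([
            "gsutil", "-m", "-q", "-o", "GSUtil:use_magicfile=True",
            "cp", "-r", "-c", "-z", "log,txt,xml", local_dir, gcs_dir])

    upload_artifacts(
        "/workspace/_artifacts",
        "gs://kubernetes-jenkins/pr-logs/pull/76443/"
        "pull-kubernetes-conformance-image-test/1161585115072040960/artifacts")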
I0814 10:53:45.774] process 85559 exited with code 0 after 0.0m
W0814 10:53:45.774] metadata path /workspace/_artifacts/metadata.json does not exist
W0814 10:53:45.774] metadata not found or invalid, init with empty metadata
... skipping 23 lines ...