PR: mgdevstack: Promote e2e "verifying service's sessionAffinity for ClusterIP and NodePort services" to Conformance
Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2019-08-22 21:14
Elapsed: 1h30m
Revision:
Builder: gke-prow-ssd-pool-1a225945-66t6
pod: 99b327a0-c521-11e9-a573-2a75373fcbd2
infra-commit: 1c5f719f8
repo: k8s.io/test-infra
repo-commit: 1c5f719f89e6cbbfb38b93582edc7fe3e3b81fcc
repos: {u'k8s.io/kubernetes': u'master:37651f1cef5ec5c7286ba409ee0fe298c4605d66,76443:fc84ff19464f8fb45653d491acb2e10db0dbacf9', u'k8s.io/test-infra': u'master'}
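The repos field is the checkout spec for this run: each entry is a base ref pinned to a SHA, optionally followed by a PR number and its head SHA. A minimal sketch of the equivalent checkout for k8s.io/kubernetes, assuming the usual behavior of merging the PR head onto the pinned base (the clone URL and merge flags here are illustrative, not taken from the log):

# Sketch of the checkout encoded by the repos field above (assumed semantics:
# master at a pinned SHA, with PR 76443's head merged on top).
git clone https://github.com/kubernetes/kubernetes && cd kubernetes
git checkout 37651f1cef5ec5c7286ba409ee0fe298c4605d66        # master @ pinned SHA
git fetch origin pull/76443/head                             # PR 76443
git merge --no-ff fc84ff19464f8fb45653d491acb2e10db0dbacf9   # PR head SHA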

No Test Failures!


Error lines from build-log.txt

... skipping 664 lines ...
I0822 21:21:04.888] time="21:21:04" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-control-plane cat /kind/version]"
I0822 21:21:05.225] time="21:21:05" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kind-control-plane]"
I0822 21:21:05.320] time="21:21:05" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kind-worker2]"
I0822 21:21:05.321] time="21:21:05" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kind-control-plane]"
I0822 21:21:05.321] time="21:21:05" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kind-worker]"
I0822 21:21:05.407] time="21:21:05" level=debug msg="Configuration Input data: {kind v1.17.0-alpha.0.464+eb8301c8288564 172.17.0.4:6443 6443 127.0.0.1 false 172.17.0.2 abcdef.0123456789abcdef 10.244.0.0/16 10.96.0.0/12 false {}}"
I0822 21:21:05.412] time="21:21:05" level=debug msg="Configuration generated:\n # config generated by kind\napiVersion: kubeadm.k8s.io/v1beta2\nkind: ClusterConfiguration\nmetadata:\n  name: config\nkubernetesVersion: v1.17.0-alpha.0.464+eb8301c8288564\nclusterName: \"kind\"\ncontrolPlaneEndpoint: \"172.17.0.4:6443\"\n# on docker for mac we have to expose the api server via port forward,\n# so we need to ensure the cert is valid for localhost so we can talk\n# to the cluster after rewriting the kubeconfig to point to localhost\napiServer:\n  certSANs: [localhost, \"127.0.0.1\"]\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\n    # configure ipv6 default addresses for IPv6 clusters\n    \nscheduler:\n  extraArgs:\n    # configure ipv6 default addresses for IPv6 clusters\n    \nnetworking:\n  podSubnet: \"10.244.0.0/16\"\n  serviceSubnet: \"10.96.0.0/12\"\n---\napiVersion: kubeadm.k8s.io/v1beta2\nkind: InitConfiguration\nmetadata:\n  name: config\n# we use a well know token for TLS bootstrap\nbootstrapTokens:\n- token: \"abcdef.0123456789abcdef\"\n# we use a well know port for making the API server discoverable inside docker network. \n# from the host machine such port will be accessible via a random local port instead.\nlocalAPIEndpoint:\n  advertiseAddress: \"172.17.0.2\"\n  bindPort: 6443\nnodeRegistration:\n  criSocket: \"/run/containerd/containerd.sock\"\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: \"172.17.0.2\"\n---\n# no-op entry that exists solely so it can be patched\napiVersion: kubeadm.k8s.io/v1beta2\nkind: JoinConfiguration\nmetadata:\n  name: config\n\nnodeRegistration:\n  criSocket: \"/run/containerd/containerd.sock\"\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: \"172.17.0.2\"\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: \"172.17.0.4:6443\"\n    token: \"abcdef.0123456789abcdef\"\n    unsafeSkipCAVerification: true\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nmetadata:\n  name: config\n# configure ipv6 addresses in IPv6 mode\n\n# disable disk resource management by default\n# kubelet will see the host disk that the inner container runtime\n# is ultimately backed by and attempt to recover disk space. we don't want that.\nimageGCHighThresholdPercent: 100\nevictionHard:\n  nodefs.available: \"0%\"\n  nodefs.inodesFree: \"0%\"\n  imagefs.available: \"0%\"\n---\n# no-op entry that exists solely so it can be patched\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\nmetadata:\n  name: config\n"
I0822 21:21:05.415] time="21:21:05" level=debug msg="Configuration Input data: {kind v1.17.0-alpha.0.464+eb8301c8288564 172.17.0.4:6443 6443 127.0.0.1 true 172.17.0.4 abcdef.0123456789abcdef 10.244.0.0/16 10.96.0.0/12 false {}}"
I0822 21:21:05.418] time="21:21:05" level=debug msg="Configuration generated:\n # config generated by kind\napiVersion: kubeadm.k8s.io/v1beta2\nkind: ClusterConfiguration\nmetadata:\n  name: config\nkubernetesVersion: v1.17.0-alpha.0.464+eb8301c8288564\nclusterName: \"kind\"\ncontrolPlaneEndpoint: \"172.17.0.4:6443\"\n# on docker for mac we have to expose the api server via port forward,\n# so we need to ensure the cert is valid for localhost so we can talk\n# to the cluster after rewriting the kubeconfig to point to localhost\napiServer:\n  certSANs: [localhost, \"127.0.0.1\"]\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\n    # configure ipv6 default addresses for IPv6 clusters\n    \nscheduler:\n  extraArgs:\n    # configure ipv6 default addresses for IPv6 clusters\n    \nnetworking:\n  podSubnet: \"10.244.0.0/16\"\n  serviceSubnet: \"10.96.0.0/12\"\n---\napiVersion: kubeadm.k8s.io/v1beta2\nkind: InitConfiguration\nmetadata:\n  name: config\n# we use a well know token for TLS bootstrap\nbootstrapTokens:\n- token: \"abcdef.0123456789abcdef\"\n# we use a well know port for making the API server discoverable inside docker network. \n# from the host machine such port will be accessible via a random local port instead.\nlocalAPIEndpoint:\n  advertiseAddress: \"172.17.0.4\"\n  bindPort: 6443\nnodeRegistration:\n  criSocket: \"/run/containerd/containerd.sock\"\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: \"172.17.0.4\"\n---\n# no-op entry that exists solely so it can be patched\napiVersion: kubeadm.k8s.io/v1beta2\nkind: JoinConfiguration\nmetadata:\n  name: config\ncontrolPlane:\n  localAPIEndpoint:\n    advertiseAddress: \"172.17.0.4\"\n    bindPort: 6443\nnodeRegistration:\n  criSocket: \"/run/containerd/containerd.sock\"\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: \"172.17.0.4\"\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: \"172.17.0.4:6443\"\n    token: \"abcdef.0123456789abcdef\"\n    unsafeSkipCAVerification: true\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nmetadata:\n  name: config\n# configure ipv6 addresses in IPv6 mode\n\n# disable disk resource management by default\n# kubelet will see the host disk that the inner container runtime\n# is ultimately backed by and attempt to recover disk space. we don't want that.\nimageGCHighThresholdPercent: 100\nevictionHard:\n  nodefs.available: \"0%\"\n  nodefs.inodesFree: \"0%\"\n  imagefs.available: \"0%\"\n---\n# no-op entry that exists solely so it can be patched\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\nmetadata:\n  name: config\n"
I0822 21:21:05.419] time="21:21:05" level=debug msg="Configuration Input data: {kind v1.17.0-alpha.0.464+eb8301c8288564 172.17.0.4:6443 6443 127.0.0.1 false 172.17.0.3 abcdef.0123456789abcdef 10.244.0.0/16 10.96.0.0/12 false {}}"
I0822 21:21:05.428] time="21:21:05" level=debug msg="Configuration generated:\n # config generated by kind\napiVersion: kubeadm.k8s.io/v1beta2\nkind: ClusterConfiguration\nmetadata:\n  name: config\nkubernetesVersion: v1.17.0-alpha.0.464+eb8301c8288564\nclusterName: \"kind\"\ncontrolPlaneEndpoint: \"172.17.0.4:6443\"\n# on docker for mac we have to expose the api server via port forward,\n# so we need to ensure the cert is valid for localhost so we can talk\n# to the cluster after rewriting the kubeconfig to point to localhost\napiServer:\n  certSANs: [localhost, \"127.0.0.1\"]\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\n    # configure ipv6 default addresses for IPv6 clusters\n    \nscheduler:\n  extraArgs:\n    # configure ipv6 default addresses for IPv6 clusters\n    \nnetworking:\n  podSubnet: \"10.244.0.0/16\"\n  serviceSubnet: \"10.96.0.0/12\"\n---\napiVersion: kubeadm.k8s.io/v1beta2\nkind: InitConfiguration\nmetadata:\n  name: config\n# we use a well know token for TLS bootstrap\nbootstrapTokens:\n- token: \"abcdef.0123456789abcdef\"\n# we use a well know port for making the API server discoverable inside docker network. \n# from the host machine such port will be accessible via a random local port instead.\nlocalAPIEndpoint:\n  advertiseAddress: \"172.17.0.3\"\n  bindPort: 6443\nnodeRegistration:\n  criSocket: \"/run/containerd/containerd.sock\"\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: \"172.17.0.3\"\n---\n# no-op entry that exists solely so it can be patched\napiVersion: kubeadm.k8s.io/v1beta2\nkind: JoinConfiguration\nmetadata:\n  name: config\n\nnodeRegistration:\n  criSocket: \"/run/containerd/containerd.sock\"\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: \"172.17.0.3\"\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: \"172.17.0.4:6443\"\n    token: \"abcdef.0123456789abcdef\"\n    unsafeSkipCAVerification: true\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nmetadata:\n  name: config\n# configure ipv6 addresses in IPv6 mode\n\n# disable disk resource management by default\n# kubelet will see the host disk that the inner container runtime\n# is ultimately backed by and attempt to recover disk space. we don't want that.\nimageGCHighThresholdPercent: 100\nevictionHard:\n  nodefs.available: \"0%\"\n  nodefs.inodesFree: \"0%\"\n  imagefs.available: \"0%\"\n---\n# no-op entry that exists solely so it can be patched\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\nmetadata:\n  name: config\n"
I0822 21:21:05.438] time="21:21:05" level=debug msg="Using kubeadm config:\napiServer:\n  certSANs:\n  - localhost\n  - 127.0.0.1\napiVersion: kubeadm.k8s.io/v1beta2\nclusterName: kind\ncontrolPlaneEndpoint: 172.17.0.4:6443\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\nkind: ClusterConfiguration\nkubernetesVersion: v1.17.0-alpha.0.464+eb8301c8288564\nnetworking:\n  podSubnet: 10.244.0.0/16\n  serviceSubnet: 10.96.0.0/12\nscheduler:\n  extraArgs: null\n---\napiVersion: kubeadm.k8s.io/v1beta2\nbootstrapTokens:\n- token: abcdef.0123456789abcdef\nkind: InitConfiguration\nlocalAPIEndpoint:\n  advertiseAddress: 172.17.0.4\n  bindPort: 6443\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.4\n---\napiVersion: kubeadm.k8s.io/v1beta2\ncontrolPlane:\n  localAPIEndpoint:\n    advertiseAddress: 172.17.0.4\n    bindPort: 6443\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.4:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.4\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nevictionHard:\n  imagefs.available: 0%\n  nodefs.available: 0%\n  nodefs.inodesFree: 0%\nimageGCHighThresholdPercent: 100\nkind: KubeletConfiguration\n---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
I0822 21:21:05.439] time="21:21:05" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-control-plane mkdir -p /kind]"
I0822 21:21:05.444] time="21:21:05" level=debug msg="Using kubeadm config:\napiServer:\n  certSANs:\n  - localhost\n  - 127.0.0.1\napiVersion: kubeadm.k8s.io/v1beta2\nclusterName: kind\ncontrolPlaneEndpoint: 172.17.0.4:6443\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\nkind: ClusterConfiguration\nkubernetesVersion: v1.17.0-alpha.0.464+eb8301c8288564\nnetworking:\n  podSubnet: 10.244.0.0/16\n  serviceSubnet: 10.96.0.0/12\nscheduler:\n  extraArgs: null\n---\napiVersion: kubeadm.k8s.io/v1beta2\nbootstrapTokens:\n- token: abcdef.0123456789abcdef\nkind: InitConfiguration\nlocalAPIEndpoint:\n  advertiseAddress: 172.17.0.2\n  bindPort: 6443\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.2\n---\napiVersion: kubeadm.k8s.io/v1beta2\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.4:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.2\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nevictionHard:\n  imagefs.available: 0%\n  nodefs.available: 0%\n  nodefs.inodesFree: 0%\nimageGCHighThresholdPercent: 100\nkind: KubeletConfiguration\n---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
I0822 21:21:05.445] time="21:21:05" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-worker mkdir -p /kind]"
I0822 21:21:05.448] time="21:21:05" level=debug msg="Using kubeadm config:\napiServer:\n  certSANs:\n  - localhost\n  - 127.0.0.1\napiVersion: kubeadm.k8s.io/v1beta2\nclusterName: kind\ncontrolPlaneEndpoint: 172.17.0.4:6443\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\nkind: ClusterConfiguration\nkubernetesVersion: v1.17.0-alpha.0.464+eb8301c8288564\nnetworking:\n  podSubnet: 10.244.0.0/16\n  serviceSubnet: 10.96.0.0/12\nscheduler:\n  extraArgs: null\n---\napiVersion: kubeadm.k8s.io/v1beta2\nbootstrapTokens:\n- token: abcdef.0123456789abcdef\nkind: InitConfiguration\nlocalAPIEndpoint:\n  advertiseAddress: 172.17.0.3\n  bindPort: 6443\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.3\n---\napiVersion: kubeadm.k8s.io/v1beta2\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.4:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.3\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nevictionHard:\n  imagefs.available: 0%\n  nodefs.available: 0%\n  nodefs.inodesFree: 0%\nimageGCHighThresholdPercent: 100\nkind: KubeletConfiguration\n---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
I0822 21:21:05.448] time="21:21:05" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-worker2 mkdir -p /kind]"
I0822 21:21:05.705] time="21:21:05" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -i kind-control-plane cp /dev/stdin /kind/kubeadm.conf]"
I0822 21:21:05.753] time="21:21:05" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -i kind-worker2 cp /dev/stdin /kind/kubeadm.conf]"
I0822 21:21:05.821] time="21:21:05" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -i kind-worker cp /dev/stdin /kind/kubeadm.conf]"
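The three "Using kubeadm config" entries above are staged onto the nodes with the docker exec pattern just logged: the file is streamed over stdin and written with cp /dev/stdin. As a minimal sketch, the control-plane's ClusterConfiguration document (content taken verbatim from the log; the real /kind/kubeadm.conf carries four more YAML documents, and the heredoc wrapper here is mine) can be staged the same way:

# Stream a config into a kind node exactly as the log does: docker exec -i
# with the file on stdin. Only the ClusterConfiguration document is shown.
docker exec --privileged -i kind-control-plane cp /dev/stdin /kind/kubeadm.conf <<'EOF'
apiServer:
  certSANs:
  - localhost
  - 127.0.0.1
apiVersion: kubeadm.k8s.io/v1beta2
clusterName: kind
controlPlaneEndpoint: 172.17.0.4:6443
controllerManager:
  extraArgs:
    enable-hostpath-provisioner: "true"
kind: ClusterConfiguration
kubernetesVersion: v1.17.0-alpha.0.464+eb8301c8288564
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler:
  extraArgs: null
EOF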
I0822 21:21:06.194]  ✓ Creating kubeadm config 📜
I0822 21:21:06.195]  • Starting control-plane 🕹️  ...
I0822 21:21:06.195] time="21:21:06" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-control-plane kubeadm init --ignore-preflight-errors=all --config=/kind/kubeadm.conf --skip-token-print --v=6]"
I0822 21:21:56.875] time="21:21:56" level=debug msg="I0822 21:21:06.750719      83 initconfiguration.go:186] loading configuration from \"/kind/kubeadm.conf\"\n[config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta2, Kind=JoinConfiguration\nI0822 21:21:06.779627      83 feature_gate.go:216] feature gates: &{map[]}\nI0822 21:21:06.780449      83 checks.go:576] validating Kubernetes and kubeadm version\nI0822 21:21:06.780520      83 checks.go:168] validating if the firewall is enabled and active\n[init] Using Kubernetes version: v1.17.0-alpha.0.464+eb8301c8288564\n[preflight] Running pre-flight checks\nI0822 21:21:06.809188      83 checks.go:203] validating availability of port 6443\nI0822 21:21:06.809384      83 checks.go:203] validating availability of port 10251\nI0822 21:21:06.809415      83 checks.go:203] validating availability of port 10252\nI0822 21:21:06.809438      83 checks.go:288] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml\nI0822 21:21:06.809461      83 checks.go:288] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml\nI0822 21:21:06.809477      83 checks.go:288] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml\nI0822 21:21:06.809485      83 checks.go:288] validating the existence of file /etc/kubernetes/manifests/etcd.yaml\nI0822 21:21:06.809494      83 checks.go:434] validating if the connectivity type is via proxy or direct\nI0822 21:21:06.810643      83 checks.go:470] validating http connectivity to first IP address in the CIDR\nI0822 21:21:06.810675      83 checks.go:470] validating http connectivity to first IP address in the CIDR\nI0822 21:21:06.810685      83 checks.go:104] validating the container runtime\nI0822 21:21:06.989203      83 checks.go:378] validating the presence of executable crictl\nI0822 21:21:06.989283      83 checks.go:337] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables\n\t[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist\nI0822 21:21:06.989360      83 checks.go:337] validating the contents of file /proc/sys/net/ipv4/ip_forward\nI0822 21:21:06.989431      83 checks.go:648] validating whether swap is enabled or not\nI0822 21:21:06.989476      83 checks.go:378] validating the presence of executable ip\nI0822 21:21:06.989591      83 checks.go:378] validating the presence of executable iptables\nI0822 21:21:06.989656      83 checks.go:378] validating the presence of executable mount\nI0822 21:21:06.989680      83 checks.go:378] validating the presence of executable nsenter\nI0822 21:21:06.989765      83 checks.go:378] validating the presence of executable ebtables\nI0822 21:21:06.989916      83 checks.go:378] validating the presence of executable ethtool\nI0822 21:21:06.989966      83 checks.go:378] validating the presence of executable socat\nI0822 21:21:06.990038      83 checks.go:378] validating the presence of executable tc\nI0822 21:21:06.990087      83 checks.go:378] validating the presence of executable touch\nI0822 21:21:06.990147      83 checks.go:519] running all checks\nI0822 21:21:07.000952      83 checks.go:408] checking whether the given node name is reachable using net.LookupHost\nI0822 21:21:07.001379      83 checks.go:617] validating kubelet version\nI0822 21:21:07.120851      83 checks.go:130] validating if the service is enabled and active\nI0822 21:21:07.135199      83 checks.go:203] validating 
availability of port 10250\nI0822 21:21:07.135318      83 checks.go:203] validating availability of port 2379\nI0822 21:21:07.135355      83 checks.go:203] validating availability of port 2380\nI0822 21:21:07.135398      83 checks.go:251] validating the existence and emptiness of directory /var/lib/etcd\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'\nI0822 21:21:07.150141      83 checks.go:837] image exists: k8s.gcr.io/kube-apiserver:v1.17.0-alpha.0.464_eb8301c8288564\nI0822 21:21:07.164774      83 checks.go:837] image exists: k8s.gcr.io/kube-controller-manager:v1.17.0-alpha.0.464_eb8301c8288564\nI0822 21:21:07.178139      83 checks.go:837] image exists: k8s.gcr.io/kube-scheduler:v1.17.0-alpha.0.464_eb8301c8288564\nI0822 21:21:07.189375      83 checks.go:837] image exists: k8s.gcr.io/kube-proxy:v1.17.0-alpha.0.464_eb8301c8288564\nI0822 21:21:07.206155      83 checks.go:843] pulling k8s.gcr.io/pause:3.1\nI0822 21:21:07.785108      83 checks.go:843] pulling k8s.gcr.io/etcd:3.3.10\nI0822 21:21:15.365543      83 checks.go:843] pulling k8s.gcr.io/coredns:1.5.0\nI0822 21:21:17.182313      83 kubelet.go:61] Stopping the kubelet\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\nI0822 21:21:17.223120      83 kubelet.go:79] Starting the kubelet\n[kubelet-start] Activating the kubelet service\n[certs] Using certificateDir folder \"/etc/kubernetes/pki\"\nI0822 21:21:17.322234      83 certs.go:104] creating a new certificate authority for ca\n[certs] Generating \"ca\" certificate and key\n[certs] Generating \"apiserver\" certificate and key\n[certs] apiserver serving cert is signed for DNS names [kind-control-plane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 172.17.0.4 172.17.0.4 127.0.0.1]\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\nI0822 21:21:18.157706      83 certs.go:104] creating a new certificate authority for front-proxy-ca\n[certs] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\nI0822 21:21:18.816966      83 certs.go:104] creating a new certificate authority for etcd-ca\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.17.0.4 127.0.0.1 ::1]\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.17.0.4 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"apiserver-etcd-client\" certificate and key\nI0822 21:21:21.025616      83 certs.go:70] creating a new public/private key files for signing service account users\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\nI0822 21:21:21.272348      83 kubeconfig.go:79] creating kubeconfig file for admin.conf\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\nI0822 21:21:21.461465      83 kubeconfig.go:79] creating kubeconfig file for kubelet.conf\n[kubeconfig] Writing 
\"kubelet.conf\" kubeconfig file\nI0822 21:21:22.047634      83 kubeconfig.go:79] creating kubeconfig file for controller-manager.conf\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\nI0822 21:21:22.532932      83 kubeconfig.go:79] creating kubeconfig file for scheduler.conf\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\nI0822 21:21:23.171691      83 manifests.go:91] [control-plane] getting StaticPodSpecs\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\nI0822 21:21:23.186009      83 manifests.go:116] [control-plane] wrote static Pod manifest for component \"kube-apiserver\" to \"/etc/kubernetes/manifests/kube-apiserver.yaml\"\nI0822 21:21:23.186080      83 manifests.go:91] [control-plane] getting StaticPodSpecs\nI0822 21:21:23.187962      83 manifests.go:116] [control-plane] wrote static Pod manifest for component \"kube-controller-manager\" to \"/etc/kubernetes/manifests/kube-controller-manager.yaml\"\nI0822 21:21:23.188007      83 manifests.go:91] [control-plane] getting StaticPodSpecs\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\nI0822 21:21:23.189234      83 manifests.go:116] [control-plane] wrote static Pod manifest for component \"kube-scheduler\" to \"/etc/kubernetes/manifests/kube-scheduler.yaml\"\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\nI0822 21:21:23.190301      83 local.go:69] [etcd] wrote Static Pod manifest for a local etcd member to \"/etc/kubernetes/manifests/etcd.yaml\"\nI0822 21:21:23.190331      83 waitcontrolplane.go:80] [wait-control-plane] Waiting for the API server to be healthy\nI0822 21:21:23.191965      83 loader.go:375] Config loaded from file:  /etc/kubernetes/admin.conf\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". 
This can take up to 4m0s\nI0822 21:21:23.202202      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 1 milliseconds\nI0822 21:21:23.702898      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:24.202949      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:24.702951      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:25.203603      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:25.703713      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:26.203595      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:26.704804      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:27.203613      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:27.703543      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:28.203729      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:28.703068      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:29.203038      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:29.702956      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:30.203522      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:30.704300      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:31.202960      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:31.703586      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:32.204328      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:32.703482      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:33.202885      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:33.702901      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:34.202950      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:34.702955      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:35.202976      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:35.702997      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:36.202965      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:36.702979      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:37.202891      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 
21:21:37.702936      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:38.202973      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:38.702953      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:39.202924      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:39.702904      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:40.202934      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:40.702917      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:41.202932      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:41.703004      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:42.202959      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:42.702991      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:43.202952      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:43.702923      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:44.202902      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:44.702935      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:45.202947      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:45.703609      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:46.203471      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:46.702821      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:47.202961      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0822 21:21:51.955477      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s 500 Internal Server Error in 4252 milliseconds\nI0822 21:21:52.206351      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s 500 Internal Server Error in 2 milliseconds\nI0822 21:21:52.709139      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s 500 Internal Server Error in 2 milliseconds\nI0822 21:21:53.209106      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s 500 Internal Server Error in 5 milliseconds\nI0822 21:21:53.705116      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s 500 Internal Server Error in 2 milliseconds\nI0822 21:21:54.205118      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s 500 Internal Server Error in 2 milliseconds\nI0822 21:21:54.705149      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s 500 Internal Server Error in 2 milliseconds\n[apiclient] All control plane components are healthy after 32.007476 seconds\n[upload-config] Storing the 
configuration used in ConfigMap \"kubeadm-config\" in the \"kube-system\" Namespace\nI0822 21:21:55.205880      83 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s 200 OK in 3 milliseconds\nI0822 21:21:55.206056      83 uploadconfig.go:108] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap\nI0822 21:21:55.211965      83 round_trippers.go:443] POST https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 4 milliseconds\nI0822 21:21:55.217792      83 round_trippers.go:443] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 201 Created in 4 milliseconds\nI0822 21:21:55.226912      83 round_trippers.go:443] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 201 Created in 8 milliseconds\nI0822 21:21:55.227801      83 uploadconfig.go:122] [upload-config] Uploading the kubelet component config to a ConfigMap\n[kubelet] Creating a ConfigMap \"kubelet-config-1.17\" in namespace kube-system with the configuration for the kubelets in the cluster\nI0822 21:21:55.237895      83 round_trippers.go:443] POST https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 8 milliseconds\nI0822 21:21:55.251394      83 round_trippers.go:443] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 201 Created in 13 milliseconds\nI0822 21:21:55.255415      83 round_trippers.go:443] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 201 Created in 3 milliseconds\nI0822 21:21:55.255579      83 uploadconfig.go:127] [upload-config] Preserving the CRISocket information for the control-plane node\nI0822 21:21:55.255592      83 patchnode.go:30] [patchnode] Uploading the CRI Socket information \"/run/containerd/containerd.sock\" to the Node API object \"kind-control-plane\" as an annotation\nI0822 21:21:55.763415      83 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-control-plane 200 OK in 7 milliseconds\n[upload-certs] Skipping phase. 
Please see --upload-certs\n[mark-control-plane] Marking the node kind-control-plane as control-plane by adding the label \"node-role.kubernetes.io/master=''\"\n[mark-control-plane] Marking the node kind-control-plane as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]\nI0822 21:21:55.813494      83 round_trippers.go:443] PATCH https://172.17.0.4:6443/api/v1/nodes/kind-control-plane 200 OK in 20 milliseconds\nI0822 21:21:56.317456      83 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-control-plane 200 OK in 3 milliseconds\nI0822 21:21:56.329389      83 round_trippers.go:443] PATCH https://172.17.0.4:6443/api/v1/nodes/kind-control-plane 200 OK in 6 milliseconds\n[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles\nI0822 21:21:56.332759      83 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-system/secrets/bootstrap-token-abcdef 404 Not Found in 2 milliseconds\nI0822 21:21:56.341912      83 round_trippers.go:443] POST https://172.17.0.4:6443/api/v1/namespaces/kube-system/secrets 201 Created in 8 milliseconds\n[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials\n[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token\nI0822 21:21:56.346983      83 round_trippers.go:443] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 4 milliseconds\n[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster\nI0822 21:21:56.364782      83 round_trippers.go:443] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 17 milliseconds\n[bootstrap-token] Creating the \"cluster-info\" ConfigMap in the \"kube-public\" namespace\nI0822 21:21:56.380967      83 round_trippers.go:443] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 10 milliseconds\nI0822 21:21:56.381177      83 clusterinfo.go:45] [bootstrap-token] loading admin kubeconfig\nI0822 21:21:56.382132      83 loader.go:375] Config loaded from file:  /etc/kubernetes/admin.conf\nI0822 21:21:56.382156      83 clusterinfo.go:53] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfig\nI0822 21:21:56.382620      83 clusterinfo.go:65] [bootstrap-token] creating/updating ConfigMap in kube-public namespace\nI0822 21:21:56.389419      83 round_trippers.go:443] POST https://172.17.0.4:6443/api/v1/namespaces/kube-public/configmaps 201 Created in 6 milliseconds\nI0822 21:21:56.389664      83 clusterinfo.go:79] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace\nI0822 21:21:56.397425      83 round_trippers.go:443] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles 201 Created in 7 milliseconds\nI0822 21:21:56.409666      83 round_trippers.go:443] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings 201 Created in 11 milliseconds\nI0822 21:21:56.413928      83 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps/kube-dns 404 Not Found in 3 milliseconds\nI0822 21:21:56.419235      83 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps/coredns 404 Not Found in 4 
milliseconds\nI0822 21:21:56.443678      83 round_trippers.go:443] POST https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 24 milliseconds\nI0822 21:21:56.454800      83 round_trippers.go:443] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/clusterroles 201 Created in 7 milliseconds\nI0822 21:21:56.461054      83 round_trippers.go:443] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 5 milliseconds\nI0822 21:21:56.469634      83 round_trippers.go:443] POST https://172.17.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts 201 Created in 7 milliseconds\nI0822 21:21:56.529225      83 round_trippers.go:443] POST https://172.17.0.4:6443/apis/apps/v1/namespaces/kube-system/deployments 201 Created in 48 milliseconds\nI0822 21:21:56.545340      83 round_trippers.go:443] POST https://172.17.0.4:6443/api/v1/namespaces/kube-system/services 201 Created in 13 milliseconds\n[addons] Applied essential addon: CoreDNS\nI0822 21:21:56.556166      83 round_trippers.go:443] POST https://172.17.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts 201 Created in 10 milliseconds\nI0822 21:21:56.714459      83 request.go:538] Throttling request took 152.315088ms, request: POST:https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps\nI0822 21:21:56.721498      83 round_trippers.go:443] POST https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 6 milliseconds\nI0822 21:21:56.741210      83 round_trippers.go:443] POST https://172.17.0.4:6443/apis/apps/v1/namespaces/kube-system/daemonsets 201 Created in 11 milliseconds\nI0822 21:21:56.746628      83 round_trippers.go:443] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 4 milliseconds\nI0822 21:21:56.751977      83 round_trippers.go:443] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 201 Created in 5 milliseconds\nI0822 21:21:56.755664      83 round_trippers.go:443] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 201 Created in 3 milliseconds\n[addons] Applied essential addon: kube-proxy\nI0822 21:21:56.757228      83 loader.go:375] Config loaded from file:  /etc/kubernetes/admin.conf\nI0822 21:21:56.758496      83 loader.go:375] Config loaded from file:  /etc/kubernetes/admin.conf\n\nYour Kubernetes control-plane has initialized successfully!\n\nTo start using your cluster, you need to run the following as a regular user:\n\n  mkdir -p $HOME/.kube\n  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\n  sudo chown $(id -u):$(id -g) $HOME/.kube/config\n\nYou should now deploy a pod network to the cluster.\nRun \"kubectl apply -f [podnetwork].yaml\" with one of the options listed at:\n  https://kubernetes.io/docs/concepts/cluster-administration/addons/\n\nYou can now join any number of control-plane nodes by copying certificate authorities \nand service account keys on each node and then running the following as root:\n\n  kubeadm join 172.17.0.4:6443 --token <value withheld> \\\n    --discovery-token-ca-cert-hash sha256:47044d0cc725ba4bb2d09932bf91c95b6de0cfd2ce93715e1f9d01f12611c3b8 \\\n    --control-plane \t  \n\nThen you can join any number of worker nodes by running the following on each as root:\n\nkubeadm join 172.17.0.4:6443 --token <value withheld> \\\n    --discovery-token-ca-cert-hash sha256:47044d0cc725ba4bb2d09932bf91c95b6de0cfd2ce93715e1f9d01f12611c3b8 "
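The join command in the init output above withholds the bootstrap token but includes the CA-cert hash. That hash can be recomputed from the control-plane's CA certificate with the standard kubeadm recipe; a sketch, assuming openssl is available inside the node image (the log doesn't confirm this):

# Recompute the --discovery-token-ca-cert-hash shown in the init output:
# sha256 of the DER-encoded public key of the cluster CA.
docker exec --privileged kind-control-plane sh -c \
  "openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
   | openssl rsa -pubin -outform der 2>/dev/null \
   | sha256sum | cut -d' ' -f1"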
I0822 21:21:56.876] time="21:21:56" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{(index (index .NetworkSettings.Ports \"6443/tcp\") 0).HostPort}} kind-control-plane]"
I0822 21:21:56.936] time="21:21:56" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-control-plane cat /etc/kubernetes/admin.conf]"
I0822 21:21:57.275]  ✓ Starting control-plane 🕹️
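The inspect call above is how kind learns which host port Docker mapped to the API server (the generated config earlier notes the kubeconfig is later rewritten to point at localhost). The same Go template works standalone, quoted for the shell:

# Host port forwarded to the control-plane API server (6443/tcp), using the
# exact template from the log.
docker inspect -f \
  '{{(index (index .NetworkSettings.Ports "6443/tcp") 0).HostPort}}' \
  kind-control-plane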
I0822 21:21:57.276]  • Installing CNI 🔌  ...
I0822 21:21:57.276] time="21:21:57" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-control-plane cat /kind/manifests/default-cni.yaml]"
I0822 21:21:57.638] time="21:21:57" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -i kind-control-plane kubectl create --kubeconfig=/etc/kubernetes/admin.conf -f -]"
I0822 21:21:58.806]  ✓ Installing CNI 🔌
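The CNI step above is two node-side commands chained through the host; reproduced as a single pipeline (both commands verbatim from the log):

# Read kind's default CNI manifest off the node, then feed it to kubectl
# running on the node via stdin.
docker exec --privileged kind-control-plane \
  cat /kind/manifests/default-cni.yaml |
docker exec --privileged -i kind-control-plane \
  kubectl create --kubeconfig=/etc/kubernetes/admin.conf -f -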
I0822 21:21:58.807]  • Installing StorageClass 💾  ...
I0822 21:21:58.808] time="21:21:58" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -i kind-control-plane kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f -]"
I0822 21:21:59.404]  ✓ Installing StorageClass 💾
I0822 21:21:59.406]  • Joining worker nodes 🚜  ...
I0822 21:21:59.407] time="21:21:59" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-worker2 kubeadm join --config /kind/kubeadm.conf --ignore-preflight-errors=all --v=6]"
I0822 21:21:59.408] time="21:21:59" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-worker kubeadm join --config /kind/kubeadm.conf --ignore-preflight-errors=all --v=6]"
I0822 21:22:32.388] time="21:22:32" level=debug msg="I0822 21:21:59.704490     507 join.go:363] [preflight] found NodeName empty; using OS hostname as NodeName\nI0822 21:21:59.704548     507 joinconfiguration.go:75] loading configuration from \"/kind/kubeadm.conf\"\nI0822 21:21:59.707115     507 preflight.go:90] [preflight] Running general checks\nI0822 21:21:59.707231     507 checks.go:251] validating the existence and emptiness of directory /etc/kubernetes/manifests\nI0822 21:21:59.707247     507 checks.go:288] validating the existence of file /etc/kubernetes/kubelet.conf\nI0822 21:21:59.707256     507 checks.go:288] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf\nI0822 21:21:59.707264     507 checks.go:104] validating the container runtime\n[preflight] Running pre-flight checks\nI0822 21:21:59.721160     507 checks.go:378] validating the presence of executable crictl\nI0822 21:21:59.721216     507 checks.go:337] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables\n\t[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist\nI0822 21:21:59.721328     507 checks.go:337] validating the contents of file /proc/sys/net/ipv4/ip_forward\nI0822 21:21:59.721437     507 checks.go:648] validating whether swap is enabled or not\nI0822 21:21:59.721476     507 checks.go:378] validating the presence of executable ip\nI0822 21:21:59.721610     507 checks.go:378] validating the presence of executable iptables\nI0822 21:21:59.721662     507 checks.go:378] validating the presence of executable mount\nI0822 21:21:59.721708     507 checks.go:378] validating the presence of executable nsenter\nI0822 21:21:59.721767     507 checks.go:378] validating the presence of executable ebtables\nI0822 21:21:59.721845     507 checks.go:378] validating the presence of executable ethtool\nI0822 21:21:59.721871     507 checks.go:378] validating the presence of executable socat\nI0822 21:21:59.721926     507 checks.go:378] validating the presence of executable tc\nI0822 21:21:59.721945     507 checks.go:378] validating the presence of executable touch\nI0822 21:21:59.721988     507 checks.go:519] running all checks\nI0822 21:21:59.732895     507 checks.go:408] checking whether the given node name is reachable using net.LookupHost\nI0822 21:21:59.733307     507 checks.go:617] validating kubelet version\nI0822 21:21:59.844230     507 checks.go:130] validating if the service is enabled and active\nI0822 21:21:59.861823     507 checks.go:203] validating availability of port 10250\nI0822 21:21:59.862174     507 checks.go:288] validating the existence of file /etc/kubernetes/pki/ca.crt\nI0822 21:21:59.862197     507 checks.go:434] validating if the connectivity type is via proxy or direct\nI0822 21:21:59.862248     507 join.go:433] [preflight] Discovering cluster-info\nI0822 21:21:59.862420     507 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.4:6443\"\nI0822 21:21:59.866355     507 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.4:6443\"\nI0822 21:21:59.875693     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 9 milliseconds\nI0822 21:21:59.876389     507 token.go:202] [discovery] Failed to connect to API Server \"172.17.0.4:6443\": token id \"abcdef\" is invalid for this cluster or it has expired. 
Use \"kubeadm token create\" on the control-plane node to create a new valid token\nI0822 21:22:04.876556     507 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.4:6443\"\nI0822 21:22:04.877325     507 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.4:6443\"\nI0822 21:22:04.880655     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 3 milliseconds\nI0822 21:22:04.880904     507 token.go:202] [discovery] Failed to connect to API Server \"172.17.0.4:6443\": token id \"abcdef\" is invalid for this cluster or it has expired. Use \"kubeadm token create\" on the control-plane node to create a new valid token\nI0822 21:22:09.881084     507 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.4:6443\"\nI0822 21:22:09.881816     507 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.4:6443\"\nI0822 21:22:09.884554     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 2 milliseconds\nI0822 21:22:09.884907     507 token.go:202] [discovery] Failed to connect to API Server \"172.17.0.4:6443\": token id \"abcdef\" is invalid for this cluster or it has expired. Use \"kubeadm token create\" on the control-plane node to create a new valid token\nI0822 21:22:14.885104     507 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.4:6443\"\nI0822 21:22:14.886087     507 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.4:6443\"\nI0822 21:22:14.890773     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 4 milliseconds\nI0822 21:22:14.893220     507 token.go:109] [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server \"172.17.0.4:6443\"\nI0822 21:22:14.893264     507 token.go:205] [discovery] Successfully established connection with API Server \"172.17.0.4:6443\"\nI0822 21:22:14.893348     507 discovery.go:51] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process\nI0822 21:22:14.893413     507 join.go:447] [preflight] Fetching init configuration\nI0822 21:22:14.893423     507 join.go:485] [preflight] Retrieving KubeConfig objects\n[preflight] Reading configuration from the cluster...\n[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'\nI0822 21:22:14.908872     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 200 OK in 14 milliseconds\nI0822 21:22:14.916889     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps/kube-proxy 200 OK in 6 milliseconds\nI0822 21:22:14.925274     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.17 200 OK in 6 milliseconds\nI0822 21:22:14.928520     507 interface.go:384] Looking for default routes with IPv4 addresses\nI0822 21:22:14.928550     507 interface.go:389] Default route transits interface \"eth0\"\nI0822 21:22:14.928667     507 interface.go:196] Interface eth0 is up\nI0822 21:22:14.928724     507 interface.go:244] Interface \"eth0\" has 1 addresses :[172.17.0.3/16].\nI0822 21:22:14.928748     507 interface.go:211] Checking addr  
172.17.0.3/16.\nI0822 21:22:14.928758     507 interface.go:218] IP found 172.17.0.3\nI0822 21:22:14.928768     507 interface.go:250] Found valid IPv4 address 172.17.0.3 for interface \"eth0\".\nI0822 21:22:14.928784     507 interface.go:395] Found active IP 172.17.0.3 \nI0822 21:22:14.928870     507 preflight.go:101] [preflight] Running configuration dependant checks\nI0822 21:22:14.928887     507 controlplaneprepare.go:211] [download-certs] Skipping certs download\nI0822 21:22:14.928900     507 kubelet.go:107] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf\nI0822 21:22:14.933490     507 kubelet.go:115] [kubelet-start] writing CA certificate at /etc/kubernetes/pki/ca.crt\nI0822 21:22:14.935141     507 loader.go:375] Config loaded from file:  /etc/kubernetes/bootstrap-kubelet.conf\nI0822 21:22:14.936096     507 kubelet.go:133] [kubelet-start] Stopping the kubelet\n[kubelet-start] Downloading configuration for the kubelet from the \"kubelet-config-1.17\" ConfigMap in the kube-system namespace\nI0822 21:22:14.961926     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.17 200 OK in 3 milliseconds\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\nI0822 21:22:14.977380     507 kubelet.go:150] [kubelet-start] Starting the kubelet\n[kubelet-start] Activating the kubelet service\n[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...\nI0822 21:22:16.168249     507 loader.go:375] Config loaded from file:  /etc/kubernetes/kubelet.conf\nI0822 21:22:16.184622     507 loader.go:375] Config loaded from file:  /etc/kubernetes/kubelet.conf\nI0822 21:22:16.186955     507 kubelet.go:168] [kubelet-start] preserving the crisocket information for the node\nI0822 21:22:16.187010     507 patchnode.go:30] [patchnode] Uploading the CRI Socket information \"/run/containerd/containerd.sock\" to the Node API object \"kind-worker2\" as an annotation\nI0822 21:22:16.698278     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 10 milliseconds\nI0822 21:22:17.363734     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 176 milliseconds\nI0822 21:22:17.699925     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 12 milliseconds\nI0822 21:22:18.191434     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 4 milliseconds\nI0822 21:22:18.690422     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0822 21:22:19.189941     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0822 21:22:19.691241     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds\nI0822 21:22:20.190883     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds\nI0822 21:22:20.940308     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 252 milliseconds\nI0822 21:22:21.189914     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0822 
21:22:21.690682     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds\nI0822 21:22:22.190373     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0822 21:22:22.846547     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 159 milliseconds\nI0822 21:22:23.191751     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 4 milliseconds\nI0822 21:22:23.690542     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds\nI0822 21:22:24.191498     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds\nI0822 21:22:25.285303     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 597 milliseconds\nI0822 21:22:26.115861     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 428 milliseconds\nI0822 21:22:26.906673     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 719 milliseconds\nI0822 21:22:27.448267     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 260 milliseconds\nI0822 21:22:27.690299     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0822 21:22:28.190735     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds\nI0822 21:22:29.751458     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 200 OK in 1064 milliseconds\nI0822 21:22:32.301628     507 round_trippers.go:443] PATCH https://172.17.0.4:6443/api/v1/nodes/kind-worker2 200 OK in 2546 milliseconds\n\nThis node has joined the cluster:\n* Certificate signing request was sent to apiserver and a response was received.\n* The Kubelet was informed of the new secure connection details.\n\nRun 'kubectl get nodes' on the control-plane to see this node join the cluster.\n"
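The node above retries discovery a few times on the bootstrap token before joining and printing the hint to run kubectl get nodes. Following that hint from the host, through the admin kubeconfig seen earlier in the log:

# Verify the join as the message suggests: list nodes from the control-plane.
docker exec --privileged kind-control-plane \
  kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes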
I0822 21:22:33.135] time="21:22:33" level=debug msg="[preflight] Running pre-flight checks\nI0822 21:21:59.708627     507 join.go:363] [preflight] found NodeName empty; using OS hostname as NodeName\nI0822 21:21:59.708675     507 joinconfiguration.go:75] loading configuration from \"/kind/kubeadm.conf\"\nI0822 21:21:59.711102     507 preflight.go:90] [preflight] Running general checks\nI0822 21:21:59.711221     507 checks.go:251] validating the existence and emptiness of directory /etc/kubernetes/manifests\nI0822 21:21:59.711238     507 checks.go:288] validating the existence of file /etc/kubernetes/kubelet.conf\nI0822 21:21:59.711247     507 checks.go:288] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf\nI0822 21:21:59.711256     507 checks.go:104] validating the container runtime\nI0822 21:21:59.728502     507 checks.go:378] validating the presence of executable crictl\nI0822 21:21:59.728560     507 checks.go:337] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables\n\t[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist\nI0822 21:21:59.728610     507 checks.go:337] validating the contents of file /proc/sys/net/ipv4/ip_forward\nI0822 21:21:59.728666     507 checks.go:648] validating whether swap is enabled or not\nI0822 21:21:59.728746     507 checks.go:378] validating the presence of executable ip\nI0822 21:21:59.728804     507 checks.go:378] validating the presence of executable iptables\nI0822 21:21:59.728830     507 checks.go:378] validating the presence of executable mount\nI0822 21:21:59.728845     507 checks.go:378] validating the presence of executable nsenter\nI0822 21:21:59.728880     507 checks.go:378] validating the presence of executable ebtables\nI0822 21:21:59.728925     507 checks.go:378] validating the presence of executable ethtool\nI0822 21:21:59.728944     507 checks.go:378] validating the presence of executable socat\nI0822 21:21:59.728970     507 checks.go:378] validating the presence of executable tc\nI0822 21:21:59.728989     507 checks.go:378] validating the presence of executable touch\nI0822 21:21:59.729033     507 checks.go:519] running all checks\nI0822 21:21:59.737631     507 checks.go:408] checking whether the given node name is reachable using net.LookupHost\nI0822 21:21:59.738742     507 checks.go:617] validating kubelet version\nI0822 21:21:59.855150     507 checks.go:130] validating if the service is enabled and active\nI0822 21:21:59.876520     507 checks.go:203] validating availability of port 10250\nI0822 21:21:59.876776     507 checks.go:288] validating the existence of file /etc/kubernetes/pki/ca.crt\nI0822 21:21:59.876800     507 checks.go:434] validating if the connectivity type is via proxy or direct\nI0822 21:21:59.876840     507 join.go:433] [preflight] Discovering cluster-info\nI0822 21:21:59.876939     507 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.4:6443\"\nI0822 21:21:59.879075     507 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.4:6443\"\nI0822 21:21:59.887232     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 8 milliseconds\nI0822 21:21:59.888354     507 token.go:202] [discovery] Failed to connect to API Server \"172.17.0.4:6443\": token id \"abcdef\" is invalid for this cluster or it has expired. 
Use \"kubeadm token create\" on the control-plane node to create a new valid token\nI0822 21:22:04.888532     507 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.4:6443\"\nI0822 21:22:04.889126     507 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.4:6443\"\nI0822 21:22:04.891925     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 2 milliseconds\nI0822 21:22:04.892357     507 token.go:202] [discovery] Failed to connect to API Server \"172.17.0.4:6443\": token id \"abcdef\" is invalid for this cluster or it has expired. Use \"kubeadm token create\" on the control-plane node to create a new valid token\nI0822 21:22:09.892570     507 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.4:6443\"\nI0822 21:22:09.893408     507 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.4:6443\"\nI0822 21:22:09.896607     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 3 milliseconds\nI0822 21:22:09.896912     507 token.go:202] [discovery] Failed to connect to API Server \"172.17.0.4:6443\": token id \"abcdef\" is invalid for this cluster or it has expired. Use \"kubeadm token create\" on the control-plane node to create a new valid token\nI0822 21:22:14.897113     507 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.4:6443\"\nI0822 21:22:14.897813     507 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.4:6443\"\nI0822 21:22:14.902426     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 4 milliseconds\nI0822 21:22:14.904619     507 token.go:109] [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server \"172.17.0.4:6443\"\nI0822 21:22:14.904667     507 token.go:205] [discovery] Successfully established connection with API Server \"172.17.0.4:6443\"\nI0822 21:22:14.904723     507 discovery.go:51] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process\nI0822 21:22:14.904774     507 join.go:447] [preflight] Fetching init configuration\nI0822 21:22:14.904782     507 join.go:485] [preflight] Retrieving KubeConfig objects\n[preflight] Reading configuration from the cluster...\n[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'\nI0822 21:22:14.924616     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 200 OK in 18 milliseconds\nI0822 21:22:14.934380     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps/kube-proxy 200 OK in 3 milliseconds\nI0822 21:22:14.941677     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.17 200 OK in 5 milliseconds\nI0822 21:22:14.944495     507 interface.go:384] Looking for default routes with IPv4 addresses\nI0822 21:22:14.944632     507 interface.go:389] Default route transits interface \"eth0\"\nI0822 21:22:14.944841     507 interface.go:196] Interface eth0 is up\nI0822 21:22:14.944975     507 interface.go:244] Interface \"eth0\" has 1 addresses :[172.17.0.2/16].\nI0822 21:22:14.945051     507 interface.go:211] Checking addr  
172.17.0.2/16.\nI0822 21:22:14.945079     507 interface.go:218] IP found 172.17.0.2\nI0822 21:22:14.945127     507 interface.go:250] Found valid IPv4 address 172.17.0.2 for interface \"eth0\".\nI0822 21:22:14.945176     507 interface.go:395] Found active IP 172.17.0.2 \nI0822 21:22:14.945404     507 preflight.go:101] [preflight] Running configuration dependant checks\nI0822 21:22:14.945479     507 controlplaneprepare.go:211] [download-certs] Skipping certs download\nI0822 21:22:14.945508     507 kubelet.go:107] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf\nI0822 21:22:14.951717     507 kubelet.go:115] [kubelet-start] writing CA certificate at /etc/kubernetes/pki/ca.crt\nI0822 21:22:14.952361     507 loader.go:375] Config loaded from file:  /etc/kubernetes/bootstrap-kubelet.conf\nI0822 21:22:14.952933     507 kubelet.go:133] [kubelet-start] Stopping the kubelet\n[kubelet-start] Downloading configuration for the kubelet from the \"kubelet-config-1.17\" ConfigMap in the kube-system namespace\nI0822 21:22:14.972619     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.17 200 OK in 3 milliseconds\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\nI0822 21:22:14.987965     507 kubelet.go:150] [kubelet-start] Starting the kubelet\n[kubelet-start] Activating the kubelet service\n[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...\nI0822 21:22:16.175645     507 loader.go:375] Config loaded from file:  /etc/kubernetes/kubelet.conf\nI0822 21:22:16.609586     507 loader.go:375] Config loaded from file:  /etc/kubernetes/kubelet.conf\nI0822 21:22:16.622841     507 loader.go:375] Config loaded from file:  /etc/kubernetes/kubelet.conf\nI0822 21:22:16.624526     507 kubelet.go:168] [kubelet-start] preserving the crisocket information for the node\nI0822 21:22:16.624573     507 patchnode.go:30] [patchnode] Uploading the CRI Socket information \"/run/containerd/containerd.sock\" to the Node API object \"kind-worker\" as an annotation\nI0822 21:22:17.133951     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 9 milliseconds\nI0822 21:22:17.628407     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds\nI0822 21:22:18.128343     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds\nI0822 21:22:18.628293     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds\nI0822 21:22:19.185192     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 60 milliseconds\nI0822 21:22:19.629812     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 4 milliseconds\nI0822 21:22:20.127549     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0822 21:22:20.940304     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 315 milliseconds\nI0822 21:22:21.128001     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds\nI0822 21:22:21.628257     507 round_trippers.go:443] GET 
https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds\nI0822 21:22:22.128040     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds\nI0822 21:22:22.846547     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 221 milliseconds\nI0822 21:22:23.127743     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0822 21:22:23.628168     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds\nI0822 21:22:24.178658     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 53 milliseconds\nI0822 21:22:25.285837     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 660 milliseconds\nI0822 21:22:26.116155     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 491 milliseconds\nI0822 21:22:26.904834     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 779 milliseconds\nI0822 21:22:27.447127     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 322 milliseconds\nI0822 21:22:27.627884     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0822 21:22:28.128359     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds\nI0822 21:22:28.933611     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 308 milliseconds\nI0822 21:22:31.179401     507 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 200 OK in 2054 milliseconds\nI0822 21:22:33.014155     507 round_trippers.go:443] PATCH https://172.17.0.4:6443/api/v1/nodes/kind-worker 200 OK in 1828 milliseconds\n\nThis node has joined the cluster:\n* Certificate signing request was sent to apiserver and a response was received.\n* The Kubelet was informed of the new secure connection details.\n\nRun 'kubectl get nodes' on the control-plane to see this node join the cluster.\n"
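The long run of 404s above is the expected shape of kubeadm's patchnode step: once the kubelet starts its TLS bootstrap, kubeadm polls the Node API object until it exists, then PATCHes the CRI socket path onto it as an annotation (the two trailing 200 OK lines). A rough Python 3 sketch of that poll-then-patch loop, driving kubectl via subprocess instead of kubeadm's actual client-go calls; the function name is invented, and the annotation key is kubeadm's cri-socket key as best I know it:

import json
import subprocess
import time

def annotate_cri_socket(node, socket_path, timeout=120):
    # Approximation of kubeadm's patchnode step: GET the Node object until it
    # exists (the 404s in the log above), then PATCH the CRI socket annotation.
    deadline = time.time() + timeout
    while time.time() < deadline:
        probe = subprocess.run(["kubectl", "get", "node", node, "-o", "name"],
                               capture_output=True, text=True)
        if probe.returncode == 0:
            patch = {"metadata": {"annotations": {
                "kubeadm.alpha.kubernetes.io/cri-socket": socket_path}}}
            subprocess.run(["kubectl", "patch", "node", node, "--type=merge",
                            "-p", json.dumps(patch)], check=True)
            return
        time.sleep(0.5)  # kubeadm retries on a comparably short interval
    raise RuntimeError("node %s never appeared in the API" % node)

# Values taken from the log above:
# annotate_cri_socket("kind-worker", "/run/containerd/containerd.sock")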
I0822 21:22:33.136]  ✓ Joining worker nodes 🚜
I0822 21:22:33.136]  • Waiting ≤ 1m0s for control-plane = Ready ⏳  ...
I0822 21:22:33.136] time="21:22:33" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-control-plane kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes --selector=node-role.kubernetes.io/master -o=jsonpath='{.items..status.conditions[-1:].status}']"
I0822 21:22:35.279] time="21:22:35" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-control-plane kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes --selector=node-role.kubernetes.io/master -o=jsonpath='{.items..status.conditions[-1:].status}']"
I0822 21:22:37.480] time="21:22:37" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-control-plane kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes --selector=node-role.kubernetes.io/master -o=jsonpath='{.items..status.conditions[-1:].status}']"
I0822 21:22:37.908] time="21:22:37" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-control-plane kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes --selector=node-role.kubernetes.io/master -o=jsonpath='{.items..status.conditions[-1:].status}']"
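The four identical docker exec lines above are kind's readiness wait: it re-runs the same kubectl probe until the jsonpath expression, the status of the last condition (Ready) on every master node, prints True. Roughly, under the "≤ 1m0s" budget shown above (the helper name is invented):

import subprocess
import time

def wait_control_plane_ready(timeout=60):
    # The same probe kind runs above, repeated until every master reports Ready.
    cmd = ["docker", "exec", "--privileged", "kind-control-plane",
           "kubectl", "--kubeconfig=/etc/kubernetes/admin.conf",
           "get", "nodes", "--selector=node-role.kubernetes.io/master",
           "-o=jsonpath={.items..status.conditions[-1:].status}"]
    deadline = time.time() + timeout
    while time.time() < deadline:
        out = subprocess.run(cmd, capture_output=True, text=True).stdout
        statuses = [s.strip("'") for s in out.split()]
        if statuses and all(s == "True" for s in statuses):
            return True
        time.sleep(2)
    return False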
... skipping 1079 lines ...
I0822 22:44:04.512] [22:44:04] Pod status is: Running
I0822 22:44:09.618] [22:44:09] Pod status is: Running
I0822 22:44:14.705] [22:44:14] Pod status is: Running
I0822 22:44:19.805] [22:44:19] Pod status is: Running
I0822 22:44:24.903] [22:44:24] Pod status is: Running
I0822 22:44:29.996] [22:44:29] Pod status is: Running
W0822 22:44:35.089] Error from server (NotFound): pods "e2e-conformance-test" not found
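The five-second cadence of the "Pod status is" lines, ending in NotFound once the e2e-conformance-test pod is gone, points to a plain polling loop in kind-conformance-image-e2e.sh that treats the pod's disappearance as its exit condition; the cleanup trap then fires on the next lines. A hedged sketch of such a loop (the real script is bash and is not shown in this log):

import subprocess
import time

def watch_pod(name, interval=5):
    # Poll the pod's phase; stop once kubectl reports it NotFound.
    while True:
        res = subprocess.run(
            ["kubectl", "get", "pod", name, "-o", "jsonpath={.status.phase}"],
            capture_output=True, text=True)
        if res.returncode != 0:
            print(res.stderr.strip())  # "Error from server (NotFound): ..."
            return
        print("Pod status is: %s" % res.stdout)
        time.sleep(interval)

# watch_pod("e2e-conformance-test")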
W0822 22:44:35.094] + cleanup
W0822 22:44:35.094] + kind export logs /workspace/_artifacts/logs
I0822 22:44:37.557] Exported logs to: /workspace/_artifacts/logs
I0822 22:44:37.654] Deleting cluster "kind" ...
I0822 22:44:37.711] $KUBECONFIG is still set to use /root/.kube/kind-config-kind even though that file has been deleted, remember to unset it
W0822 22:44:37.812] + [[ true = true ]]
... skipping 8 lines ...
W0822 22:44:44.150]     check(*cmd)
W0822 22:44:44.151]   File "/workspace/./test-infra/jenkins/../scenarios/execute.py", line 30, in check
W0822 22:44:44.151]     subprocess.check_call(cmd)
W0822 22:44:44.151]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0822 22:44:44.151]     raise CalledProcessError(retcode, cmd)
W0822 22:44:44.151] subprocess.CalledProcessError: Command '('bash', '-c', 'cd ./../../k8s.io/kubernetes && source ./../test-infra/experiment/kind-conformance-image-e2e.sh')' returned non-zero exit status 1
E0822 22:44:44.156] Command failed
I0822 22:44:44.157] process 684 exited with code 1 after 89.0m
E0822 22:44:44.157] FAIL: pull-kubernetes-conformance-image-test
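The traceback above is the runner's standard failure path: scenarios/execute.py wraps the shelled-out command in subprocess.check_call, which raises CalledProcessError on any non-zero exit, and that exception is what turns a failure inside the sourced bash script into the FAIL verdict. Reconstructed from the stack frames (not copied from the actual file), the pattern is just:

import subprocess

def check(*cmd):
    # Mirrors the check() frame above: run the command and let a
    # non-zero exit surface as CalledProcessError.
    print("Run:", cmd)
    subprocess.check_call(cmd)

# The failing invocation from the log:
check("bash", "-c",
      "cd ./../../k8s.io/kubernetes && "
      "source ./../test-infra/experiment/kind-conformance-image-e2e.sh")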
I0822 22:44:44.157] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0822 22:44:44.686] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0822 22:44:44.743] process 179100 exited with code 0 after 0.0m
I0822 22:44:44.743] Call:  gcloud config get-value account
I0822 22:44:45.056] process 179112 exited with code 0 after 0.0m
I0822 22:44:45.056] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0822 22:44:45.057] Upload result and artifacts...
I0822 22:44:45.057] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/76443/pull-kubernetes-conformance-image-test/1164646826524020736
I0822 22:44:45.057] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/76443/pull-kubernetes-conformance-image-test/1164646826524020736/artifacts
W0822 22:44:46.239] CommandException: One or more URLs matched no objects.
E0822 22:44:46.377] Command failed
I0822 22:44:46.377] process 179124 exited with code 1 after 0.0m
W0822 22:44:46.377] Remote dir gs://kubernetes-jenkins/pr-logs/pull/76443/pull-kubernetes-conformance-image-test/1164646826524020736/artifacts does not exist yet
I0822 22:44:46.377] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/76443/pull-kubernetes-conformance-image-test/1164646826524020736/artifacts
I0822 22:44:48.931] process 179266 exited with code 0 after 0.0m
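The upload sequence above is two steps: probe the artifacts prefix with gsutil ls, where "One or more URLs matched no objects." merely means nothing has been uploaded there yet, then mirror the local _artifacts tree with gzip content-encoding for log, txt, and xml files. Approximately, with the gsutil flags copied from the log line and an invented wrapper name:

import subprocess

def upload_artifacts(local_dir, gcs_prefix):
    probe = subprocess.run(["gsutil", "ls", gcs_prefix + "/artifacts"],
                           capture_output=True)
    if probe.returncode != 0:
        # Benign on the first upload: the remote dir does not exist yet.
        print("Remote dir %s/artifacts does not exist yet" % gcs_prefix)
    # -m parallelizes; -z gzip-encodes the listed extensions on upload.
    subprocess.check_call(
        ["gsutil", "-m", "-q", "-o", "GSUtil:use_magicfile=True",
         "cp", "-r", "-c", "-z", "log,txt,xml",
         local_dir, gcs_prefix + "/artifacts"])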
W0822 22:44:48.932] metadata path /workspace/_artifacts/metadata.json does not exist
W0822 22:44:48.932] metadata not found or invalid, init with empty metadata
... skipping 23 lines ...