PR mgdevstack: Promote e2e "verifying service's sessionAffinity for ClusterIP and NodePort services" to Conformance
Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2019-09-28 01:36
Elapsed: 43m49s
Revision
Builder: gke-prow-ssd-pool-1a225945-mv4c
pod: 3bb101aa-e190-11e9-b177-a2483959e586
infra-commit: 6c6cf1700
repo: k8s.io/test-infra
repo-commit: 6c6cf1700e06b02b1fb55a6b3596bd0fb63be1d6
repos: {u'k8s.io/kubernetes': u'master:fe29e0f444142cf9d66768cfac77acfba24db07d,76443:502b8fde25bc41c0ebcea81f4df93d79b01fecd0', u'k8s.io/test-infra': u'master'}

No Test Failures!


Error lines from build-log.txt

... skipping 704 lines ...
I0928 01:42:02.907] time="01:42:02" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kind-control-plane]"
I0928 01:42:03.002] time="01:42:03" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kind-worker2]"
I0928 01:42:03.003] time="01:42:03" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kind-worker]"
I0928 01:42:03.005] time="01:42:03" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kind-control-plane]"
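
The inspect calls above resolve each node container's IPv4/IPv6 address pair from its Docker network settings. The same Go template can be replayed by hand against any node container from this run; a minimal sketch, assuming the containers are still up:

  docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}' kind-control-plane
  # prints the IPv4 address followed by an empty IPv6 field, e.g. "172.17.0.3," on this IPv4-only cluster
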
I0928 01:42:03.088] time="01:42:03" level=debug msg="Configuration Input data: {kind v1.17.0-alpha.0.1889+75fb7cda082e08 172.17.0.3:6443 6443 127.0.0.1 false 172.17.0.4 abcdef.0123456789abcdef 10.244.0.0/16 10.96.0.0/12 false {}}"
I0928 01:42:03.094] time="01:42:03" level=debug msg="Configuration Input data: {kind v1.17.0-alpha.0.1889+75fb7cda082e08 172.17.0.3:6443 6443 127.0.0.1 false 172.17.0.2 abcdef.0123456789abcdef 10.244.0.0/16 10.96.0.0/12 false {}}"
I0928 01:42:03.105] time="01:42:03" level=debug msg="Configuration generated:\n # config generated by kind\napiVersion: kubeadm.k8s.io/v1beta2\nkind: ClusterConfiguration\nmetadata:\n  name: config\nkubernetesVersion: v1.17.0-alpha.0.1889+75fb7cda082e08\nclusterName: \"kind\"\ncontrolPlaneEndpoint: \"172.17.0.3:6443\"\n# on docker for mac we have to expose the api server via port forward,\n# so we need to ensure the cert is valid for localhost so we can talk\n# to the cluster after rewriting the kubeconfig to point to localhost\napiServer:\n  certSANs: [localhost, \"127.0.0.1\"]\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\n    # configure ipv6 default addresses for IPv6 clusters\n    \nscheduler:\n  extraArgs:\n    # configure ipv6 default addresses for IPv6 clusters\n    \nnetworking:\n  podSubnet: \"10.244.0.0/16\"\n  serviceSubnet: \"10.96.0.0/12\"\n---\napiVersion: kubeadm.k8s.io/v1beta2\nkind: InitConfiguration\nmetadata:\n  name: config\n# we use a well know token for TLS bootstrap\nbootstrapTokens:\n- token: \"abcdef.0123456789abcdef\"\n# we use a well know port for making the API server discoverable inside docker network. \n# from the host machine such port will be accessible via a random local port instead.\nlocalAPIEndpoint:\n  advertiseAddress: \"172.17.0.4\"\n  bindPort: 6443\nnodeRegistration:\n  criSocket: \"/run/containerd/containerd.sock\"\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: \"172.17.0.4\"\n---\n# no-op entry that exists solely so it can be patched\napiVersion: kubeadm.k8s.io/v1beta2\nkind: JoinConfiguration\nmetadata:\n  name: config\n\nnodeRegistration:\n  criSocket: \"/run/containerd/containerd.sock\"\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: \"172.17.0.4\"\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: \"172.17.0.3:6443\"\n    token: \"abcdef.0123456789abcdef\"\n    unsafeSkipCAVerification: true\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nmetadata:\n  name: config\n# configure ipv6 addresses in IPv6 mode\n\n# disable disk resource management by default\n# kubelet will see the host disk that the inner container runtime\n# is ultimately backed by and attempt to recover disk space. we don't want that.\nimageGCHighThresholdPercent: 100\nevictionHard:\n  nodefs.available: \"0%\"\n  nodefs.inodesFree: \"0%\"\n  imagefs.available: \"0%\"\n---\n# no-op entry that exists solely so it can be patched\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\nmetadata:\n  name: config\n"
I0928 01:42:03.105] time="01:42:03" level=debug msg="Configuration Input data: {kind v1.17.0-alpha.0.1889+75fb7cda082e08 172.17.0.3:6443 6443 127.0.0.1 true 172.17.0.3 abcdef.0123456789abcdef 10.244.0.0/16 10.96.0.0/12 false {}}"
I0928 01:42:03.108] time="01:42:03" level=debug msg="Configuration generated:\n # config generated by kind\napiVersion: kubeadm.k8s.io/v1beta2\nkind: ClusterConfiguration\nmetadata:\n  name: config\nkubernetesVersion: v1.17.0-alpha.0.1889+75fb7cda082e08\nclusterName: \"kind\"\ncontrolPlaneEndpoint: \"172.17.0.3:6443\"\n# on docker for mac we have to expose the api server via port forward,\n# so we need to ensure the cert is valid for localhost so we can talk\n# to the cluster after rewriting the kubeconfig to point to localhost\napiServer:\n  certSANs: [localhost, \"127.0.0.1\"]\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\n    # configure ipv6 default addresses for IPv6 clusters\n    \nscheduler:\n  extraArgs:\n    # configure ipv6 default addresses for IPv6 clusters\n    \nnetworking:\n  podSubnet: \"10.244.0.0/16\"\n  serviceSubnet: \"10.96.0.0/12\"\n---\napiVersion: kubeadm.k8s.io/v1beta2\nkind: InitConfiguration\nmetadata:\n  name: config\n# we use a well know token for TLS bootstrap\nbootstrapTokens:\n- token: \"abcdef.0123456789abcdef\"\n# we use a well know port for making the API server discoverable inside docker network. \n# from the host machine such port will be accessible via a random local port instead.\nlocalAPIEndpoint:\n  advertiseAddress: \"172.17.0.3\"\n  bindPort: 6443\nnodeRegistration:\n  criSocket: \"/run/containerd/containerd.sock\"\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: \"172.17.0.3\"\n---\n# no-op entry that exists solely so it can be patched\napiVersion: kubeadm.k8s.io/v1beta2\nkind: JoinConfiguration\nmetadata:\n  name: config\ncontrolPlane:\n  localAPIEndpoint:\n    advertiseAddress: \"172.17.0.3\"\n    bindPort: 6443\nnodeRegistration:\n  criSocket: \"/run/containerd/containerd.sock\"\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: \"172.17.0.3\"\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: \"172.17.0.3:6443\"\n    token: \"abcdef.0123456789abcdef\"\n    unsafeSkipCAVerification: true\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nmetadata:\n  name: config\n# configure ipv6 addresses in IPv6 mode\n\n# disable disk resource management by default\n# kubelet will see the host disk that the inner container runtime\n# is ultimately backed by and attempt to recover disk space. we don't want that.\nimageGCHighThresholdPercent: 100\nevictionHard:\n  nodefs.available: \"0%\"\n  nodefs.inodesFree: \"0%\"\n  imagefs.available: \"0%\"\n---\n# no-op entry that exists solely so it can be patched\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\nmetadata:\n  name: config\n"
I0928 01:42:03.127] time="01:42:03" level=debug msg="Configuration generated:\n # config generated by kind\napiVersion: kubeadm.k8s.io/v1beta2\nkind: ClusterConfiguration\nmetadata:\n  name: config\nkubernetesVersion: v1.17.0-alpha.0.1889+75fb7cda082e08\nclusterName: \"kind\"\ncontrolPlaneEndpoint: \"172.17.0.3:6443\"\n# on docker for mac we have to expose the api server via port forward,\n# so we need to ensure the cert is valid for localhost so we can talk\n# to the cluster after rewriting the kubeconfig to point to localhost\napiServer:\n  certSANs: [localhost, \"127.0.0.1\"]\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\n    # configure ipv6 default addresses for IPv6 clusters\n    \nscheduler:\n  extraArgs:\n    # configure ipv6 default addresses for IPv6 clusters\n    \nnetworking:\n  podSubnet: \"10.244.0.0/16\"\n  serviceSubnet: \"10.96.0.0/12\"\n---\napiVersion: kubeadm.k8s.io/v1beta2\nkind: InitConfiguration\nmetadata:\n  name: config\n# we use a well know token for TLS bootstrap\nbootstrapTokens:\n- token: \"abcdef.0123456789abcdef\"\n# we use a well know port for making the API server discoverable inside docker network. \n# from the host machine such port will be accessible via a random local port instead.\nlocalAPIEndpoint:\n  advertiseAddress: \"172.17.0.2\"\n  bindPort: 6443\nnodeRegistration:\n  criSocket: \"/run/containerd/containerd.sock\"\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: \"172.17.0.2\"\n---\n# no-op entry that exists solely so it can be patched\napiVersion: kubeadm.k8s.io/v1beta2\nkind: JoinConfiguration\nmetadata:\n  name: config\n\nnodeRegistration:\n  criSocket: \"/run/containerd/containerd.sock\"\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: \"172.17.0.2\"\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: \"172.17.0.3:6443\"\n    token: \"abcdef.0123456789abcdef\"\n    unsafeSkipCAVerification: true\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nmetadata:\n  name: config\n# configure ipv6 addresses in IPv6 mode\n\n# disable disk resource management by default\n# kubelet will see the host disk that the inner container runtime\n# is ultimately backed by and attempt to recover disk space. we don't want that.\nimageGCHighThresholdPercent: 100\nevictionHard:\n  nodefs.available: \"0%\"\n  nodefs.inodesFree: \"0%\"\n  imagefs.available: \"0%\"\n---\n# no-op entry that exists solely so it can be patched\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\nmetadata:\n  name: config\n"
I0928 01:42:03.133] time="01:42:03" level=debug msg="Using kubeadm config:\napiServer:\n  certSANs:\n  - localhost\n  - 127.0.0.1\napiVersion: kubeadm.k8s.io/v1beta2\nclusterName: kind\ncontrolPlaneEndpoint: 172.17.0.3:6443\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\nkind: ClusterConfiguration\nkubernetesVersion: v1.17.0-alpha.0.1889+75fb7cda082e08\nnetworking:\n  podSubnet: 10.244.0.0/16\n  serviceSubnet: 10.96.0.0/12\nscheduler:\n  extraArgs: null\n---\napiVersion: kubeadm.k8s.io/v1beta2\nbootstrapTokens:\n- token: abcdef.0123456789abcdef\nkind: InitConfiguration\nlocalAPIEndpoint:\n  advertiseAddress: 172.17.0.4\n  bindPort: 6443\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.4\n---\napiVersion: kubeadm.k8s.io/v1beta2\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.3:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.4\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nevictionHard:\n  imagefs.available: 0%\n  nodefs.available: 0%\n  nodefs.inodesFree: 0%\nimageGCHighThresholdPercent: 100\nkind: KubeletConfiguration\n---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
I0928 01:42:03.133] time="01:42:03" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-worker2 mkdir -p /kind]"
I0928 01:42:03.140] time="01:42:03" level=debug msg="Using kubeadm config:\napiServer:\n  certSANs:\n  - localhost\n  - 127.0.0.1\napiVersion: kubeadm.k8s.io/v1beta2\nclusterName: kind\ncontrolPlaneEndpoint: 172.17.0.3:6443\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\nkind: ClusterConfiguration\nkubernetesVersion: v1.17.0-alpha.0.1889+75fb7cda082e08\nnetworking:\n  podSubnet: 10.244.0.0/16\n  serviceSubnet: 10.96.0.0/12\nscheduler:\n  extraArgs: null\n---\napiVersion: kubeadm.k8s.io/v1beta2\nbootstrapTokens:\n- token: abcdef.0123456789abcdef\nkind: InitConfiguration\nlocalAPIEndpoint:\n  advertiseAddress: 172.17.0.3\n  bindPort: 6443\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.3\n---\napiVersion: kubeadm.k8s.io/v1beta2\ncontrolPlane:\n  localAPIEndpoint:\n    advertiseAddress: 172.17.0.3\n    bindPort: 6443\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.3:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.3\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nevictionHard:\n  imagefs.available: 0%\n  nodefs.available: 0%\n  nodefs.inodesFree: 0%\nimageGCHighThresholdPercent: 100\nkind: KubeletConfiguration\n---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
I0928 01:42:03.141] time="01:42:03" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-control-plane mkdir -p /kind]"
I0928 01:42:03.145] time="01:42:03" level=debug msg="Using kubeadm config:\napiServer:\n  certSANs:\n  - localhost\n  - 127.0.0.1\napiVersion: kubeadm.k8s.io/v1beta2\nclusterName: kind\ncontrolPlaneEndpoint: 172.17.0.3:6443\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\nkind: ClusterConfiguration\nkubernetesVersion: v1.17.0-alpha.0.1889+75fb7cda082e08\nnetworking:\n  podSubnet: 10.244.0.0/16\n  serviceSubnet: 10.96.0.0/12\nscheduler:\n  extraArgs: null\n---\napiVersion: kubeadm.k8s.io/v1beta2\nbootstrapTokens:\n- token: abcdef.0123456789abcdef\nkind: InitConfiguration\nlocalAPIEndpoint:\n  advertiseAddress: 172.17.0.2\n  bindPort: 6443\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.2\n---\napiVersion: kubeadm.k8s.io/v1beta2\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.3:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.2\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nevictionHard:\n  imagefs.available: 0%\n  nodefs.available: 0%\n  nodefs.inodesFree: 0%\nimageGCHighThresholdPercent: 100\nkind: KubeletConfiguration\n---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
I0928 01:42:03.145] time="01:42:03" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-worker mkdir -p /kind]"
I0928 01:42:03.403] time="01:42:03" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -i kind-worker2 cp /dev/stdin /kind/kubeadm.conf]"
I0928 01:42:03.448] time="01:42:03" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -i kind-worker cp /dev/stdin /kind/kubeadm.conf]"
I0928 01:42:03.468] time="01:42:03" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -i kind-control-plane cp /dev/stdin /kind/kubeadm.conf]"
I0928 01:42:03.845]  ✓ Creating kubeadm config 📜
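
Each "cp /dev/stdin" exec above streams a node's rendered kubeadm config into the container at /kind/kubeadm.conf. To double-check what a given node actually received, the file can be read back; a sketch, using a node name from this run:

  docker exec --privileged kind-control-plane cat /kind/kubeadm.conf
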
I0928 01:42:03.846]  • Starting control-plane 🕹️  ...
I0928 01:42:03.846] time="01:42:03" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-control-plane kubeadm init --ignore-preflight-errors=all --config=/kind/kubeadm.conf --skip-token-print --v=6]"
I0928 01:42:22.566] time="01:42:22" level=debug msg="I0928 01:42:04.346456      78 initconfiguration.go:190] loading configuration from \"/kind/kubeadm.conf\"\n[config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta2, Kind=JoinConfiguration\nI0928 01:42:04.359420      78 feature_gate.go:216] feature gates: &{map[]}\nI0928 01:42:04.359741      78 checks.go:578] validating Kubernetes and kubeadm version\nI0928 01:42:04.359763      78 checks.go:167] validating if the firewall is enabled and active\n[init] Using Kubernetes version: v1.17.0-alpha.0.1889+75fb7cda082e08\n[preflight] Running pre-flight checks\nI0928 01:42:04.382718      78 checks.go:202] validating availability of port 6443\nI0928 01:42:04.382930      78 checks.go:202] validating availability of port 10251\nI0928 01:42:04.382964      78 checks.go:202] validating availability of port 10252\nI0928 01:42:04.382995      78 checks.go:287] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml\nI0928 01:42:04.383010      78 checks.go:287] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml\nI0928 01:42:04.383018      78 checks.go:287] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml\nI0928 01:42:04.383028      78 checks.go:287] validating the existence of file /etc/kubernetes/manifests/etcd.yaml\nI0928 01:42:04.383087      78 checks.go:433] validating if the connectivity type is via proxy or direct\nI0928 01:42:04.384136      78 checks.go:472] validating http connectivity to first IP address in the CIDR\nI0928 01:42:04.384277      78 checks.go:472] validating http connectivity to first IP address in the CIDR\nI0928 01:42:04.384303      78 checks.go:103] validating the container runtime\nI0928 01:42:04.526718      78 checks.go:377] validating the presence of executable crictl\nI0928 01:42:04.526793      78 checks.go:336] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables\n\t[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist\nI0928 01:42:04.526885      78 checks.go:336] validating the contents of file /proc/sys/net/ipv4/ip_forward\nI0928 01:42:04.526941      78 checks.go:650] validating whether swap is enabled or not\nI0928 01:42:04.526987      78 checks.go:377] validating the presence of executable ip\nI0928 01:42:04.527076      78 checks.go:377] validating the presence of executable iptables\nI0928 01:42:04.527113      78 checks.go:377] validating the presence of executable mount\nI0928 01:42:04.527140      78 checks.go:377] validating the presence of executable nsenter\nI0928 01:42:04.527198      78 checks.go:377] validating the presence of executable ebtables\nI0928 01:42:04.527268      78 checks.go:377] validating the presence of executable ethtool\nI0928 01:42:04.527298      78 checks.go:377] validating the presence of executable socat\nI0928 01:42:04.527331      78 checks.go:377] validating the presence of executable tc\nI0928 01:42:04.527356      78 checks.go:377] validating the presence of executable touch\nI0928 01:42:04.527396      78 checks.go:521] running all checks\nI0928 01:42:04.538224      78 checks.go:407] checking whether the given node name is reachable using net.LookupHost\nI0928 01:42:04.539841      78 checks.go:619] validating kubelet version\nI0928 01:42:04.628563      78 checks.go:129] validating if the service is enabled and active\nI0928 01:42:04.646339      78 checks.go:202] validating 
availability of port 10250\nI0928 01:42:04.646449      78 checks.go:202] validating availability of port 2379\nI0928 01:42:04.646478      78 checks.go:202] validating availability of port 2380\nI0928 01:42:04.646510      78 checks.go:250] validating the existence and emptiness of directory /var/lib/etcd\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'\nI0928 01:42:04.668193      78 checks.go:839] image exists: k8s.gcr.io/kube-apiserver:v1.17.0-alpha.0.1889_75fb7cda082e08\nI0928 01:42:04.676889      78 checks.go:839] image exists: k8s.gcr.io/kube-controller-manager:v1.17.0-alpha.0.1889_75fb7cda082e08\nI0928 01:42:04.689090      78 checks.go:839] image exists: k8s.gcr.io/kube-scheduler:v1.17.0-alpha.0.1889_75fb7cda082e08\nI0928 01:42:04.698160      78 checks.go:839] image exists: k8s.gcr.io/kube-proxy:v1.17.0-alpha.0.1889_75fb7cda082e08\nI0928 01:42:04.713706      78 checks.go:839] image exists: k8s.gcr.io/pause:3.1\nI0928 01:42:04.725112      78 checks.go:839] image exists: k8s.gcr.io/etcd:3.3.15-0\nI0928 01:42:04.735771      78 checks.go:839] image exists: k8s.gcr.io/coredns:1.6.2\nI0928 01:42:04.736037      78 kubelet.go:61] Stopping the kubelet\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\nI0928 01:42:04.773658      78 kubelet.go:79] Starting the kubelet\n[kubelet-start] Activating the kubelet service\n[certs] Using certificateDir folder \"/etc/kubernetes/pki\"\nI0928 01:42:04.856089      78 certs.go:104] creating a new certificate authority for ca\n[certs] Generating \"ca\" certificate and key\n[certs] Generating \"apiserver\" certificate and key\n[certs] apiserver serving cert is signed for DNS names [kind-control-plane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 172.17.0.3 172.17.0.3 127.0.0.1]\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\nI0928 01:42:05.578436      78 certs.go:104] creating a new certificate authority for front-proxy-ca\n[certs] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\nI0928 01:42:05.966560      78 certs.go:104] creating a new certificate authority for etcd-ca\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.17.0.3 127.0.0.1 ::1]\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.17.0.3 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"apiserver-etcd-client\" certificate and key\nI0928 01:42:07.838795      78 certs.go:70] creating a new public/private key files for signing service account users\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\nI0928 01:42:08.154658      78 kubeconfig.go:79] creating kubeconfig file for admin.conf\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\nI0928 01:42:08.294192      78 kubeconfig.go:79] creating kubeconfig file for 
kubelet.conf\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\nI0928 01:42:08.600308      78 kubeconfig.go:79] creating kubeconfig file for controller-manager.conf\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\nI0928 01:42:08.989860      78 kubeconfig.go:79] creating kubeconfig file for scheduler.conf\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\nI0928 01:42:09.456162      78 manifests.go:91] [control-plane] getting StaticPodSpecs\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\nI0928 01:42:09.472326      78 manifests.go:116] [control-plane] wrote static Pod manifest for component \"kube-apiserver\" to \"/etc/kubernetes/manifests/kube-apiserver.yaml\"\nI0928 01:42:09.472367      78 manifests.go:91] [control-plane] getting StaticPodSpecs\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\nI0928 01:42:09.474312      78 manifests.go:116] [control-plane] wrote static Pod manifest for component \"kube-controller-manager\" to \"/etc/kubernetes/manifests/kube-controller-manager.yaml\"\nI0928 01:42:09.474354      78 manifests.go:91] [control-plane] getting StaticPodSpecs\nI0928 01:42:09.475619      78 manifests.go:116] [control-plane] wrote static Pod manifest for component \"kube-scheduler\" to \"/etc/kubernetes/manifests/kube-scheduler.yaml\"\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\nI0928 01:42:09.480217      78 local.go:69] [etcd] wrote Static Pod manifest for a local etcd member to \"/etc/kubernetes/manifests/etcd.yaml\"\nI0928 01:42:09.480244      78 waitcontrolplane.go:80] [wait-control-plane] Waiting for the API server to be healthy\nI0928 01:42:09.481627      78 loader.go:375] Config loaded from file:  /etc/kubernetes/admin.conf\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". 
This can take up to 4m0s\nI0928 01:42:09.492103      78 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s  in 2 milliseconds\nI0928 01:42:09.992906      78 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds\nI0928 01:42:10.492928      78 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds\nI0928 01:42:10.993259      78 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds\nI0928 01:42:11.492851      78 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds\nI0928 01:42:11.992900      78 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds\nI0928 01:42:12.492935      78 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds\nI0928 01:42:12.992884      78 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds\nI0928 01:42:13.492946      78 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds\nI0928 01:42:13.992835      78 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds\nI0928 01:42:14.492874      78 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds\nI0928 01:42:19.314565      78 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s 500 Internal Server Error in 4321 milliseconds\nI0928 01:42:19.495106      78 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s 500 Internal Server Error in 2 milliseconds\nI0928 01:42:19.995167      78 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s 500 Internal Server Error in 2 milliseconds\nI0928 01:42:20.494043      78 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s 500 Internal Server Error in 1 milliseconds\nI0928 01:42:20.995317      78 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s 200 OK in 2 milliseconds\nI0928 01:42:20.995432      78 uploadconfig.go:108] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap\n[apiclient] All control plane components are healthy after 11.508299 seconds\n[upload-config] Storing the configuration used in ConfigMap \"kubeadm-config\" in the \"kube-system\" Namespace\nI0928 01:42:21.002319      78 round_trippers.go:443] POST https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 5 milliseconds\nI0928 01:42:21.008188      78 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 201 Created in 4 milliseconds\nI0928 01:42:21.014189      78 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 201 Created in 4 milliseconds\n[kubelet] Creating a ConfigMap \"kubelet-config-1.17\" in namespace kube-system with the configuration for the kubelets in the cluster\nI0928 01:42:21.015162      78 uploadconfig.go:122] [upload-config] Uploading the kubelet component config to a ConfigMap\nI0928 01:42:21.020305      78 round_trippers.go:443] POST https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 3 milliseconds\nI0928 01:42:21.024968      78 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 201 Created in 4 milliseconds\nI0928 01:42:21.028955      78 round_trippers.go:443] POST 
https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 201 Created in 3 milliseconds\nI0928 01:42:21.029146      78 uploadconfig.go:127] [upload-config] Preserving the CRISocket information for the control-plane node\nI0928 01:42:21.029158      78 patchnode.go:30] [patchnode] Uploading the CRI Socket information \"/run/containerd/containerd.sock\" to the Node API object \"kind-control-plane\" as an annotation\nI0928 01:42:21.533146      78 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-control-plane 200 OK in 3 milliseconds\nI0928 01:42:21.541312      78 round_trippers.go:443] PATCH https://172.17.0.3:6443/api/v1/nodes/kind-control-plane 200 OK in 5 milliseconds\n[upload-certs] Skipping phase. Please see --upload-certs\n[mark-control-plane] Marking the node kind-control-plane as control-plane by adding the label \"node-role.kubernetes.io/master=''\"\n[mark-control-plane] Marking the node kind-control-plane as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]\nI0928 01:42:22.045381      78 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-control-plane 200 OK in 3 milliseconds\nI0928 01:42:22.051076      78 round_trippers.go:443] PATCH https://172.17.0.3:6443/api/v1/nodes/kind-control-plane 200 OK in 4 milliseconds\nI0928 01:42:22.054069      78 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-system/secrets/bootstrap-token-abcdef 404 Not Found in 2 milliseconds\n[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles\nI0928 01:42:22.060815      78 round_trippers.go:443] POST https://172.17.0.3:6443/api/v1/namespaces/kube-system/secrets 201 Created in 6 milliseconds\n[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials\nI0928 01:42:22.066470      78 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 4 milliseconds\n[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token\nI0928 01:42:22.070619      78 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 3 milliseconds\n[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster\n[bootstrap-token] Creating the \"cluster-info\" ConfigMap in the \"kube-public\" namespace\nI0928 01:42:22.074865      78 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 3 milliseconds\nI0928 01:42:22.075012      78 clusterinfo.go:45] [bootstrap-token] loading admin kubeconfig\nI0928 01:42:22.078519      78 loader.go:375] Config loaded from file:  /etc/kubernetes/admin.conf\nI0928 01:42:22.078553      78 clusterinfo.go:53] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfig\nI0928 01:42:22.079045      78 clusterinfo.go:65] [bootstrap-token] creating/updating ConfigMap in kube-public namespace\nI0928 01:42:22.083553      78 round_trippers.go:443] POST https://172.17.0.3:6443/api/v1/namespaces/kube-public/configmaps 201 Created in 4 milliseconds\nI0928 01:42:22.083804      78 clusterinfo.go:79] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace\nI0928 01:42:22.087645      78 round_trippers.go:443] POST 
https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles 201 Created in 3 milliseconds\nI0928 01:42:22.093394      78 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings 201 Created in 5 milliseconds\nI0928 01:42:22.095879      78 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps/kube-dns 404 Not Found in 2 milliseconds\nI0928 01:42:22.098369      78 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps/coredns 404 Not Found in 1 milliseconds\nI0928 01:42:22.101250      78 round_trippers.go:443] POST https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 2 milliseconds\nI0928 01:42:22.107391      78 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/clusterroles 201 Created in 4 milliseconds\nI0928 01:42:22.111156      78 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 3 milliseconds\nI0928 01:42:22.116594      78 round_trippers.go:443] POST https://172.17.0.3:6443/api/v1/namespaces/kube-system/serviceaccounts 201 Created in 4 milliseconds\nI0928 01:42:22.138091      78 round_trippers.go:443] POST https://172.17.0.3:6443/apis/apps/v1/namespaces/kube-system/deployments 201 Created in 14 milliseconds\nI0928 01:42:22.150157      78 round_trippers.go:443] POST https://172.17.0.3:6443/api/v1/namespaces/kube-system/services 201 Created in 9 milliseconds\n[addons] Applied essential addon: CoreDNS\nI0928 01:42:22.242344      78 request.go:538] Throttling request took 91.695993ms, request: POST:https://172.17.0.3:6443/api/v1/namespaces/kube-system/serviceaccounts\nI0928 01:42:22.246869      78 round_trippers.go:443] POST https://172.17.0.3:6443/api/v1/namespaces/kube-system/serviceaccounts 201 Created in 4 milliseconds\nI0928 01:42:22.442301      78 request.go:538] Throttling request took 192.349947ms, request: POST:https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps\nI0928 01:42:22.447098      78 round_trippers.go:443] POST https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 4 milliseconds\nI0928 01:42:22.466370      78 round_trippers.go:443] POST https://172.17.0.3:6443/apis/apps/v1/namespaces/kube-system/daemonsets 201 Created in 10 milliseconds\nI0928 01:42:22.471304      78 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 4 milliseconds\nI0928 01:42:22.475025      78 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 201 Created in 3 milliseconds\nI0928 01:42:22.479084      78 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 201 Created in 3 milliseconds\n[addons] Applied essential addon: kube-proxy\nI0928 01:42:22.480882      78 loader.go:375] Config loaded from file:  /etc/kubernetes/admin.conf\nI0928 01:42:22.482173      78 loader.go:375] Config loaded from file:  /etc/kubernetes/admin.conf\n\nYour Kubernetes control-plane has initialized successfully!\n\nTo start using your cluster, you need to run the following as a regular user:\n\n  mkdir -p $HOME/.kube\n  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\n  sudo chown $(id -u):$(id -g) $HOME/.kube/config\n\nYou should now deploy a pod network to the cluster.\nRun \"kubectl 
apply -f [podnetwork].yaml\" with one of the options listed at:\n  https://kubernetes.io/docs/concepts/cluster-administration/addons/\n\nYou can now join any number of control-plane nodes by copying certificate authorities \nand service account keys on each node and then running the following as root:\n\n  kubeadm join 172.17.0.3:6443 --token <value withheld> \\\n    --discovery-token-ca-cert-hash sha256:c0899171d77acfb25a742e87f6f540821d4c94962c4e00d3f776bfb138cb2bdd \\\n    --control-plane \t  \n\nThen you can join any number of worker nodes by running the following on each as root:\n\nkubeadm join 172.17.0.3:6443 --token <value withheld> \\\n    --discovery-token-ca-cert-hash sha256:c0899171d77acfb25a742e87f6f540821d4c94962c4e00d3f776bfb138cb2bdd "
I0928 01:42:22.567] time="01:42:22" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{(index (index .NetworkSettings.Ports \"6443/tcp\") 0).HostPort}} kind-control-plane]"
I0928 01:42:22.619] time="01:42:22" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-control-plane cat /etc/kubernetes/admin.conf]"
I0928 01:42:22.848]  ✓ Starting control-plane 🕹️
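
kind reads /etc/kubernetes/admin.conf off the control-plane node (previous exec) to build the host-side kubeconfig. The /healthz probe that kubeadm polled during init can also be repeated manually from inside the node; a sketch, assuming the admin kubeconfig path shown in this log:

  docker exec --privileged kind-control-plane \
    kubectl --kubeconfig=/etc/kubernetes/admin.conf get --raw /healthz
  # prints "ok" once the control-plane components are healthy
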
I0928 01:42:22.848]  • Installing CNI 🔌  ...
I0928 01:42:22.848] time="01:42:22" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-control-plane cat /kind/manifests/default-cni.yaml]"
I0928 01:42:23.076] time="01:42:23" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -i kind-control-plane kubectl create --kubeconfig=/etc/kubernetes/admin.conf -f -]"
I0928 01:42:24.121]  ✓ Installing CNI 🔌
I0928 01:42:24.121]  • Installing StorageClass 💾  ...
I0928 01:42:24.121] time="01:42:24" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -i kind-control-plane kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f -]"
I0928 01:42:24.591]  ✓ Installing StorageClass 💾
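
The CNI and StorageClass steps above share one pattern: cat a manifest baked into the node image, then pipe it into kubectl on the control plane. Reassembled as a single host-side pipeline (a sketch, mirroring the two execs in the CNI step):

  docker exec --privileged kind-control-plane cat /kind/manifests/default-cni.yaml \
    | docker exec --privileged -i kind-control-plane \
        kubectl create --kubeconfig=/etc/kubernetes/admin.conf -f -
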
I0928 01:42:24.592]  • Joining worker nodes 🚜  ...
I0928 01:42:24.593] time="01:42:24" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-worker2 kubeadm join --config /kind/kubeadm.conf --ignore-preflight-errors=all --v=6]"
I0928 01:42:24.593] time="01:42:24" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-worker kubeadm join --config /kind/kubeadm.conf --ignore-preflight-errors=all --v=6]"
I0928 01:42:53.768] time="01:42:53" level=debug msg="I0928 01:42:24.837598     272 join.go:368] [preflight] found NodeName empty; using OS hostname as NodeName\nI0928 01:42:24.837656     272 joinconfiguration.go:75] loading configuration from \"/kind/kubeadm.conf\"\nI0928 01:42:24.840126     272 preflight.go:90] [preflight] Running general checks\n[preflight] Running pre-flight checks\nI0928 01:42:24.840217     272 checks.go:250] validating the existence and emptiness of directory /etc/kubernetes/manifests\nI0928 01:42:24.840231     272 checks.go:287] validating the existence of file /etc/kubernetes/kubelet.conf\nI0928 01:42:24.840240     272 checks.go:287] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf\nI0928 01:42:24.840250     272 checks.go:103] validating the container runtime\nI0928 01:42:24.852548     272 checks.go:377] validating the presence of executable crictl\nI0928 01:42:24.852600     272 checks.go:336] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables\n\t[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist\nI0928 01:42:24.852659     272 checks.go:336] validating the contents of file /proc/sys/net/ipv4/ip_forward\nI0928 01:42:24.852749     272 checks.go:650] validating whether swap is enabled or not\nI0928 01:42:24.852795     272 checks.go:377] validating the presence of executable ip\nI0928 01:42:24.852904     272 checks.go:377] validating the presence of executable iptables\nI0928 01:42:24.853117     272 checks.go:377] validating the presence of executable mount\nI0928 01:42:24.853178     272 checks.go:377] validating the presence of executable nsenter\nI0928 01:42:24.853291     272 checks.go:377] validating the presence of executable ebtables\nI0928 01:42:24.853404     272 checks.go:377] validating the presence of executable ethtool\nI0928 01:42:24.853447     272 checks.go:377] validating the presence of executable socat\nI0928 01:42:24.853533     272 checks.go:377] validating the presence of executable tc\nI0928 01:42:24.853587     272 checks.go:377] validating the presence of executable touch\nI0928 01:42:24.853659     272 checks.go:521] running all checks\nI0928 01:42:24.866806     272 checks.go:407] checking whether the given node name is reachable using net.LookupHost\nI0928 01:42:24.867317     272 checks.go:619] validating kubelet version\nI0928 01:42:24.949081     272 checks.go:129] validating if the service is enabled and active\nI0928 01:42:24.966911     272 checks.go:202] validating availability of port 10250\nI0928 01:42:24.967183     272 checks.go:287] validating the existence of file /etc/kubernetes/pki/ca.crt\nI0928 01:42:24.967201     272 checks.go:433] validating if the connectivity type is via proxy or direct\nI0928 01:42:24.967250     272 join.go:438] [preflight] Discovering cluster-info\nI0928 01:42:24.967384     272 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.3:6443\"\nI0928 01:42:24.968211     272 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.3:6443\"\nI0928 01:42:24.977992     272 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 9 milliseconds\nI0928 01:42:24.979103     272 token.go:202] [discovery] Failed to connect to API Server \"172.17.0.3:6443\": token id \"abcdef\" is invalid for this cluster or it has expired. 
Use \"kubeadm token create\" on the control-plane node to create a new valid token\nI0928 01:42:29.979401     272 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.3:6443\"\nI0928 01:42:29.980179     272 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.3:6443\"\nI0928 01:42:29.983490     272 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 3 milliseconds\nI0928 01:42:29.984075     272 token.go:202] [discovery] Failed to connect to API Server \"172.17.0.3:6443\": token id \"abcdef\" is invalid for this cluster or it has expired. Use \"kubeadm token create\" on the control-plane node to create a new valid token\nI0928 01:42:34.984293     272 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.3:6443\"\nI0928 01:42:34.985096     272 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.3:6443\"\nI0928 01:42:34.988355     272 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 3 milliseconds\nI0928 01:42:34.988904     272 token.go:202] [discovery] Failed to connect to API Server \"172.17.0.3:6443\": token id \"abcdef\" is invalid for this cluster or it has expired. Use \"kubeadm token create\" on the control-plane node to create a new valid token\nI0928 01:42:39.989117     272 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.3:6443\"\nI0928 01:42:39.989939     272 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.3:6443\"\nI0928 01:42:39.993795     272 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 3 milliseconds\nI0928 01:42:39.995539     272 token.go:109] [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server \"172.17.0.3:6443\"\nI0928 01:42:39.995565     272 token.go:205] [discovery] Successfully established connection with API Server \"172.17.0.3:6443\"\nI0928 01:42:39.995598     272 discovery.go:51] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process\nI0928 01:42:39.995613     272 join.go:452] [preflight] Fetching init configuration\nI0928 01:42:39.995627     272 join.go:490] [preflight] Retrieving KubeConfig objects\n[preflight] Reading configuration from the cluster...\n[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'\nI0928 01:42:40.009842     272 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 200 OK in 13 milliseconds\nI0928 01:42:40.013936     272 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps/kube-proxy 200 OK in 2 milliseconds\nI0928 01:42:40.017067     272 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.17 200 OK in 1 milliseconds\nI0928 01:42:40.022631     272 interface.go:384] Looking for default routes with IPv4 addresses\nI0928 01:42:40.022750     272 interface.go:389] Default route transits interface \"eth0\"\nI0928 01:42:40.022953     272 interface.go:196] Interface eth0 is up\nI0928 01:42:40.023152     272 interface.go:244] Interface \"eth0\" has 1 addresses :[172.17.0.4/16].\nI0928 01:42:40.023213     272 interface.go:211] Checking addr  
172.17.0.4/16.\nI0928 01:42:40.023244     272 interface.go:218] IP found 172.17.0.4\nI0928 01:42:40.023277     272 interface.go:250] Found valid IPv4 address 172.17.0.4 for interface \"eth0\".\nI0928 01:42:40.023302     272 interface.go:395] Found active IP 172.17.0.4 \nI0928 01:42:40.023464     272 preflight.go:101] [preflight] Running configuration dependant checks\nI0928 01:42:40.023526     272 controlplaneprepare.go:211] [download-certs] Skipping certs download\nI0928 01:42:40.023581     272 kubelet.go:107] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf\nI0928 01:42:40.025830     272 kubelet.go:115] [kubelet-start] writing CA certificate at /etc/kubernetes/pki/ca.crt\nI0928 01:42:40.027063     272 loader.go:375] Config loaded from file:  /etc/kubernetes/bootstrap-kubelet.conf\nI0928 01:42:40.028838     272 kubelet.go:133] [kubelet-start] Stopping the kubelet\n[kubelet-start] Downloading configuration for the kubelet from the \"kubelet-config-1.17\" ConfigMap in the kube-system namespace\nI0928 01:42:40.046177     272 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.17 200 OK in 2 milliseconds\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\nI0928 01:42:40.057363     272 kubelet.go:150] [kubelet-start] Starting the kubelet\n[kubelet-start] Activating the kubelet service\n[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...\nI0928 01:42:41.165819     272 loader.go:375] Config loaded from file:  /etc/kubernetes/kubelet.conf\nI0928 01:42:41.178497     272 loader.go:375] Config loaded from file:  /etc/kubernetes/kubelet.conf\nI0928 01:42:41.180728     272 kubelet.go:168] [kubelet-start] preserving the crisocket information for the node\nI0928 01:42:41.181241     272 patchnode.go:30] [patchnode] Uploading the CRI Socket information \"/run/containerd/containerd.sock\" to the Node API object \"kind-worker2\" as an annotation\nI0928 01:42:41.690325     272 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 8 milliseconds\nI0928 01:42:42.185917     272 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 4 milliseconds\nI0928 01:42:42.684858     272 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds\nI0928 01:42:43.185832     272 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 4 milliseconds\nI0928 01:42:43.685162     272 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds\nI0928 01:42:44.184568     272 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0928 01:42:44.687217     272 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 5 milliseconds\nI0928 01:42:45.184603     272 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0928 01:42:45.684841     272 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds\nI0928 01:42:46.184739     272 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0928 01:42:46.685226    
 272 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds\nI0928 01:42:47.186429     272 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0928 01:42:47.684815     272 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0928 01:42:48.186033     272 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 4 milliseconds\nI0928 01:42:48.685608     272 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds\nI0928 01:42:49.185572     272 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds\nI0928 01:42:49.684598     272 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0928 01:42:50.185251     272 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds\nI0928 01:42:50.684665     272 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0928 01:42:51.184440     272 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0928 01:42:51.685853     272 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 4 milliseconds\nI0928 01:42:52.186312     272 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 4 milliseconds\nI0928 01:42:52.684667     272 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0928 01:42:53.184552     272 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0928 01:42:53.684821     272 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 200 OK in 3 milliseconds\nI0928 01:42:53.691629     272 round_trippers.go:443] PATCH https://172.17.0.3:6443/api/v1/nodes/kind-worker2 200 OK in 4 milliseconds\n\nThis node has joined the cluster:\n* Certificate signing request was sent to apiserver and a response was received.\n* The Kubelet was informed of the new secure connection details.\n\nRun 'kubectl get nodes' on the control-plane to see this node join the cluster.\n"
I0928 01:42:54.282] time="01:42:54" level=debug msg="I0928 01:42:24.839260     267 join.go:368] [preflight] found NodeName empty; using OS hostname as NodeName
I0928 01:42:24.839310     267 joinconfiguration.go:75] loading configuration from "/kind/kubeadm.conf"
[preflight] Running pre-flight checks
I0928 01:42:24.841401     267 preflight.go:90] [preflight] Running general checks
I0928 01:42:24.841526     267 checks.go:250] validating the existence and emptiness of directory /etc/kubernetes/manifests
I0928 01:42:24.841595     267 checks.go:287] validating the existence of file /etc/kubernetes/kubelet.conf
I0928 01:42:24.841628     267 checks.go:287] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf
I0928 01:42:24.841664     267 checks.go:103] validating the container runtime
I0928 01:42:24.852368     267 checks.go:377] validating the presence of executable crictl
I0928 01:42:24.852437     267 checks.go:336] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
I0928 01:42:24.852532     267 checks.go:336] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0928 01:42:24.852716     267 checks.go:650] validating whether swap is enabled or not
I0928 01:42:24.853092     267 checks.go:377] validating the presence of executable ip
I0928 01:42:24.853166     267 checks.go:377] validating the presence of executable iptables
I0928 01:42:24.853194     267 checks.go:377] validating the presence of executable mount
I0928 01:42:24.853214     267 checks.go:377] validating the presence of executable nsenter
I0928 01:42:24.853244     267 checks.go:377] validating the presence of executable ebtables
I0928 01:42:24.853305     267 checks.go:377] validating the presence of executable ethtool
I0928 01:42:24.853334     267 checks.go:377] validating the presence of executable socat
I0928 01:42:24.853370     267 checks.go:377] validating the presence of executable tc
I0928 01:42:24.853396     267 checks.go:377] validating the presence of executable touch
I0928 01:42:24.853439     267 checks.go:521] running all checks
I0928 01:42:24.866364     267 checks.go:407] checking whether the given node name is reachable using net.LookupHost
I0928 01:42:24.867282     267 checks.go:619] validating kubelet version
I0928 01:42:24.960163     267 checks.go:129] validating if the service is enabled and active
I0928 01:42:24.973388     267 checks.go:202] validating availability of port 10250
I0928 01:42:24.973879     267 checks.go:287] validating the existence of file /etc/kubernetes/pki/ca.crt
I0928 01:42:24.974056     267 checks.go:433] validating if the connectivity type is via proxy or direct
I0928 01:42:24.974194     267 join.go:438] [preflight] Discovering cluster-info
I0928 01:42:24.974397     267 token.go:199] [discovery] Trying to connect to API Server "172.17.0.3:6443"
I0928 01:42:24.975299     267 token.go:74] [discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.3:6443"
I0928 01:42:24.985523     267 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 10 milliseconds
I0928 01:42:24.986424     267 token.go:202] [discovery] Failed to connect to API Server "172.17.0.3:6443": token id "abcdef" is invalid for this cluster or it has expired. Use "kubeadm token create" on the control-plane node to create a new valid token
I0928 01:42:29.986679     267 token.go:199] [discovery] Trying to connect to API Server "172.17.0.3:6443"
I0928 01:42:29.987384     267 token.go:74] [discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.3:6443"
I0928 01:42:29.990434     267 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 2 milliseconds
I0928 01:42:29.990740     267 token.go:202] [discovery] Failed to connect to API Server "172.17.0.3:6443": token id "abcdef" is invalid for this cluster or it has expired. Use "kubeadm token create" on the control-plane node to create a new valid token
I0928 01:42:34.990938     267 token.go:199] [discovery] Trying to connect to API Server "172.17.0.3:6443"
I0928 01:42:34.991594     267 token.go:74] [discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.3:6443"
I0928 01:42:34.994672     267 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 2 milliseconds
I0928 01:42:34.994954     267 token.go:202] [discovery] Failed to connect to API Server "172.17.0.3:6443": token id "abcdef" is invalid for this cluster or it has expired. Use "kubeadm token create" on the control-plane node to create a new valid token
I0928 01:42:39.995134     267 token.go:199] [discovery] Trying to connect to API Server "172.17.0.3:6443"
I0928 01:42:39.996180     267 token.go:74] [discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.3:6443"
I0928 01:42:39.999611     267 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 3 milliseconds
I0928 01:42:40.001099     267 token.go:109] [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "172.17.0.3:6443"
I0928 01:42:40.001126     267 token.go:205] [discovery] Successfully established connection with API Server "172.17.0.3:6443"
I0928 01:42:40.001163     267 discovery.go:51] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process
I0928 01:42:40.001179     267 join.go:452] [preflight] Fetching init configuration
I0928 01:42:40.001188     267 join.go:490] [preflight] Retrieving KubeConfig objects
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
I0928 01:42:40.012112     267 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 200 OK in 9 milliseconds
I0928 01:42:40.016643     267 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps/kube-proxy 200 OK in 3 milliseconds
I0928 01:42:40.020491     267 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.17 200 OK in 2 milliseconds
I0928 01:42:40.023002     267 interface.go:384] Looking for default routes with IPv4 addresses
I0928 01:42:40.023025     267 interface.go:389] Default route transits interface "eth0"
I0928 01:42:40.023133     267 interface.go:196] Interface eth0 is up
I0928 01:42:40.023196     267 interface.go:244] Interface "eth0" has 1 addresses :[172.17.0.2/16].
I0928 01:42:40.023216     267 interface.go:211] Checking addr  172.17.0.2/16.
I0928 01:42:40.023227     267 interface.go:218] IP found 172.17.0.2
I0928 01:42:40.023238     267 interface.go:250] Found valid IPv4 address 172.17.0.2 for interface "eth0".
I0928 01:42:40.023247     267 interface.go:395] Found active IP 172.17.0.2
I0928 01:42:40.023348     267 preflight.go:101] [preflight] Running configuration dependant checks
I0928 01:42:40.023368     267 controlplaneprepare.go:211] [download-certs] Skipping certs download
I0928 01:42:40.023381     267 kubelet.go:107] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf
I0928 01:42:40.025991     267 kubelet.go:115] [kubelet-start] writing CA certificate at /etc/kubernetes/pki/ca.crt
I0928 01:42:40.026956     267 loader.go:375] Config loaded from file:  /etc/kubernetes/bootstrap-kubelet.conf
I0928 01:42:40.028208     267 kubelet.go:133] [kubelet-start] Stopping the kubelet
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
I0928 01:42:40.044465     267 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.17 200 OK in 2 milliseconds
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0928 01:42:40.056638     267 kubelet.go:150] [kubelet-start] Starting the kubelet
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
I0928 01:42:41.665907     267 loader.go:375] Config loaded from file:  /etc/kubernetes/kubelet.conf
I0928 01:42:41.680594     267 loader.go:375] Config loaded from file:  /etc/kubernetes/kubelet.conf
I0928 01:42:41.682160     267 kubelet.go:168] [kubelet-start] preserving the crisocket information for the node
I0928 01:42:41.682187     267 patchnode.go:30] [patchnode] Uploading the CRI Socket information "/run/containerd/containerd.sock" to the Node API object "kind-worker" as an annotation
I0928 01:42:42.191811     267 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 9 milliseconds
I0928 01:42:42.685015     267 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds
I0928 01:42:43.186013     267 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds
I0928 01:42:43.685332     267 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds
I0928 01:42:44.185208     267 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds
I0928 01:42:44.687215     267 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 4 milliseconds
I0928 01:42:45.185484     267 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds
I0928 01:42:45.684896     267 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds
I0928 01:42:46.185085     267 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds
I0928 01:42:46.685673     267 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds
I0928 01:42:47.185749     267 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds
I0928 01:42:47.684830     267 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds
I0928 01:42:48.185255     267 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds
I0928 01:42:48.685353     267 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds
I0928 01:42:49.186117     267 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds
I0928 01:42:49.685072     267 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds
I0928 01:42:50.185253     267 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds
I0928 01:42:50.685399     267 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds
I0928 01:42:51.185152     267 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds
I0928 01:42:51.685209     267 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds
I0928 01:42:52.185480     267 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds
I0928 01:42:52.685097     267 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds
I0928 01:42:53.184760     267 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds
I0928 01:42:53.684810     267 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds
I0928 01:42:54.186514     267 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 200 OK in 4 milliseconds
I0928 01:42:54.196462     267 round_trippers.go:443] PATCH https://172.17.0.3:6443/api/v1/nodes/kind-worker 200 OK in 6 milliseconds

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster."
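The join output above is two poll-until-ready loops: kubeadm retries the kube-public/cluster-info lookup roughly every 5 seconds until the bootstrap token validates, then polls GET /api/v1/nodes/kind-worker roughly every 0.5 seconds until the TLS-bootstrapped kubelet has registered the node and the CRI-socket annotation can be PATCHed. A minimal Python sketch of that pattern follows; poll_until and wait_for_node are illustrative names, not kubeadm's API.

# Minimal sketch of the poll-until-ready pattern seen in the join log above.
# poll_until/wait_for_node are hypothetical helpers, not from kubeadm.
import os
import subprocess
import time

def poll_until(probe, interval, timeout):
    """Call probe() every `interval` seconds until it returns True or we time out."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if probe():
            return True
        time.sleep(interval)
    return False

def wait_for_node(name, kubeconfig, interval=0.5, timeout=60):
    """True once `kubectl get node <name>` stops returning NotFound (exit code 0)."""
    def probe():
        with open(os.devnull, 'w') as devnull:
            return subprocess.call(
                ['kubectl', '--kubeconfig', kubeconfig, 'get', 'node', name],
                stdout=devnull, stderr=devnull) == 0
    return poll_until(probe, interval, timeout)

For the run above the equivalent call would be wait_for_node('kind-worker', '/etc/kubernetes/admin.conf'), which succeeds at 01:42:54 after roughly twelve seconds of 404s.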
I0928 01:42:54.282]  ✓ Joining worker nodes 🚜
I0928 01:42:54.282]  • Waiting ≤ 1m0s for control-plane = Ready ⏳  ...
I0928 01:42:54.283] time="01:42:54" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-control-plane kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes --selector=node-role.kubernetes.io/master -o=jsonpath='{.items..status.conditions[-1:].status}']"
I0928 01:42:54.723]  ✓ Waiting ≤ 1m0s for control-plane = Ready ⏳
I0928 01:42:54.723]  • Ready after 0s 💚
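The Ready gate above shells into the control-plane container and evaluates a jsonpath over the master-labeled nodes, passing once the expression (which picks the status of each node's last condition, conventionally Ready) prints "True". A rough Python re-creation of that single check, for illustration only (kind itself implements this in Go):

# Re-creation of the readiness probe command logged above; illustrative only.
import subprocess

JSONPATH = '{.items..status.conditions[-1:].status}'

def control_plane_ready(container='kind-control-plane'):
    """Run kubectl inside the node container, exactly as the logged command does."""
    out = subprocess.check_output([
        'docker', 'exec', '--privileged', container,
        'kubectl', '--kubeconfig=/etc/kubernetes/admin.conf',
        'get', 'nodes', '--selector=node-role.kubernetes.io/master',
        '-o=jsonpath=' + JSONPATH])
    # The last condition on a node is conventionally Ready; "True" means ready.
    return b'True' in out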
I0928 01:42:54.724] Cluster creation complete. You can now use the cluster with:
... skipping 526 lines ...
I0928 02:19:10.151] [02:19:10] Pod status is: Running
I0928 02:19:15.240] [02:19:15] Pod status is: Running
I0928 02:19:20.586] [02:19:20] Pod status is: Running
I0928 02:19:25.679] [02:19:25] Pod status is: Running
I0928 02:19:30.765] [02:19:30] Pod status is: Running
I0928 02:19:35.854] [02:19:35] Pod status is: Running
W0928 02:19:40.946] Error from server (NotFound): pods "e2e-conformance-test" not found
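The loop above prints the conformance pod's phase every ~5 seconds; the final NotFound means the pod disappeared between polls, which is what lets the script fall through to cleanup. A sketch of such a loop, with the loop body assumed rather than taken from the harness (Python 2 flavored, like the job's runner; only the pod name and the 5s cadence come from the log):

# Assumed shape of the status loop above.
import subprocess
import time

def watch_pod(name='e2e-conformance-test', interval=5):
    while True:
        try:
            phase = subprocess.check_output(
                ['kubectl', 'get', 'pod', name,
                 '-o', 'jsonpath={.status.phase}'],
                stderr=subprocess.STDOUT).strip()
        except subprocess.CalledProcessError:
            # "Error from server (NotFound)": the pod is gone; stop polling.
            return
        print('[%s] Pod status is: %s' % (time.strftime('%H:%M:%S'), phase))
        time.sleep(interval)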
W0928 02:19:40.950] + cleanup
W0928 02:19:40.950] + kind export logs /workspace/_artifacts/logs
I0928 02:19:43.313] Exported logs to: /workspace/_artifacts/logs
W0928 02:19:43.413] + [[ true = true ]]
W0928 02:19:43.414] + kind delete cluster
I0928 02:19:43.514] Deleting cluster "kind" ...
... skipping 8 lines ...
W0928 02:19:49.903]     check(*cmd)
W0928 02:19:49.903]   File "/workspace/./test-infra/jenkins/../scenarios/execute.py", line 30, in check
W0928 02:19:49.903]     subprocess.check_call(cmd)
W0928 02:19:49.903]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0928 02:19:49.904]     raise CalledProcessError(retcode, cmd)
W0928 02:19:49.904] subprocess.CalledProcessError: Command '('bash', '-c', 'cd ./../../k8s.io/kubernetes && source ./../test-infra/experiment/kind-conformance-image-e2e.sh')' returned non-zero exit status 1
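The traceback shows the failure path exactly: check() in scenarios/execute.py is a thin wrapper around subprocess.check_call, so the bash conformance script's non-zero exit surfaces as CalledProcessError and fails the job. A reconstruction consistent with the traceback (the logging line is an assumption):

# check() as implied by the traceback above; the print line is assumed.
import subprocess

def check(*cmd):
    """Run cmd, raising CalledProcessError on any non-zero exit."""
    print('Run: %r' % (cmd,))
    subprocess.check_call(cmd)

# The failing invocation, verbatim from the log:
# check('bash', '-c', 'cd ./../../k8s.io/kubernetes && source ./../test-infra/experiment/kind-conformance-image-e2e.sh')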
E0928 02:19:49.908] Command failed
I0928 02:19:49.909] process 670 exited with code 1 after 42.3m
E0928 02:19:49.909] FAIL: pull-kubernetes-conformance-image-test
I0928 02:19:49.909] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0928 02:19:50.437] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0928 02:19:50.490] process 106193 exited with code 0 after 0.0m
I0928 02:19:50.491] Call:  gcloud config get-value account
I0928 02:19:50.786] process 106205 exited with code 0 after 0.0m
I0928 02:19:50.786] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0928 02:19:50.787] Upload result and artifacts...
I0928 02:19:50.787] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/76443/pull-kubernetes-conformance-image-test/1177758830621102080
I0928 02:19:50.787] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/76443/pull-kubernetes-conformance-image-test/1177758830621102080/artifacts
W0928 02:19:51.869] CommandException: One or more URLs matched no objects.
E0928 02:19:52.010] Command failed
I0928 02:19:52.010] process 106217 exited with code 1 after 0.0m
W0928 02:19:52.010] Remote dir gs://kubernetes-jenkins/pr-logs/pull/76443/pull-kubernetes-conformance-image-test/1177758830621102080/artifacts not exist yet
I0928 02:19:52.010] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/76443/pull-kubernetes-conformance-image-test/1177758830621102080/artifacts
I0928 02:19:54.421] process 106359 exited with code 0 after 0.0m
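The two gsutil calls above form a probe-then-copy upload: an `ls` whose failure (CommandException: no URLs matched) merely means the artifacts prefix does not exist yet, followed by an unconditional recursive copy with gzip content-encoding for log/txt/xml files. Sketched in Python under that reading (the function name is illustrative):

# Sketch of the probe-then-copy upload above.
import subprocess

def upload_artifacts(local_dir, gcs_dir):
    # A failed ls is non-fatal: the prefix simply has not been created yet.
    if subprocess.call(['gsutil', 'ls', gcs_dir]) != 0:
        print('Remote dir %s does not exist yet' % gcs_dir)
    subprocess.check_call([
        'gsutil', '-m', '-q', '-o', 'GSUtil:use_magicfile=True',
        'cp', '-r', '-c', '-z', 'log,txt,xml', local_dir, gcs_dir])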
W0928 02:19:54.422] metadata path /workspace/_artifacts/metadata.json does not exist
W0928 02:19:54.422] metadata not found or invalid, init with empty metadata
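The last two warnings describe a deliberate fallback: a missing or unparseable metadata.json is downgraded to empty metadata rather than failing the upload. A sketch of that behavior, with the function name assumed:

# Fallback behavior described by the warnings above; load_metadata is an assumed name.
import json
import os

def load_metadata(path='/workspace/_artifacts/metadata.json'):
    if not os.path.isfile(path):
        print('metadata path %s does not exist' % path)
        return {}
    try:
        with open(path) as fp:
            meta = json.load(fp)
    except ValueError:
        return {}  # invalid JSON: init with empty metadata
    return meta if isinstance(meta, dict) else {}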
... skipping 23 lines ...