PR: mgdevstack: Promote e2e "verifying service's sessionAffinity for ClusterIP and NodePort services" to Conformance
Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2019-09-29 02:32
Elapsed: 1h16m
Revision:
Builder: gke-prow-ssd-pool-1a225945-49gz
pod: 46b55973-e261-11e9-9c8b-2a8c1da8b840
infra-commit: dba192364
repo: k8s.io/test-infra
repo-commit: dba192364d996a698c268331b693f3c48ae8ee76
repos: {u'k8s.io/kubernetes': u'master:29f23e6647b0e25708a70935818dab318fea54a3,76443:502b8fde25bc41c0ebcea81f4df93d79b01fecd0', u'k8s.io/test-infra': u'master'}
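The PR under test here promotes an e2e case around Service session affinity. As a rough illustration only (resource and label names below are hypothetical, not taken from the test), a Service that pins each client to a single backend looks like this; the NodePort variant differs only in `type: NodePort`:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: affinity-clusterip    # hypothetical name, not from the test
    spec:
      type: ClusterIP             # the promoted test also covers type: NodePort
      selector:
        app: affinity-backend     # assumed label on the backend pods
      sessionAffinity: ClientIP   # route each client IP to the same endpoint
      sessionAffinityConfig:
        clientIP:
          timeoutSeconds: 10800   # default affinity timeout
      ports:
      - port: 80
        targetPort: 8080
    EOF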

No Test Failures!


Error lines from build-log.txt

... skipping 705 lines ...
I0929 02:37:37.553] time="02:37:37" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kind-worker2]"
I0929 02:37:37.554] time="02:37:37" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kind-worker]"
I0929 02:37:37.554] time="02:37:37" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kind-control-plane]"
I0929 02:37:37.610] time="02:37:37" level=debug msg="Configuration Input data: {kind v1.17.0-alpha.0.1904+21753e8ec6ecb4 172.17.0.4:6443 6443 127.0.0.1 true 172.17.0.4 abcdef.0123456789abcdef 10.244.0.0/16 10.96.0.0/12 false {}}"
I0929 02:37:37.611] time="02:37:37" level=debug msg="Configuration Input data: {kind v1.17.0-alpha.0.1904+21753e8ec6ecb4 172.17.0.4:6443 6443 127.0.0.1 false 172.17.0.3 abcdef.0123456789abcdef 10.244.0.0/16 10.96.0.0/12 false {}}"
I0929 02:37:37.611] time="02:37:37" level=debug msg="Configuration Input data: {kind v1.17.0-alpha.0.1904+21753e8ec6ecb4 172.17.0.4:6443 6443 127.0.0.1 false 172.17.0.2 abcdef.0123456789abcdef 10.244.0.0/16 10.96.0.0/12 false {}}"
I0929 02:37:37.616] time="02:37:37" level=debug msg="Configuration generated:\n # config generated by kind\napiVersion: kubeadm.k8s.io/v1beta2\nkind: ClusterConfiguration\nmetadata:\n  name: config\nkubernetesVersion: v1.17.0-alpha.0.1904+21753e8ec6ecb4\nclusterName: \"kind\"\ncontrolPlaneEndpoint: \"172.17.0.4:6443\"\n# on docker for mac we have to expose the api server via port forward,\n# so we need to ensure the cert is valid for localhost so we can talk\n# to the cluster after rewriting the kubeconfig to point to localhost\napiServer:\n  certSANs: [localhost, \"127.0.0.1\"]\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\n    # configure ipv6 default addresses for IPv6 clusters\n    \nscheduler:\n  extraArgs:\n    # configure ipv6 default addresses for IPv6 clusters\n    \nnetworking:\n  podSubnet: \"10.244.0.0/16\"\n  serviceSubnet: \"10.96.0.0/12\"\n---\napiVersion: kubeadm.k8s.io/v1beta2\nkind: InitConfiguration\nmetadata:\n  name: config\n# we use a well know token for TLS bootstrap\nbootstrapTokens:\n- token: \"abcdef.0123456789abcdef\"\n# we use a well know port for making the API server discoverable inside docker network. \n# from the host machine such port will be accessible via a random local port instead.\nlocalAPIEndpoint:\n  advertiseAddress: \"172.17.0.3\"\n  bindPort: 6443\nnodeRegistration:\n  criSocket: \"/run/containerd/containerd.sock\"\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: \"172.17.0.3\"\n---\n# no-op entry that exists solely so it can be patched\napiVersion: kubeadm.k8s.io/v1beta2\nkind: JoinConfiguration\nmetadata:\n  name: config\n\nnodeRegistration:\n  criSocket: \"/run/containerd/containerd.sock\"\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: \"172.17.0.3\"\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: \"172.17.0.4:6443\"\n    token: \"abcdef.0123456789abcdef\"\n    unsafeSkipCAVerification: true\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nmetadata:\n  name: config\n# configure ipv6 addresses in IPv6 mode\n\n# disable disk resource management by default\n# kubelet will see the host disk that the inner container runtime\n# is ultimately backed by and attempt to recover disk space. we don't want that.\nimageGCHighThresholdPercent: 100\nevictionHard:\n  nodefs.available: \"0%\"\n  nodefs.inodesFree: \"0%\"\n  imagefs.available: \"0%\"\n---\n# no-op entry that exists solely so it can be patched\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\nmetadata:\n  name: config\n"
I0929 02:37:37.618] time="02:37:37" level=debug msg="Configuration generated:\n # config generated by kind\napiVersion: kubeadm.k8s.io/v1beta2\nkind: ClusterConfiguration\nmetadata:\n  name: config\nkubernetesVersion: v1.17.0-alpha.0.1904+21753e8ec6ecb4\nclusterName: \"kind\"\ncontrolPlaneEndpoint: \"172.17.0.4:6443\"\n# on docker for mac we have to expose the api server via port forward,\n# so we need to ensure the cert is valid for localhost so we can talk\n# to the cluster after rewriting the kubeconfig to point to localhost\napiServer:\n  certSANs: [localhost, \"127.0.0.1\"]\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\n    # configure ipv6 default addresses for IPv6 clusters\n    \nscheduler:\n  extraArgs:\n    # configure ipv6 default addresses for IPv6 clusters\n    \nnetworking:\n  podSubnet: \"10.244.0.0/16\"\n  serviceSubnet: \"10.96.0.0/12\"\n---\napiVersion: kubeadm.k8s.io/v1beta2\nkind: InitConfiguration\nmetadata:\n  name: config\n# we use a well know token for TLS bootstrap\nbootstrapTokens:\n- token: \"abcdef.0123456789abcdef\"\n# we use a well know port for making the API server discoverable inside docker network. \n# from the host machine such port will be accessible via a random local port instead.\nlocalAPIEndpoint:\n  advertiseAddress: \"172.17.0.2\"\n  bindPort: 6443\nnodeRegistration:\n  criSocket: \"/run/containerd/containerd.sock\"\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: \"172.17.0.2\"\n---\n# no-op entry that exists solely so it can be patched\napiVersion: kubeadm.k8s.io/v1beta2\nkind: JoinConfiguration\nmetadata:\n  name: config\n\nnodeRegistration:\n  criSocket: \"/run/containerd/containerd.sock\"\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: \"172.17.0.2\"\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: \"172.17.0.4:6443\"\n    token: \"abcdef.0123456789abcdef\"\n    unsafeSkipCAVerification: true\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nmetadata:\n  name: config\n# configure ipv6 addresses in IPv6 mode\n\n# disable disk resource management by default\n# kubelet will see the host disk that the inner container runtime\n# is ultimately backed by and attempt to recover disk space. we don't want that.\nimageGCHighThresholdPercent: 100\nevictionHard:\n  nodefs.available: \"0%\"\n  nodefs.inodesFree: \"0%\"\n  imagefs.available: \"0%\"\n---\n# no-op entry that exists solely so it can be patched\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\nmetadata:\n  name: config\n"
I0929 02:37:37.621] time="02:37:37" level=debug msg="Configuration generated:\n # config generated by kind\napiVersion: kubeadm.k8s.io/v1beta2\nkind: ClusterConfiguration\nmetadata:\n  name: config\nkubernetesVersion: v1.17.0-alpha.0.1904+21753e8ec6ecb4\nclusterName: \"kind\"\ncontrolPlaneEndpoint: \"172.17.0.4:6443\"\n# on docker for mac we have to expose the api server via port forward,\n# so we need to ensure the cert is valid for localhost so we can talk\n# to the cluster after rewriting the kubeconfig to point to localhost\napiServer:\n  certSANs: [localhost, \"127.0.0.1\"]\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\n    # configure ipv6 default addresses for IPv6 clusters\n    \nscheduler:\n  extraArgs:\n    # configure ipv6 default addresses for IPv6 clusters\n    \nnetworking:\n  podSubnet: \"10.244.0.0/16\"\n  serviceSubnet: \"10.96.0.0/12\"\n---\napiVersion: kubeadm.k8s.io/v1beta2\nkind: InitConfiguration\nmetadata:\n  name: config\n# we use a well know token for TLS bootstrap\nbootstrapTokens:\n- token: \"abcdef.0123456789abcdef\"\n# we use a well know port for making the API server discoverable inside docker network. \n# from the host machine such port will be accessible via a random local port instead.\nlocalAPIEndpoint:\n  advertiseAddress: \"172.17.0.4\"\n  bindPort: 6443\nnodeRegistration:\n  criSocket: \"/run/containerd/containerd.sock\"\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: \"172.17.0.4\"\n---\n# no-op entry that exists solely so it can be patched\napiVersion: kubeadm.k8s.io/v1beta2\nkind: JoinConfiguration\nmetadata:\n  name: config\ncontrolPlane:\n  localAPIEndpoint:\n    advertiseAddress: \"172.17.0.4\"\n    bindPort: 6443\nnodeRegistration:\n  criSocket: \"/run/containerd/containerd.sock\"\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: \"172.17.0.4\"\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: \"172.17.0.4:6443\"\n    token: \"abcdef.0123456789abcdef\"\n    unsafeSkipCAVerification: true\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nmetadata:\n  name: config\n# configure ipv6 addresses in IPv6 mode\n\n# disable disk resource management by default\n# kubelet will see the host disk that the inner container runtime\n# is ultimately backed by and attempt to recover disk space. we don't want that.\nimageGCHighThresholdPercent: 100\nevictionHard:\n  nodefs.available: \"0%\"\n  nodefs.inodesFree: \"0%\"\n  imagefs.available: \"0%\"\n---\n# no-op entry that exists solely so it can be patched\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\nmetadata:\n  name: config\n"
I0929 02:37:37.636] time="02:37:37" level=debug msg="Using kubeadm config:\napiServer:\n  certSANs:\n  - localhost\n  - 127.0.0.1\napiVersion: kubeadm.k8s.io/v1beta2\nclusterName: kind\ncontrolPlaneEndpoint: 172.17.0.4:6443\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\nkind: ClusterConfiguration\nkubernetesVersion: v1.17.0-alpha.0.1904+21753e8ec6ecb4\nnetworking:\n  podSubnet: 10.244.0.0/16\n  serviceSubnet: 10.96.0.0/12\nscheduler:\n  extraArgs: null\n---\napiVersion: kubeadm.k8s.io/v1beta2\nbootstrapTokens:\n- token: abcdef.0123456789abcdef\nkind: InitConfiguration\nlocalAPIEndpoint:\n  advertiseAddress: 172.17.0.4\n  bindPort: 6443\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.4\n---\napiVersion: kubeadm.k8s.io/v1beta2\ncontrolPlane:\n  localAPIEndpoint:\n    advertiseAddress: 172.17.0.4\n    bindPort: 6443\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.4:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.4\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nevictionHard:\n  imagefs.available: 0%\n  nodefs.available: 0%\n  nodefs.inodesFree: 0%\nimageGCHighThresholdPercent: 100\nkind: KubeletConfiguration\n---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
I0929 02:37:37.637] time="02:37:37" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-control-plane mkdir -p /kind]"
I0929 02:37:37.638] time="02:37:37" level=debug msg="Using kubeadm config:\napiServer:\n  certSANs:\n  - localhost\n  - 127.0.0.1\napiVersion: kubeadm.k8s.io/v1beta2\nclusterName: kind\ncontrolPlaneEndpoint: 172.17.0.4:6443\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\nkind: ClusterConfiguration\nkubernetesVersion: v1.17.0-alpha.0.1904+21753e8ec6ecb4\nnetworking:\n  podSubnet: 10.244.0.0/16\n  serviceSubnet: 10.96.0.0/12\nscheduler:\n  extraArgs: null\n---\napiVersion: kubeadm.k8s.io/v1beta2\nbootstrapTokens:\n- token: abcdef.0123456789abcdef\nkind: InitConfiguration\nlocalAPIEndpoint:\n  advertiseAddress: 172.17.0.3\n  bindPort: 6443\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.3\n---\napiVersion: kubeadm.k8s.io/v1beta2\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.4:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.3\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nevictionHard:\n  imagefs.available: 0%\n  nodefs.available: 0%\n  nodefs.inodesFree: 0%\nimageGCHighThresholdPercent: 100\nkind: KubeletConfiguration\n---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
I0929 02:37:37.639] time="02:37:37" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-worker mkdir -p /kind]"
I0929 02:37:37.640] time="02:37:37" level=debug msg="Using kubeadm config:\napiServer:\n  certSANs:\n  - localhost\n  - 127.0.0.1\napiVersion: kubeadm.k8s.io/v1beta2\nclusterName: kind\ncontrolPlaneEndpoint: 172.17.0.4:6443\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\nkind: ClusterConfiguration\nkubernetesVersion: v1.17.0-alpha.0.1904+21753e8ec6ecb4\nnetworking:\n  podSubnet: 10.244.0.0/16\n  serviceSubnet: 10.96.0.0/12\nscheduler:\n  extraArgs: null\n---\napiVersion: kubeadm.k8s.io/v1beta2\nbootstrapTokens:\n- token: abcdef.0123456789abcdef\nkind: InitConfiguration\nlocalAPIEndpoint:\n  advertiseAddress: 172.17.0.2\n  bindPort: 6443\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.2\n---\napiVersion: kubeadm.k8s.io/v1beta2\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.4:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.2\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nevictionHard:\n  imagefs.available: 0%\n  nodefs.available: 0%\n  nodefs.inodesFree: 0%\nimageGCHighThresholdPercent: 100\nkind: KubeletConfiguration\n---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
I0929 02:37:37.641] time="02:37:37" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-worker2 mkdir -p /kind]"
I0929 02:37:37.880] time="02:37:37" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -i kind-worker cp /dev/stdin /kind/kubeadm.conf]"
I0929 02:37:37.887] time="02:37:37" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -i kind-control-plane cp /dev/stdin /kind/kubeadm.conf]"
I0929 02:37:37.896] time="02:37:37" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -i kind-worker2 cp /dev/stdin /kind/kubeadm.conf]"
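For reference, the three nodes being configured above (kind-control-plane, kind-worker, kind-worker2) correspond to a kind cluster config along these lines — a sketch assuming the v1alpha3 config schema this era of kind used:

    cat <<'EOF' > kind-config.yaml
    kind: Cluster
    apiVersion: kind.sigs.k8s.io/v1alpha3
    nodes:
    - role: control-plane
    - role: worker
    - role: worker
    EOF
    kind create cluster --config kind-config.yaml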
I0929 02:37:38.212]  ✓ Creating kubeadm config 📜
I0929 02:37:38.213]  • Starting control-plane 🕹️  ...
I0929 02:37:38.213] time="02:37:38" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-control-plane kubeadm init --ignore-preflight-errors=all --config=/kind/kubeadm.conf --skip-token-print --v=6]"
I0929 02:38:15.296] time="02:38:15" level=debug msg="I0929 02:37:38.615735      81 initconfiguration.go:190] loading configuration from \"/kind/kubeadm.conf\"\n[config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta2, Kind=JoinConfiguration\nI0929 02:37:38.628395      81 feature_gate.go:216] feature gates: &{map[]}\n[init] Using Kubernetes version: v1.17.0-alpha.0.1904+21753e8ec6ecb4\n[preflight] Running pre-flight checks\nI0929 02:37:38.628630      81 checks.go:578] validating Kubernetes and kubeadm version\nI0929 02:37:38.628655      81 checks.go:167] validating if the firewall is enabled and active\nI0929 02:37:38.646978      81 checks.go:202] validating availability of port 6443\nI0929 02:37:38.647232      81 checks.go:202] validating availability of port 10251\nI0929 02:37:38.647285      81 checks.go:202] validating availability of port 10252\nI0929 02:37:38.647360      81 checks.go:287] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml\nI0929 02:37:38.647389      81 checks.go:287] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml\nI0929 02:37:38.647426      81 checks.go:287] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml\nI0929 02:37:38.647432      81 checks.go:287] validating the existence of file /etc/kubernetes/manifests/etcd.yaml\nI0929 02:37:38.647439      81 checks.go:433] validating if the connectivity type is via proxy or direct\nI0929 02:37:38.647975      81 checks.go:472] validating http connectivity to first IP address in the CIDR\nI0929 02:37:38.647998      81 checks.go:472] validating http connectivity to first IP address in the CIDR\nI0929 02:37:38.648007      81 checks.go:103] validating the container runtime\nI0929 02:37:38.789495      81 checks.go:377] validating the presence of executable crictl\nI0929 02:37:38.789553      81 checks.go:336] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables\n\t[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist\nI0929 02:37:38.789722      81 checks.go:336] validating the contents of file /proc/sys/net/ipv4/ip_forward\nI0929 02:37:38.789797      81 checks.go:650] validating whether swap is enabled or not\nI0929 02:37:38.789855      81 checks.go:377] validating the presence of executable ip\nI0929 02:37:38.789980      81 checks.go:377] validating the presence of executable iptables\nI0929 02:37:38.790030      81 checks.go:377] validating the presence of executable mount\nI0929 02:37:38.790045      81 checks.go:377] validating the presence of executable nsenter\nI0929 02:37:38.790107      81 checks.go:377] validating the presence of executable ebtables\nI0929 02:37:38.790229      81 checks.go:377] validating the presence of executable ethtool\nI0929 02:37:38.790343      81 checks.go:377] validating the presence of executable socat\nI0929 02:37:38.790405      81 checks.go:377] validating the presence of executable tc\nI0929 02:37:38.790445      81 checks.go:377] validating the presence of executable touch\nI0929 02:37:38.790483      81 checks.go:521] running all checks\nI0929 02:37:38.797666      81 checks.go:407] checking whether the given node name is reachable using net.LookupHost\nI0929 02:37:38.797955      81 checks.go:619] validating kubelet version\nI0929 02:37:38.881210      81 checks.go:129] validating if the service is enabled and active\nI0929 02:37:38.893625      81 checks.go:202] validating 
availability of port 10250\nI0929 02:37:38.893708      81 checks.go:202] validating availability of port 2379\nI0929 02:37:38.893734      81 checks.go:202] validating availability of port 2380\nI0929 02:37:38.893763      81 checks.go:250] validating the existence and emptiness of directory /var/lib/etcd\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'\nI0929 02:37:38.905497      81 checks.go:839] image exists: k8s.gcr.io/kube-apiserver:v1.17.0-alpha.0.1904_21753e8ec6ecb4\nI0929 02:37:38.913485      81 checks.go:839] image exists: k8s.gcr.io/kube-controller-manager:v1.17.0-alpha.0.1904_21753e8ec6ecb4\nI0929 02:37:38.921810      81 checks.go:839] image exists: k8s.gcr.io/kube-scheduler:v1.17.0-alpha.0.1904_21753e8ec6ecb4\nI0929 02:37:38.929463      81 checks.go:839] image exists: k8s.gcr.io/kube-proxy:v1.17.0-alpha.0.1904_21753e8ec6ecb4\nI0929 02:37:38.936824      81 checks.go:839] image exists: k8s.gcr.io/pause:3.1\nI0929 02:37:38.943855      81 checks.go:839] image exists: k8s.gcr.io/etcd:3.3.15-0\nI0929 02:37:38.952703      81 checks.go:839] image exists: k8s.gcr.io/coredns:1.6.2\nI0929 02:37:38.952953      81 kubelet.go:61] Stopping the kubelet\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\nI0929 02:37:38.976030      81 kubelet.go:79] Starting the kubelet\n[kubelet-start] Activating the kubelet service\nI0929 02:37:39.044431      81 certs.go:104] creating a new certificate authority for ca\n[certs] Using certificateDir folder \"/etc/kubernetes/pki\"\n[certs] Generating \"ca\" certificate and key\n[certs] Generating \"apiserver\" certificate and key\n[certs] apiserver serving cert is signed for DNS names [kind-control-plane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 172.17.0.4 172.17.0.4 127.0.0.1]\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\nI0929 02:37:40.329324      81 certs.go:104] creating a new certificate authority for front-proxy-ca\n[certs] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\nI0929 02:37:40.688308      81 certs.go:104] creating a new certificate authority for etcd-ca\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.17.0.4 127.0.0.1 ::1]\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.17.0.4 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"apiserver-etcd-client\" certificate and key\nI0929 02:37:41.863037      81 certs.go:70] creating a new public/private key files for signing service account users\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\nI0929 02:37:42.052110      81 kubeconfig.go:79] creating kubeconfig file for admin.conf\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\nI0929 02:37:42.395044      81 kubeconfig.go:79] creating kubeconfig file for 
kubelet.conf\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\nI0929 02:37:42.545541      81 kubeconfig.go:79] creating kubeconfig file for controller-manager.conf\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\nI0929 02:37:42.828813      81 kubeconfig.go:79] creating kubeconfig file for scheduler.conf\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\nI0929 02:37:43.724690      81 manifests.go:91] [control-plane] getting StaticPodSpecs\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\nI0929 02:37:43.734506      81 manifests.go:116] [control-plane] wrote static Pod manifest for component \"kube-apiserver\" to \"/etc/kubernetes/manifests/kube-apiserver.yaml\"\nI0929 02:37:43.734542      81 manifests.go:91] [control-plane] getting StaticPodSpecs\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\nI0929 02:37:43.735899      81 manifests.go:116] [control-plane] wrote static Pod manifest for component \"kube-controller-manager\" to \"/etc/kubernetes/manifests/kube-controller-manager.yaml\"\nI0929 02:37:43.735936      81 manifests.go:91] [control-plane] getting StaticPodSpecs\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\nI0929 02:37:43.736904      81 manifests.go:116] [control-plane] wrote static Pod manifest for component \"kube-scheduler\" to \"/etc/kubernetes/manifests/kube-scheduler.yaml\"\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\nI0929 02:37:43.737802      81 local.go:69] [etcd] wrote Static Pod manifest for a local etcd member to \"/etc/kubernetes/manifests/etcd.yaml\"\nI0929 02:37:43.737824      81 waitcontrolplane.go:80] [wait-control-plane] Waiting for the API server to be healthy\nI0929 02:37:43.739075      81 loader.go:375] Config loaded from file:  /etc/kubernetes/admin.conf\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". 
This can take up to 4m0s\nI0929 02:37:43.743811      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 2 milliseconds\nI0929 02:37:44.244522      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:37:44.744489      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:37:45.244462      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:37:45.744434      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:37:46.244441      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:37:46.744424      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:37:47.244411      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:37:47.744463      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:37:48.244411      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:37:48.744388      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:37:49.244366      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:37:49.744464      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:37:50.244395      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:37:50.744538      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:37:51.244566      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:37:51.744422      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:37:52.244468      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:37:52.744493      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:37:53.244523      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:37:53.744488      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:37:54.244470      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:37:54.744514      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:37:55.244487      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:37:55.744422      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:37:56.244403      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:37:56.744403      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:37:57.244454      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:37:57.744524      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 
02:37:58.244493      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:37:58.744436      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:37:59.244435      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:37:59.744539      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:38:00.244406      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:38:00.744453      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:38:01.244409      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:38:01.744433      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:38:02.244405      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:38:02.744398      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:38:03.244370      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:38:03.744428      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:38:04.244504      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:38:04.744371      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:38:05.244538      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:38:05.744459      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:38:06.244532      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:38:06.744421      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:38:07.244455      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:38:07.744548      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0929 02:38:12.248424      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s 500 Internal Server Error in 4004 milliseconds\nI0929 02:38:12.746812      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s 500 Internal Server Error in 2 milliseconds\nI0929 02:38:13.245417      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s 500 Internal Server Error in 1 milliseconds\nI0929 02:38:13.746461      81 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=32s 200 OK in 2 milliseconds\nI0929 02:38:13.746569      81 uploadconfig.go:108] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap\n[apiclient] All control plane components are healthy after 30.005515 seconds\n[upload-config] Storing the configuration used in ConfigMap \"kubeadm-config\" in the \"kube-system\" Namespace\nI0929 02:38:13.755644      81 round_trippers.go:443] POST https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 4 milliseconds\nI0929 02:38:13.759354      81 round_trippers.go:443] POST 
https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 201 Created in 3 milliseconds\nI0929 02:38:13.763205      81 round_trippers.go:443] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 201 Created in 2 milliseconds\nI0929 02:38:13.763832      81 uploadconfig.go:122] [upload-config] Uploading the kubelet component config to a ConfigMap\n[kubelet] Creating a ConfigMap \"kubelet-config-1.17\" in namespace kube-system with the configuration for the kubelets in the cluster\nI0929 02:38:13.767506      81 round_trippers.go:443] POST https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 2 milliseconds\nI0929 02:38:13.769938      81 round_trippers.go:443] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 201 Created in 2 milliseconds\nI0929 02:38:13.772468      81 round_trippers.go:443] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 201 Created in 2 milliseconds\nI0929 02:38:13.772567      81 uploadconfig.go:127] [upload-config] Preserving the CRISocket information for the control-plane node\nI0929 02:38:13.772578      81 patchnode.go:30] [patchnode] Uploading the CRI Socket information \"/run/containerd/containerd.sock\" to the Node API object \"kind-control-plane\" as an annotation\nI0929 02:38:14.275891      81 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-control-plane 200 OK in 3 milliseconds\nI0929 02:38:14.281933      81 round_trippers.go:443] PATCH https://172.17.0.4:6443/api/v1/nodes/kind-control-plane 200 OK in 3 milliseconds\n[upload-certs] Skipping phase. Please see --upload-certs\n[mark-control-plane] Marking the node kind-control-plane as control-plane by adding the label \"node-role.kubernetes.io/master=''\"\n[mark-control-plane] Marking the node kind-control-plane as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]\nI0929 02:38:14.785312      81 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-control-plane 200 OK in 2 milliseconds\nI0929 02:38:14.793158      81 round_trippers.go:443] PATCH https://172.17.0.4:6443/api/v1/nodes/kind-control-plane 200 OK in 4 milliseconds\n[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles\nI0929 02:38:14.795611      81 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-system/secrets/bootstrap-token-abcdef 404 Not Found in 1 milliseconds\nI0929 02:38:14.799682      81 round_trippers.go:443] POST https://172.17.0.4:6443/api/v1/namespaces/kube-system/secrets 201 Created in 3 milliseconds\n[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials\nI0929 02:38:14.802903      81 round_trippers.go:443] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 2 milliseconds\n[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token\n[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster\nI0929 02:38:14.806674      81 round_trippers.go:443] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 2 milliseconds\n[bootstrap-token] Creating the \"cluster-info\" ConfigMap in the \"kube-public\" namespace\nI0929 02:38:14.808902      81 
round_trippers.go:443] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 2 milliseconds\nI0929 02:38:14.809014      81 clusterinfo.go:45] [bootstrap-token] loading admin kubeconfig\nI0929 02:38:14.809597      81 loader.go:375] Config loaded from file:  /etc/kubernetes/admin.conf\nI0929 02:38:14.809618      81 clusterinfo.go:53] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfig\nI0929 02:38:14.809916      81 clusterinfo.go:65] [bootstrap-token] creating/updating ConfigMap in kube-public namespace\nI0929 02:38:14.812491      81 round_trippers.go:443] POST https://172.17.0.4:6443/api/v1/namespaces/kube-public/configmaps 201 Created in 2 milliseconds\nI0929 02:38:14.812682      81 clusterinfo.go:79] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace\nI0929 02:38:14.814956      81 round_trippers.go:443] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles 201 Created in 2 milliseconds\nI0929 02:38:14.817477      81 round_trippers.go:443] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings 201 Created in 2 milliseconds\nI0929 02:38:14.819633      81 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps/kube-dns 404 Not Found in 1 milliseconds\nI0929 02:38:14.821846      81 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps/coredns 404 Not Found in 1 milliseconds\nI0929 02:38:14.824375      81 round_trippers.go:443] POST https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 2 milliseconds\nI0929 02:38:14.830727      81 round_trippers.go:443] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/clusterroles 201 Created in 5 milliseconds\nI0929 02:38:14.833272      81 round_trippers.go:443] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 2 milliseconds\nI0929 02:38:14.838329      81 round_trippers.go:443] POST https://172.17.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts 201 Created in 3 milliseconds\nI0929 02:38:14.861164      81 round_trippers.go:443] POST https://172.17.0.4:6443/apis/apps/v1/namespaces/kube-system/deployments 201 Created in 11 milliseconds\nI0929 02:38:14.868968      81 round_trippers.go:443] POST https://172.17.0.4:6443/api/v1/namespaces/kube-system/services 201 Created in 6 milliseconds\n[addons] Applied essential addon: CoreDNS\nI0929 02:38:14.982609      81 request.go:538] Throttling request took 113.272891ms, request: POST:https://172.17.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts\nI0929 02:38:14.985607      81 round_trippers.go:443] POST https://172.17.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts 201 Created in 2 milliseconds\nI0929 02:38:15.182687      81 request.go:538] Throttling request took 195.273148ms, request: POST:https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps\nI0929 02:38:15.186132      81 round_trippers.go:443] POST https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 3 milliseconds\nI0929 02:38:15.201305      81 round_trippers.go:443] POST https://172.17.0.4:6443/apis/apps/v1/namespaces/kube-system/daemonsets 201 Created in 10 milliseconds\nI0929 02:38:15.205407      81 round_trippers.go:443] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 2 milliseconds\nI0929 
02:38:15.208208      81 round_trippers.go:443] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 201 Created in 2 milliseconds\n[addons] Applied essential addon: kube-proxy\nI0929 02:38:15.210619      81 round_trippers.go:443] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 201 Created in 2 milliseconds\nI0929 02:38:15.211494      81 loader.go:375] Config loaded from file:  /etc/kubernetes/admin.conf\nI0929 02:38:15.212579      81 loader.go:375] Config loaded from file:  /etc/kubernetes/admin.conf\n\nYour Kubernetes control-plane has initialized successfully!\n\nTo start using your cluster, you need to run the following as a regular user:\n\n  mkdir -p $HOME/.kube\n  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\n  sudo chown $(id -u):$(id -g) $HOME/.kube/config\n\nYou should now deploy a pod network to the cluster.\nRun \"kubectl apply -f [podnetwork].yaml\" with one of the options listed at:\n  https://kubernetes.io/docs/concepts/cluster-administration/addons/\n\nYou can now join any number of control-plane nodes by copying certificate authorities \nand service account keys on each node and then running the following as root:\n\n  kubeadm join 172.17.0.4:6443 --token <value withheld> \\\n    --discovery-token-ca-cert-hash sha256:5333253def867970cf85516f61d1fa0b76f756250511c4431e2ef6d0c5c10861 \\\n    --control-plane \t  \n\nThen you can join any number of worker nodes by running the following on each as root:\n\nkubeadm join 172.17.0.4:6443 --token <value withheld> \\\n    --discovery-token-ca-cert-hash sha256:5333253def867970cf85516f61d1fa0b76f756250511c4431e2ef6d0c5c10861 "
I0929 02:38:15.297] time="02:38:15" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{(index (index .NetworkSettings.Ports \"6443/tcp\") 0).HostPort}} kind-control-plane]"
I0929 02:38:15.338] time="02:38:15" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-control-plane cat /etc/kubernetes/admin.conf]"
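The two commands above are how kind makes the cluster reachable from the host: it reads the host port Docker mapped to the API server's 6443/tcp and extracts admin.conf from the control-plane node. Done by hand, the equivalent would be roughly:

    # Host port Docker mapped to the API server (same template as the log line above)
    HOST_PORT=$(docker inspect -f \
      '{{(index (index .NetworkSettings.Ports "6443/tcp") 0).HostPort}}' kind-control-plane)
    # Pull the admin kubeconfig out of the node and point it at the forwarded port
    docker exec --privileged kind-control-plane cat /etc/kubernetes/admin.conf \
      | sed "s#https://[^ ]*:6443#https://127.0.0.1:${HOST_PORT}#" > kubeconfig
    kubectl --kubeconfig=kubeconfig get nodes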
I0929 02:38:15.553]  ✓ Starting control-plane 🕹️
I0929 02:38:15.553]  • Installing CNI 🔌  ...
I0929 02:38:15.554] time="02:38:15" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-control-plane cat /kind/manifests/default-cni.yaml]"
I0929 02:38:15.772] time="02:38:15" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -i kind-control-plane kubectl create --kubeconfig=/etc/kubernetes/admin.conf -f -]"
I0929 02:38:16.746]  ✓ Installing CNI 🔌
I0929 02:38:16.746]  • Installing StorageClass 💾  ...
I0929 02:38:16.747] time="02:38:16" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -i kind-control-plane kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f -]"
I0929 02:38:17.202]  ✓ Installing StorageClass 💾
I0929 02:38:17.202]  • Joining worker nodes 🚜  ...
I0929 02:38:17.202] time="02:38:17" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-worker2 kubeadm join --config /kind/kubeadm.conf --ignore-preflight-errors=all --v=6]"
I0929 02:38:17.203] time="02:38:17" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-worker kubeadm join --config /kind/kubeadm.conf --ignore-preflight-errors=all --v=6]"
I0929 02:38:51.300] time="02:38:51" level=debug msg="I0929 02:38:17.427371     421 join.go:368] [preflight] found NodeName empty; using OS hostname as NodeName\nI0929 02:38:17.427409     421 joinconfiguration.go:75] loading configuration from \"/kind/kubeadm.conf\"\nI0929 02:38:17.429131     421 preflight.go:90] [preflight] Running general checks\nI0929 02:38:17.429199     421 checks.go:250] validating the existence and emptiness of directory /etc/kubernetes/manifests\n[preflight] Running pre-flight checks\nI0929 02:38:17.429267     421 checks.go:287] validating the existence of file /etc/kubernetes/kubelet.conf\nI0929 02:38:17.429307     421 checks.go:287] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf\nI0929 02:38:17.429318     421 checks.go:103] validating the container runtime\nI0929 02:38:17.438910     421 checks.go:377] validating the presence of executable crictl\nI0929 02:38:17.439019     421 checks.go:336] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables\n\t[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist\nI0929 02:38:17.439118     421 checks.go:336] validating the contents of file /proc/sys/net/ipv4/ip_forward\nI0929 02:38:17.439171     421 checks.go:650] validating whether swap is enabled or not\nI0929 02:38:17.439238     421 checks.go:377] validating the presence of executable ip\nI0929 02:38:17.439356     421 checks.go:377] validating the presence of executable iptables\nI0929 02:38:17.439431     421 checks.go:377] validating the presence of executable mount\nI0929 02:38:17.439473     421 checks.go:377] validating the presence of executable nsenter\nI0929 02:38:17.439534     421 checks.go:377] validating the presence of executable ebtables\nI0929 02:38:17.439612     421 checks.go:377] validating the presence of executable ethtool\nI0929 02:38:17.439675     421 checks.go:377] validating the presence of executable socat\nI0929 02:38:17.439741     421 checks.go:377] validating the presence of executable tc\nI0929 02:38:17.439805     421 checks.go:377] validating the presence of executable touch\nI0929 02:38:17.439884     421 checks.go:521] running all checks\nI0929 02:38:17.452551     421 checks.go:407] checking whether the given node name is reachable using net.LookupHost\nI0929 02:38:17.452872     421 checks.go:619] validating kubelet version\nI0929 02:38:17.536141     421 checks.go:129] validating if the service is enabled and active\nI0929 02:38:17.551918     421 checks.go:202] validating availability of port 10250\nI0929 02:38:17.552131     421 checks.go:287] validating the existence of file /etc/kubernetes/pki/ca.crt\nI0929 02:38:17.552149     421 checks.go:433] validating if the connectivity type is via proxy or direct\nI0929 02:38:17.552183     421 join.go:438] [preflight] Discovering cluster-info\nI0929 02:38:17.552278     421 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.4:6443\"\nI0929 02:38:17.552759     421 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.4:6443\"\nI0929 02:38:17.560187     421 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 7 milliseconds\nI0929 02:38:17.560927     421 token.go:202] [discovery] Failed to connect to API Server \"172.17.0.4:6443\": token id \"abcdef\" is invalid for this cluster or it has expired. 
Use \"kubeadm token create\" on the control-plane node to create a new valid token\nI0929 02:38:22.561152     421 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.4:6443\"\nI0929 02:38:22.561981     421 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.4:6443\"\nI0929 02:38:22.564284     421 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 2 milliseconds\nI0929 02:38:22.564552     421 token.go:202] [discovery] Failed to connect to API Server \"172.17.0.4:6443\": token id \"abcdef\" is invalid for this cluster or it has expired. Use \"kubeadm token create\" on the control-plane node to create a new valid token\nI0929 02:38:27.564781     421 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.4:6443\"\nI0929 02:38:27.565452     421 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.4:6443\"\nI0929 02:38:27.567562     421 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 2 milliseconds\nI0929 02:38:27.567989     421 token.go:202] [discovery] Failed to connect to API Server \"172.17.0.4:6443\": token id \"abcdef\" is invalid for this cluster or it has expired. Use \"kubeadm token create\" on the control-plane node to create a new valid token\nI0929 02:38:32.568171     421 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.4:6443\"\nI0929 02:38:32.568957     421 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.4:6443\"\nI0929 02:38:32.570775     421 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 1 milliseconds\nI0929 02:38:32.571081     421 token.go:202] [discovery] Failed to connect to API Server \"172.17.0.4:6443\": token id \"abcdef\" is invalid for this cluster or it has expired. 
Use \"kubeadm token create\" on the control-plane node to create a new valid token\nI0929 02:38:37.571482     421 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.4:6443\"\nI0929 02:38:37.571970     421 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.4:6443\"\nI0929 02:38:37.574493     421 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 2 milliseconds\nI0929 02:38:37.575944     421 token.go:109] [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server \"172.17.0.4:6443\"\nI0929 02:38:37.575972     421 token.go:205] [discovery] Successfully established connection with API Server \"172.17.0.4:6443\"\nI0929 02:38:37.575998     421 discovery.go:51] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process\nI0929 02:38:37.576183     421 join.go:452] [preflight] Fetching init configuration\nI0929 02:38:37.576202     421 join.go:490] [preflight] Retrieving KubeConfig objects\n[preflight] Reading configuration from the cluster...\n[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'\nI0929 02:38:37.583422     421 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 200 OK in 6 milliseconds\nI0929 02:38:37.587773     421 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps/kube-proxy 200 OK in 3 milliseconds\nI0929 02:38:37.590733     421 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.17 200 OK in 1 milliseconds\nI0929 02:38:37.592522     421 interface.go:384] Looking for default routes with IPv4 addresses\nI0929 02:38:37.592588     421 interface.go:389] Default route transits interface \"eth0\"\nI0929 02:38:37.592725     421 interface.go:196] Interface eth0 is up\nI0929 02:38:37.592898     421 interface.go:244] Interface \"eth0\" has 1 addresses :[172.17.0.2/16].\nI0929 02:38:37.592950     421 interface.go:211] Checking addr  172.17.0.2/16.\nI0929 02:38:37.592961     421 interface.go:218] IP found 172.17.0.2\nI0929 02:38:37.592971     421 interface.go:250] Found valid IPv4 address 172.17.0.2 for interface \"eth0\".\nI0929 02:38:37.592979     421 interface.go:395] Found active IP 172.17.0.2 \nI0929 02:38:37.593082     421 preflight.go:101] [preflight] Running configuration dependant checks\nI0929 02:38:37.593131     421 controlplaneprepare.go:211] [download-certs] Skipping certs download\nI0929 02:38:37.593151     421 kubelet.go:107] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf\nI0929 02:38:37.596817     421 kubelet.go:115] [kubelet-start] writing CA certificate at /etc/kubernetes/pki/ca.crt\nI0929 02:38:37.597438     421 loader.go:375] Config loaded from file:  /etc/kubernetes/bootstrap-kubelet.conf\nI0929 02:38:37.598028     421 kubelet.go:133] [kubelet-start] Stopping the kubelet\n[kubelet-start] Downloading configuration for the kubelet from the \"kubelet-config-1.17\" ConfigMap in the kube-system namespace\nI0929 02:38:37.612821     421 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.17 200 OK in 2 milliseconds\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Writing kubelet environment 
file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\nI0929 02:38:37.622521     421 kubelet.go:150] [kubelet-start] Starting the kubelet\n[kubelet-start] Activating the kubelet service\n[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...\nI0929 02:38:38.701834     421 loader.go:375] Config loaded from file:  /etc/kubernetes/kubelet.conf\nI0929 02:38:38.718661     421 loader.go:375] Config loaded from file:  /etc/kubernetes/kubelet.conf\nI0929 02:38:38.720533     421 kubelet.go:168] [kubelet-start] preserving the crisocket information for the node\nI0929 02:38:38.720567     421 patchnode.go:30] [patchnode] Uploading the CRI Socket information \"/run/containerd/containerd.sock\" to the Node API object \"kind-worker2\" as an annotation\nI0929 02:38:39.228549     421 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 7 milliseconds\nI0929 02:38:39.723105     421 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0929 02:38:40.223065     421 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0929 02:38:40.723132     421 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0929 02:38:41.223105     421 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0929 02:38:41.723182     421 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0929 02:38:42.223286     421 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0929 02:38:42.722772     421 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 1 milliseconds\nI0929 02:38:43.223021     421 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0929 02:38:43.722830     421 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 1 milliseconds\nI0929 02:38:44.223042     421 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0929 02:38:44.723032     421 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0929 02:38:45.223262     421 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0929 02:38:45.723289     421 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0929 02:38:46.224000     421 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds\nI0929 02:38:46.723143     421 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0929 02:38:47.223398     421 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0929 02:38:47.723678     421 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0929 02:38:48.223162     421 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0929 02:38:48.722980     421 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found 
in 2 milliseconds\nI0929 02:38:49.223273     421 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0929 02:38:49.723620     421 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0929 02:38:50.223647     421 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0929 02:38:50.723469     421 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0929 02:38:51.223415     421 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker2 200 OK in 2 milliseconds\nI0929 02:38:51.233164     421 round_trippers.go:443] PATCH https://172.17.0.4:6443/api/v1/nodes/kind-worker2 200 OK in 4 milliseconds\n\nThis node has joined the cluster:\n* Certificate signing request was sent to apiserver and a response was received.\n* The Kubelet was informed of the new secure connection details.\n\nRun 'kubectl get nodes' on the control-plane to see this node join the cluster.\n"
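The repeated `token id "abcdef" is invalid` messages in the join log above are benign: the workers begin discovery before the control plane has signed the cluster-info ConfigMap with the well-known bootstrap token, and kubeadm simply retries every five seconds (02:38:17, :22, :27, :32, :37 above) until the signature validates. If the retries never succeeded, one could check that the token actually exists on the control-plane node:

    # List bootstrap tokens known to the control plane
    # (should show abcdef.* once kubeadm init has completed)
    docker exec --privileged kind-control-plane kubeadm token list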
I0929 02:38:51.788] time="02:38:51" level=debug msg="I0929 02:38:17.419588     427 join.go:368] [preflight] found NodeName empty; using OS hostname as NodeName\nI0929 02:38:17.419626     427 joinconfiguration.go:75] loading configuration from \"/kind/kubeadm.conf\"\n[preflight] Running pre-flight checks\nI0929 02:38:17.421334     427 preflight.go:90] [preflight] Running general checks\nI0929 02:38:17.421398     427 checks.go:250] validating the existence and emptiness of directory /etc/kubernetes/manifests\nI0929 02:38:17.421412     427 checks.go:287] validating the existence of file /etc/kubernetes/kubelet.conf\nI0929 02:38:17.421418     427 checks.go:287] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf\nI0929 02:38:17.421424     427 checks.go:103] validating the container runtime\nI0929 02:38:17.430574     427 checks.go:377] validating the presence of executable crictl\nI0929 02:38:17.430639     427 checks.go:336] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables\n\t[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist\nI0929 02:38:17.430713     427 checks.go:336] validating the contents of file /proc/sys/net/ipv4/ip_forward\nI0929 02:38:17.430766     427 checks.go:650] validating whether swap is enabled or not\nI0929 02:38:17.430831     427 checks.go:377] validating the presence of executable ip\nI0929 02:38:17.430888     427 checks.go:377] validating the presence of executable iptables\nI0929 02:38:17.430933     427 checks.go:377] validating the presence of executable mount\nI0929 02:38:17.430955     427 checks.go:377] validating the presence of executable nsenter\nI0929 02:38:17.430991     427 checks.go:377] validating the presence of executable ebtables\nI0929 02:38:17.431069     427 checks.go:377] validating the presence of executable ethtool\nI0929 02:38:17.431102     427 checks.go:377] validating the presence of executable socat\nI0929 02:38:17.431141     427 checks.go:377] validating the presence of executable tc\nI0929 02:38:17.431169     427 checks.go:377] validating the presence of executable touch\nI0929 02:38:17.431213     427 checks.go:521] running all checks\nI0929 02:38:17.440820     427 checks.go:407] checking whether the given node name is reachable using net.LookupHost\nI0929 02:38:17.441194     427 checks.go:619] validating kubelet version\nI0929 02:38:17.520430     427 checks.go:129] validating if the service is enabled and active\nI0929 02:38:17.534125     427 checks.go:202] validating availability of port 10250\nI0929 02:38:17.535053     427 checks.go:287] validating the existence of file /etc/kubernetes/pki/ca.crt\nI0929 02:38:17.535150     427 checks.go:433] validating if the connectivity type is via proxy or direct\nI0929 02:38:17.535350     427 join.go:438] [preflight] Discovering cluster-info\nI0929 02:38:17.535636     427 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.4:6443\"\nI0929 02:38:17.536602     427 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.4:6443\"\nI0929 02:38:17.547336     427 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 10 milliseconds\nI0929 02:38:17.548714     427 token.go:202] [discovery] Failed to connect to API Server \"172.17.0.4:6443\": token id \"abcdef\" is invalid for this cluster or it has expired. Use \"kubeadm token create\" on the control-plane node to create a new valid token\nI0929 02:38:22.548979     427 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.4:6443\"\nI0929 02:38:22.549924     427 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.4:6443\"\nI0929 02:38:22.552644     427 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 2 milliseconds\nI0929 02:38:22.552975     427 token.go:202] [discovery] Failed to connect to API Server \"172.17.0.4:6443\": token id \"abcdef\" is invalid for this cluster or it has expired. Use \"kubeadm token create\" on the control-plane node to create a new valid token\nI0929 02:38:27.553191     427 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.4:6443\"\nI0929 02:38:27.553897     427 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.4:6443\"\nI0929 02:38:27.555982     427 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 1 milliseconds\nI0929 02:38:27.556164     427 token.go:202] [discovery] Failed to connect to API Server \"172.17.0.4:6443\": token id \"abcdef\" is invalid for this cluster or it has expired. Use \"kubeadm token create\" on the control-plane node to create a new valid token\nI0929 02:38:32.556349     427 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.4:6443\"\nI0929 02:38:32.557636     427 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.4:6443\"\nI0929 02:38:32.562428     427 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 4 milliseconds\nI0929 02:38:32.564687     427 token.go:202] [discovery] Failed to connect to API Server \"172.17.0.4:6443\": token id \"abcdef\" is invalid for this cluster or it has expired. Use \"kubeadm token create\" on the control-plane node to create a new valid token\nI0929 02:38:37.565141     427 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.4:6443\"\nI0929 02:38:37.565714     427 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.4:6443\"\nI0929 02:38:37.568004     427 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 2 milliseconds\nI0929 02:38:37.569455     427 token.go:109] [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server \"172.17.0.4:6443\"\nI0929 02:38:37.569485     427 token.go:205] [discovery] Successfully established connection with API Server \"172.17.0.4:6443\"\nI0929 02:38:37.569548     427 discovery.go:51] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process\nI0929 02:38:37.569562     427 join.go:452] [preflight] Fetching init configuration\nI0929 02:38:37.569567     427 join.go:490] [preflight] Retrieving KubeConfig objects\n[preflight] Reading configuration from the cluster...\n[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'\nI0929 02:38:37.578963     427 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 200 OK in 8 milliseconds\nI0929 02:38:37.582514     427 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps/kube-proxy 200 OK in 2 milliseconds\nI0929 02:38:37.585406     427 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.17 200 OK in 1 milliseconds\nI0929 02:38:37.586794     427 interface.go:384] Looking for default routes with IPv4 addresses\nI0929 02:38:37.586810     427 interface.go:389] Default route transits interface \"eth0\"\nI0929 02:38:37.587056     427 interface.go:196] Interface eth0 is up\nI0929 02:38:37.587121     427 interface.go:244] Interface \"eth0\" has 1 addresses :[172.17.0.3/16].\nI0929 02:38:37.587143     427 interface.go:211] Checking addr  172.17.0.3/16.\nI0929 02:38:37.587152     427 interface.go:218] IP found 172.17.0.3\nI0929 02:38:37.587162     427 interface.go:250] Found valid IPv4 address 172.17.0.3 for interface \"eth0\".\nI0929 02:38:37.587170     427 interface.go:395] Found active IP 172.17.0.3 \nI0929 02:38:37.587271     427 preflight.go:101] [preflight] Running configuration dependant checks\nI0929 02:38:37.587284     427 controlplaneprepare.go:211] [download-certs] Skipping certs download\nI0929 02:38:37.587300     427 kubelet.go:107] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf\nI0929 02:38:37.588704     427 kubelet.go:115] [kubelet-start] writing CA certificate at /etc/kubernetes/pki/ca.crt\nI0929 02:38:37.589398     427 loader.go:375] Config loaded from file:  /etc/kubernetes/bootstrap-kubelet.conf\nI0929 02:38:37.591061     427 kubelet.go:133] [kubelet-start] Stopping the kubelet\n[kubelet-start] Downloading configuration for the kubelet from the \"kubelet-config-1.17\" ConfigMap in the kube-system namespace\nI0929 02:38:37.605013     427 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.17 200 OK in 2 milliseconds\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\nI0929 02:38:37.614744     427 kubelet.go:150] [kubelet-start] Starting the kubelet\n[kubelet-start] Activating the kubelet service\n[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...\nI0929 02:38:39.196440     427 loader.go:375] Config loaded from file:  /etc/kubernetes/kubelet.conf\nI0929 02:38:39.208273     427 loader.go:375] Config loaded from file:  /etc/kubernetes/kubelet.conf\nI0929 02:38:39.211064     427 kubelet.go:168] [kubelet-start] preserving the crisocket information for the node\nI0929 02:38:39.211093     427 patchnode.go:30] [patchnode] Uploading the CRI Socket information \"/run/containerd/containerd.sock\" to the Node API object \"kind-worker\" as an annotation\nI0929 02:38:39.718803     427 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 7 milliseconds\nI0929 02:38:40.214164     427 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0929 02:38:40.714114     427 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0929 02:38:41.214407     427 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0929 02:38:41.714042     427 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0929 02:38:42.214852     427 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds\nI0929 02:38:42.714533     427 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds\nI0929 02:38:43.214321     427 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0929 02:38:43.713719     427 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0929 02:38:44.214019     427 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0929 02:38:44.714073     427 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0929 02:38:45.213765     427 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0929 02:38:45.714794     427 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds\nI0929 02:38:46.214435     427 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0929 02:38:46.719313     427 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 7 milliseconds\nI0929 02:38:47.215159     427 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds\nI0929 02:38:47.714097     427 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0929 02:38:48.214376     427 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0929 02:38:48.714048     427 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0929 02:38:49.214052     427 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0929 02:38:49.714039     427 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0929 02:38:50.214013     427 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0929 02:38:50.714013     427 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0929 02:38:51.214181     427 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0929 02:38:51.715652     427 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/nodes/kind-worker 200 OK in 4 milliseconds\nI0929 02:38:51.722614     427 round_trippers.go:443] PATCH https://172.17.0.4:6443/api/v1/nodes/kind-worker 200 OK in 4 milliseconds\n\nThis node has joined the cluster:\n* Certificate signing request was sent to apiserver and a response was received.\n* The Kubelet was informed of the new secure connection details.\n\nRun 'kubectl get nodes' on the control-plane to see this node join the cluster.\n"
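Both worker joins above end the same way once TLS bootstrap succeeds: kubeadm polls GET /api/v1/nodes/<name> roughly every 500ms until the kubelet has registered the Node (the 404 Not Found responses turn into 200 OK), then PATCHes the object to record the CRI socket as an annotation. A minimal Python sketch of that step using the official kubernetes client; the node name and socket path are taken from the log, and the annotation key is an assumption (it is not printed verbatim above):

    import time
    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    config.load_kube_config()        # e.g. the cluster's admin.conf
    v1 = client.CoreV1Api()

    NODE = "kind-worker"
    CRI_SOCKET = "/run/containerd/containerd.sock"

    deadline = time.time() + 60
    while True:
        try:
            v1.read_node(NODE)       # the GET that 404s until the kubelet registers
            break
        except ApiException as e:
            if e.status != 404 or time.time() > deadline:
                raise
            time.sleep(0.5)          # matches the ~500ms retry cadence in the log

    # The PATCH that returns 200 OK above; annotation key assumed to be
    # the one kubeadm uses for the crisocket information.
    v1.patch_node(NODE, {"metadata": {"annotations": {
        "kubeadm.alpha.kubernetes.io/cri-socket": CRI_SOCKET,
    }}})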
I0929 02:38:51.789]  ✓ Joining worker nodes 🚜
I0929 02:38:51.789]  • Waiting ≤ 1m0s for control-plane = Ready ⏳  ...
I0929 02:38:51.790] time="02:38:51" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-control-plane kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes --selector=node-role.kubernetes.io/master -o=jsonpath='{.items..status.conditions[-1:].status}']"
I0929 02:38:52.121]  ✓ Waiting ≤ 1m0s for control-plane = Ready ⏳
I0929 02:38:52.122]  • Ready after 0s 💚
I0929 02:38:52.122] Cluster creation complete. You can now use the cluster with:
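The readiness gate at 02:38:51 is a single kubectl call: a label selector for control-plane nodes plus a JSONPath over each node's last status condition. A rough Python equivalent of that wait loop; it shells out to kubectl directly instead of via docker exec, and leaves the JSONPath unquoted since no shell is involved:

    import subprocess
    import time

    CMD = [
        "kubectl", "--kubeconfig=/etc/kubernetes/admin.conf", "get", "nodes",
        "--selector=node-role.kubernetes.io/master",
        "-o=jsonpath={.items..status.conditions[-1:].status}",
    ]

    deadline = time.time() + 60      # the "1m0s" budget shown above
    while time.time() < deadline:
        out = subprocess.check_output(CMD).strip()
        # One status per selected node; all must report "True".
        if out and b"False" not in out and b"Unknown" not in out:
            break
        time.sleep(1)
    else:
        raise RuntimeError("control-plane not Ready within 1m0s")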
... skipping 928 lines ...
I0929 03:48:12.095] [03:48:12] Pod status is: Running
I0929 03:48:17.181] [03:48:17] Pod status is: Running
I0929 03:48:22.265] [03:48:22] Pod status is: Running
I0929 03:48:27.352] [03:48:27] Pod status is: Running
I0929 03:48:32.435] [03:48:32] Pod status is: Running
I0929 03:48:37.519] [03:48:37] Pod status is: Running
W0929 03:48:42.601] Error from server (NotFound): pods "e2e-conformance-test" not found
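The status lines above come from a loop that polls the conformance pod's phase every five seconds; when the pod is finally deleted, kubectl exits non-zero with the NotFound error, which the harness takes as the end of the run rather than a failure. A hedged sketch of such a loop (the pod name is from the log; the actual harness script is not reproduced here):

    import subprocess
    import time

    POD = "e2e-conformance-test"

    while True:
        try:
            phase = subprocess.check_output(
                ["kubectl", "get", "pod", POD,
                 "-o", "jsonpath={.status.phase}"])
        except subprocess.CalledProcessError:
            # kubectl prints 'Error from server (NotFound): pods "..." not
            # found' and exits non-zero once the pod is gone; treat that as
            # completion, as the log above does.
            break
        print("[%s] Pod status is: %s"
              % (time.strftime("%H:%M:%S"), phase.strip().decode()))
        time.sleep(5)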
W0929 03:48:42.605] + cleanup
W0929 03:48:42.605] + kind export logs /workspace/_artifacts/logs
I0929 03:48:44.807] Exported logs to: /workspace/_artifacts/logs
W0929 03:48:44.908] + [[ true = true ]]
W0929 03:48:44.908] + kind delete cluster
I0929 03:48:45.008] Deleting cluster "kind" ...
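The "+ cleanup" trace above is the teardown path firing after the run: it exports the kind logs into the artifacts directory first, so they survive even a failed run like this one, and only then deletes the cluster, guarded by the "[[ true = true ]]" check. The same shape in Python, with a hypothetical run_conformance() standing in for the e2e run:

    import subprocess

    def run_conformance():
        # Hypothetical stand-in for the actual e2e run
        # (kind-conformance-image-e2e.sh in this job).
        pass

    def cleanup(delete_cluster=True):
        # Export logs before deleting the cluster so artifacts survive.
        subprocess.call(["kind", "export", "logs",
                         "/workspace/_artifacts/logs"])
        if delete_cluster:           # the "[[ true = true ]]" guard
            subprocess.call(["kind", "delete", "cluster"])

    try:
        run_conformance()
    finally:
        cleanup()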
... skipping 8 lines ...
W0929 03:48:51.880]     check(*cmd)
W0929 03:48:51.880]   File "/workspace/./test-infra/jenkins/../scenarios/execute.py", line 30, in check
W0929 03:48:51.880]     subprocess.check_call(cmd)
W0929 03:48:51.880]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0929 03:48:51.880]     raise CalledProcessError(retcode, cmd)
W0929 03:48:51.881] subprocess.CalledProcessError: Command '('bash', '-c', 'cd ./../../k8s.io/kubernetes && source ./../test-infra/experiment/kind-conformance-image-e2e.sh')' returned non-zero exit status 1
E0929 03:48:51.885] Command failed
I0929 03:48:51.886] process 691 exited with code 1 after 75.4m
E0929 03:48:51.886] FAIL: pull-kubernetes-conformance-image-test
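The traceback shows how the failure propagates: the scenario runner's check() helper is a thin wrapper around subprocess.check_call, so the non-zero exit from the bash command surfaces as CalledProcessError and fails the job. A reconstruction of that helper from the traceback (scenarios/execute.py line 30); everything beyond the check_call itself is illustrative:

    import subprocess
    import sys

    def check(*cmd):
        # Log the command, then run it; check_call raises
        # CalledProcessError on any non-zero exit status.
        sys.stderr.write("Run: %r\n" % (cmd,))
        subprocess.check_call(cmd)

    # The invocation that failed above, as recorded in the exception:
    check("bash", "-c",
          "cd ./../../k8s.io/kubernetes && "
          "source ./../test-infra/experiment/kind-conformance-image-e2e.sh")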
I0929 03:48:51.886] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0929 03:48:52.380] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0929 03:48:52.429] process 180626 exited with code 0 after 0.0m
I0929 03:48:52.430] Call:  gcloud config get-value account
I0929 03:48:52.708] process 180638 exited with code 0 after 0.0m
I0929 03:48:52.709] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0929 03:48:52.709] Upload result and artifacts...
I0929 03:48:52.709] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/76443/pull-kubernetes-conformance-image-test/1178135306826682368
I0929 03:48:52.709] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/76443/pull-kubernetes-conformance-image-test/1178135306826682368/artifacts
W0929 03:48:53.779] CommandException: One or more URLs matched no objects.
E0929 03:48:53.895] Command failed
I0929 03:48:53.896] process 180650 exited with code 1 after 0.0m
W0929 03:48:53.896] Remote dir gs://kubernetes-jenkins/pr-logs/pull/76443/pull-kubernetes-conformance-image-test/1178135306826682368/artifacts not exist yet
I0929 03:48:53.896] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/76443/pull-kubernetes-conformance-image-test/1178135306826682368/artifacts
I0929 03:48:56.105] process 180792 exited with code 0 after 0.0m
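The upload step first probes the remote artifacts directory with gsutil ls (the CommandException and exit code 1 above are the expected "does not exist yet" signal on a first upload), then pushes the local artifacts tree with a parallel, quiet, recursive copy that gzip-compresses log/txt/xml files. A Python sketch of that sequence; the command lines are taken from the log, the wrapper around them is illustrative:

    import subprocess

    GCS_DIR = ("gs://kubernetes-jenkins/pr-logs/pull/76443/"
               "pull-kubernetes-conformance-image-test/"
               "1178135306826682368/artifacts")

    # Probe: non-zero exit ("One or more URLs matched no objects.") just
    # means the remote dir has not been created yet.
    if subprocess.call(["gsutil", "ls", GCS_DIR]) != 0:
        print("Remote dir %s not exist yet" % GCS_DIR)

    # Parallel (-m), quiet (-q), recursive copy; -c continues past
    # per-file errors, -z gzip-encodes files with these extensions.
    subprocess.check_call([
        "gsutil", "-m", "-q", "-o", "GSUtil:use_magicfile=True",
        "cp", "-r", "-c", "-z", "log,txt,xml",
        "/workspace/_artifacts", GCS_DIR,
    ])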
W0929 03:48:56.106] metadata path /workspace/_artifacts/metadata.json does not exist
W0929 03:48:56.106] metadata not found or invalid, init with empty metadata
... skipping 23 lines ...