PR: mgdevstack: Promote e2e "verifying service's sessionAffinity for ClusterIP and NodePort services" to Conformance
Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2019-08-23 01:20
Elapsed: 1h30m
Revision:
Builder: gke-prow-ssd-pool-1a225945-v0k1
pod: 140eb6f9-c544-11e9-9342-e69cf4ca5bc2
infra-commit: c62e95a9f
repo: k8s.io/test-infra
repo-commit: c62e95a9f3be1dd17c6013765be0ebcb6e8ac7e9
repos: {u'k8s.io/kubernetes': u'master:c369cf187ea765c0a2387f2b39abe6ed18c8e6a8,76443:fc84ff19464f8fb45653d491acb2e10db0dbacf9', u'k8s.io/test-infra': u'master'}

No Test Failures!


Error lines from build-log.txt

... skipping 664 lines ...
I0823 01:27:29.227] time="01:27:29" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-control-plane cat /kind/version]"
I0823 01:27:29.972] time="01:27:29" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kind-control-plane]"
I0823 01:27:30.073] time="01:27:30" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kind-worker2]"
I0823 01:27:30.074] time="01:27:30" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kind-control-plane]"
I0823 01:27:30.074] time="01:27:30" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kind-worker]"
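kind resolves each node container's address with a docker inspect Go template, as the three invocations above show. The same lookup can be run by hand; a minimal sketch, using a node name taken from this log:

# Prints "<IPv4>,<IPv6>" for each docker network the node is attached to.
docker inspect -f \
  '{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}' \
  kind-control-plane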
I0823 01:27:30.157] time="01:27:30" level=debug msg="Configuration Input data: {kind v1.17.0-alpha.0.492+6a67aecf55063c 172.17.0.2:6443 6443 127.0.0.1 false 172.17.0.3 abcdef.0123456789abcdef 10.244.0.0/16 10.96.0.0/12 false {}}"
I0823 01:27:30.163] time="01:27:30" level=debug msg="Configuration generated:\n # config generated by kind\napiVersion: kubeadm.k8s.io/v1beta2\nkind: ClusterConfiguration\nmetadata:\n  name: config\nkubernetesVersion: v1.17.0-alpha.0.492+6a67aecf55063c\nclusterName: \"kind\"\ncontrolPlaneEndpoint: \"172.17.0.2:6443\"\n# on docker for mac we have to expose the api server via port forward,\n# so we need to ensure the cert is valid for localhost so we can talk\n# to the cluster after rewriting the kubeconfig to point to localhost\napiServer:\n  certSANs: [localhost, \"127.0.0.1\"]\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\n    # configure ipv6 default addresses for IPv6 clusters\n    \nscheduler:\n  extraArgs:\n    # configure ipv6 default addresses for IPv6 clusters\n    \nnetworking:\n  podSubnet: \"10.244.0.0/16\"\n  serviceSubnet: \"10.96.0.0/12\"\n---\napiVersion: kubeadm.k8s.io/v1beta2\nkind: InitConfiguration\nmetadata:\n  name: config\n# we use a well know token for TLS bootstrap\nbootstrapTokens:\n- token: \"abcdef.0123456789abcdef\"\n# we use a well know port for making the API server discoverable inside docker network. \n# from the host machine such port will be accessible via a random local port instead.\nlocalAPIEndpoint:\n  advertiseAddress: \"172.17.0.3\"\n  bindPort: 6443\nnodeRegistration:\n  criSocket: \"/run/containerd/containerd.sock\"\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: \"172.17.0.3\"\n---\n# no-op entry that exists solely so it can be patched\napiVersion: kubeadm.k8s.io/v1beta2\nkind: JoinConfiguration\nmetadata:\n  name: config\n\nnodeRegistration:\n  criSocket: \"/run/containerd/containerd.sock\"\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: \"172.17.0.3\"\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: \"172.17.0.2:6443\"\n    token: \"abcdef.0123456789abcdef\"\n    unsafeSkipCAVerification: true\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nmetadata:\n  name: config\n# configure ipv6 addresses in IPv6 mode\n\n# disable disk resource management by default\n# kubelet will see the host disk that the inner container runtime\n# is ultimately backed by and attempt to recover disk space. we don't want that.\nimageGCHighThresholdPercent: 100\nevictionHard:\n  nodefs.available: \"0%\"\n  nodefs.inodesFree: \"0%\"\n  imagefs.available: \"0%\"\n---\n# no-op entry that exists solely so it can be patched\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\nmetadata:\n  name: config\n"
I0823 01:27:30.168] time="01:27:30" level=debug msg="Configuration Input data: {kind v1.17.0-alpha.0.492+6a67aecf55063c 172.17.0.2:6443 6443 127.0.0.1 false 172.17.0.4 abcdef.0123456789abcdef 10.244.0.0/16 10.96.0.0/12 false {}}"
I0823 01:27:30.174] time="01:27:30" level=debug msg="Configuration generated:\n # config generated by kind\napiVersion: kubeadm.k8s.io/v1beta2\nkind: ClusterConfiguration\nmetadata:\n  name: config\nkubernetesVersion: v1.17.0-alpha.0.492+6a67aecf55063c\nclusterName: \"kind\"\ncontrolPlaneEndpoint: \"172.17.0.2:6443\"\n# on docker for mac we have to expose the api server via port forward,\n# so we need to ensure the cert is valid for localhost so we can talk\n# to the cluster after rewriting the kubeconfig to point to localhost\napiServer:\n  certSANs: [localhost, \"127.0.0.1\"]\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\n    # configure ipv6 default addresses for IPv6 clusters\n    \nscheduler:\n  extraArgs:\n    # configure ipv6 default addresses for IPv6 clusters\n    \nnetworking:\n  podSubnet: \"10.244.0.0/16\"\n  serviceSubnet: \"10.96.0.0/12\"\n---\napiVersion: kubeadm.k8s.io/v1beta2\nkind: InitConfiguration\nmetadata:\n  name: config\n# we use a well know token for TLS bootstrap\nbootstrapTokens:\n- token: \"abcdef.0123456789abcdef\"\n# we use a well know port for making the API server discoverable inside docker network. \n# from the host machine such port will be accessible via a random local port instead.\nlocalAPIEndpoint:\n  advertiseAddress: \"172.17.0.4\"\n  bindPort: 6443\nnodeRegistration:\n  criSocket: \"/run/containerd/containerd.sock\"\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: \"172.17.0.4\"\n---\n# no-op entry that exists solely so it can be patched\napiVersion: kubeadm.k8s.io/v1beta2\nkind: JoinConfiguration\nmetadata:\n  name: config\n\nnodeRegistration:\n  criSocket: \"/run/containerd/containerd.sock\"\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: \"172.17.0.4\"\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: \"172.17.0.2:6443\"\n    token: \"abcdef.0123456789abcdef\"\n    unsafeSkipCAVerification: true\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nmetadata:\n  name: config\n# configure ipv6 addresses in IPv6 mode\n\n# disable disk resource management by default\n# kubelet will see the host disk that the inner container runtime\n# is ultimately backed by and attempt to recover disk space. we don't want that.\nimageGCHighThresholdPercent: 100\nevictionHard:\n  nodefs.available: \"0%\"\n  nodefs.inodesFree: \"0%\"\n  imagefs.available: \"0%\"\n---\n# no-op entry that exists solely so it can be patched\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\nmetadata:\n  name: config\n"
I0823 01:27:30.176] time="01:27:30" level=debug msg="Configuration Input data: {kind v1.17.0-alpha.0.492+6a67aecf55063c 172.17.0.2:6443 6443 127.0.0.1 true 172.17.0.2 abcdef.0123456789abcdef 10.244.0.0/16 10.96.0.0/12 false {}}"
I0823 01:27:30.179] time="01:27:30" level=debug msg="Configuration generated:\n # config generated by kind\napiVersion: kubeadm.k8s.io/v1beta2\nkind: ClusterConfiguration\nmetadata:\n  name: config\nkubernetesVersion: v1.17.0-alpha.0.492+6a67aecf55063c\nclusterName: \"kind\"\ncontrolPlaneEndpoint: \"172.17.0.2:6443\"\n# on docker for mac we have to expose the api server via port forward,\n# so we need to ensure the cert is valid for localhost so we can talk\n# to the cluster after rewriting the kubeconfig to point to localhost\napiServer:\n  certSANs: [localhost, \"127.0.0.1\"]\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\n    # configure ipv6 default addresses for IPv6 clusters\n    \nscheduler:\n  extraArgs:\n    # configure ipv6 default addresses for IPv6 clusters\n    \nnetworking:\n  podSubnet: \"10.244.0.0/16\"\n  serviceSubnet: \"10.96.0.0/12\"\n---\napiVersion: kubeadm.k8s.io/v1beta2\nkind: InitConfiguration\nmetadata:\n  name: config\n# we use a well know token for TLS bootstrap\nbootstrapTokens:\n- token: \"abcdef.0123456789abcdef\"\n# we use a well know port for making the API server discoverable inside docker network. \n# from the host machine such port will be accessible via a random local port instead.\nlocalAPIEndpoint:\n  advertiseAddress: \"172.17.0.2\"\n  bindPort: 6443\nnodeRegistration:\n  criSocket: \"/run/containerd/containerd.sock\"\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: \"172.17.0.2\"\n---\n# no-op entry that exists solely so it can be patched\napiVersion: kubeadm.k8s.io/v1beta2\nkind: JoinConfiguration\nmetadata:\n  name: config\ncontrolPlane:\n  localAPIEndpoint:\n    advertiseAddress: \"172.17.0.2\"\n    bindPort: 6443\nnodeRegistration:\n  criSocket: \"/run/containerd/containerd.sock\"\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: \"172.17.0.2\"\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: \"172.17.0.2:6443\"\n    token: \"abcdef.0123456789abcdef\"\n    unsafeSkipCAVerification: true\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nmetadata:\n  name: config\n# configure ipv6 addresses in IPv6 mode\n\n# disable disk resource management by default\n# kubelet will see the host disk that the inner container runtime\n# is ultimately backed by and attempt to recover disk space. we don't want that.\nimageGCHighThresholdPercent: 100\nevictionHard:\n  nodefs.available: \"0%\"\n  nodefs.inodesFree: \"0%\"\n  imagefs.available: \"0%\"\n---\n# no-op entry that exists solely so it can be patched\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\nmetadata:\n  name: config\n"
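The three generated configs above (one whose JoinConfiguration carries a controlPlane stanza, two plain worker joins) correspond to a one-control-plane, two-worker cluster. The kind config this job actually used is not shown in the log; a hypothetical minimal config requesting the same topology might look like the following sketch (the file name is assumed, and the v1alpha3 apiVersion matches kind releases from around this job's date):

# kind-config.yaml (assumed name): three nodes, matching the log above.
cat > kind-config.yaml <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha3
nodes:
- role: control-plane
- role: worker
- role: worker
EOF
kind create cluster --config kind-config.yaml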
I0823 01:27:30.191] time="01:27:30" level=debug msg="Using kubeadm config:\napiServer:\n  certSANs:\n  - localhost\n  - 127.0.0.1\napiVersion: kubeadm.k8s.io/v1beta2\nclusterName: kind\ncontrolPlaneEndpoint: 172.17.0.2:6443\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\nkind: ClusterConfiguration\nkubernetesVersion: v1.17.0-alpha.0.492+6a67aecf55063c\nnetworking:\n  podSubnet: 10.244.0.0/16\n  serviceSubnet: 10.96.0.0/12\nscheduler:\n  extraArgs: null\n---\napiVersion: kubeadm.k8s.io/v1beta2\nbootstrapTokens:\n- token: abcdef.0123456789abcdef\nkind: InitConfiguration\nlocalAPIEndpoint:\n  advertiseAddress: 172.17.0.4\n  bindPort: 6443\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.4\n---\napiVersion: kubeadm.k8s.io/v1beta2\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.2:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.4\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nevictionHard:\n  imagefs.available: 0%\n  nodefs.available: 0%\n  nodefs.inodesFree: 0%\nimageGCHighThresholdPercent: 100\nkind: KubeletConfiguration\n---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
I0823 01:27:30.192] time="01:27:30" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-worker mkdir -p /kind]"
I0823 01:27:30.199] time="01:27:30" level=debug msg="Using kubeadm config:\napiServer:\n  certSANs:\n  - localhost\n  - 127.0.0.1\napiVersion: kubeadm.k8s.io/v1beta2\nclusterName: kind\ncontrolPlaneEndpoint: 172.17.0.2:6443\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\nkind: ClusterConfiguration\nkubernetesVersion: v1.17.0-alpha.0.492+6a67aecf55063c\nnetworking:\n  podSubnet: 10.244.0.0/16\n  serviceSubnet: 10.96.0.0/12\nscheduler:\n  extraArgs: null\n---\napiVersion: kubeadm.k8s.io/v1beta2\nbootstrapTokens:\n- token: abcdef.0123456789abcdef\nkind: InitConfiguration\nlocalAPIEndpoint:\n  advertiseAddress: 172.17.0.3\n  bindPort: 6443\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.3\n---\napiVersion: kubeadm.k8s.io/v1beta2\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.2:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.3\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nevictionHard:\n  imagefs.available: 0%\n  nodefs.available: 0%\n  nodefs.inodesFree: 0%\nimageGCHighThresholdPercent: 100\nkind: KubeletConfiguration\n---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
I0823 01:27:30.200] time="01:27:30" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-worker2 mkdir -p /kind]"
I0823 01:27:30.208] time="01:27:30" level=debug msg="Using kubeadm config:\napiServer:\n  certSANs:\n  - localhost\n  - 127.0.0.1\napiVersion: kubeadm.k8s.io/v1beta2\nclusterName: kind\ncontrolPlaneEndpoint: 172.17.0.2:6443\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\nkind: ClusterConfiguration\nkubernetesVersion: v1.17.0-alpha.0.492+6a67aecf55063c\nnetworking:\n  podSubnet: 10.244.0.0/16\n  serviceSubnet: 10.96.0.0/12\nscheduler:\n  extraArgs: null\n---\napiVersion: kubeadm.k8s.io/v1beta2\nbootstrapTokens:\n- token: abcdef.0123456789abcdef\nkind: InitConfiguration\nlocalAPIEndpoint:\n  advertiseAddress: 172.17.0.2\n  bindPort: 6443\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.2\n---\napiVersion: kubeadm.k8s.io/v1beta2\ncontrolPlane:\n  localAPIEndpoint:\n    advertiseAddress: 172.17.0.2\n    bindPort: 6443\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.2:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.2\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nevictionHard:\n  imagefs.available: 0%\n  nodefs.available: 0%\n  nodefs.inodesFree: 0%\nimageGCHighThresholdPercent: 100\nkind: KubeletConfiguration\n---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
I0823 01:27:30.208] time="01:27:30" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-control-plane mkdir -p /kind]"
I0823 01:27:30.681] time="01:27:30" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -i kind-worker cp /dev/stdin /kind/kubeadm.conf]"
I0823 01:27:30.715] time="01:27:30" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -i kind-control-plane cp /dev/stdin /kind/kubeadm.conf]"
I0823 01:27:30.729] time="01:27:30" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -i kind-worker2 cp /dev/stdin /kind/kubeadm.conf]"
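Rather than bind-mounting the config, kind streams it into each node container over stdin, which is what the mkdir -p /kind and cp /dev/stdin /kind/kubeadm.conf pairs above do. Reproduced by hand as a sketch (the local file name kubeadm.conf is an assumption):

# Copy a local kubeadm config into a node container via docker exec's stdin.
docker exec --privileged kind-worker mkdir -p /kind
docker exec --privileged -i kind-worker \
  cp /dev/stdin /kind/kubeadm.conf < kubeadm.conf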
I0823 01:27:31.196]  ✓ Creating kubeadm config 📜
I0823 01:27:31.197]  • Starting control-plane 🕹️  ...
I0823 01:27:31.197] time="01:27:31" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-control-plane kubeadm init --ignore-preflight-errors=all --config=/kind/kubeadm.conf --skip-token-print --v=6]"
I0823 01:28:25.693] time="01:28:25" level=debug msg="I0823 01:27:31.929443      24 initconfiguration.go:186] loading configuration from \"/kind/kubeadm.conf\"\n[config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta2, Kind=JoinConfiguration\nI0823 01:27:31.947945      24 feature_gate.go:216] feature gates: &{map[]}\n[init] Using Kubernetes version: v1.17.0-alpha.0.492+6a67aecf55063c\n[preflight] Running pre-flight checks\nI0823 01:27:31.949147      24 checks.go:576] validating Kubernetes and kubeadm version\nI0823 01:27:31.949209      24 checks.go:168] validating if the firewall is enabled and active\nI0823 01:27:31.981686      24 checks.go:203] validating availability of port 6443\nI0823 01:27:31.982051      24 checks.go:203] validating availability of port 10251\nI0823 01:27:31.982132      24 checks.go:203] validating availability of port 10252\nI0823 01:27:31.982170      24 checks.go:288] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml\nI0823 01:27:31.982187      24 checks.go:288] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml\nI0823 01:27:31.982201      24 checks.go:288] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml\nI0823 01:27:31.982209      24 checks.go:288] validating the existence of file /etc/kubernetes/manifests/etcd.yaml\nI0823 01:27:31.982223      24 checks.go:434] validating if the connectivity type is via proxy or direct\nI0823 01:27:31.985761      24 checks.go:470] validating http connectivity to first IP address in the CIDR\nI0823 01:27:31.985881      24 checks.go:470] validating http connectivity to first IP address in the CIDR\nI0823 01:27:31.985949      24 checks.go:104] validating the container runtime\n\t[WARNING CRI]: container runtime is not running: output: NAME:\n   crictl info - Display information of the container runtime\n\nUSAGE:\n   crictl info [command options] [arguments...]\n\nOPTIONS:\n   --output value, -o value  Output format, One of: json|yaml (default: \"json\")\n   --quiet, -q               Do not show verbose information\n   \ntime=\"2019-08-23T01:27:34Z\" level=fatal msg=\"failed to connect: failed to connect: context deadline exceeded\"\n, error: exit status 1\nI0823 01:27:34.146041      24 checks.go:378] validating the presence of executable crictl\nI0823 01:27:34.146088      24 checks.go:337] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables\n\t[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist\nI0823 01:27:34.146143      24 checks.go:337] validating the contents of file /proc/sys/net/ipv4/ip_forward\nI0823 01:27:34.146346      24 checks.go:648] validating whether swap is enabled or not\nI0823 01:27:34.146399      24 checks.go:378] validating the presence of executable ip\nI0823 01:27:34.146512      24 checks.go:378] validating the presence of executable iptables\nI0823 01:27:34.146550      24 checks.go:378] validating the presence of executable mount\nI0823 01:27:34.146600      24 checks.go:378] validating the presence of executable nsenter\nI0823 01:27:34.146646      24 checks.go:378] validating the presence of executable ebtables\nI0823 01:27:34.149289      24 checks.go:378] validating the presence of executable ethtool\nI0823 01:27:34.149447      24 checks.go:378] validating the presence of executable socat\nI0823 01:27:34.149526      24 checks.go:378] validating the presence of executable tc\nI0823 
01:27:34.149566      24 checks.go:378] validating the presence of executable touch\nI0823 01:27:34.149625      24 checks.go:519] running all checks\nI0823 01:27:34.162140      24 checks.go:408] checking whether the given node name is reachable using net.LookupHost\nI0823 01:27:34.167163      24 checks.go:617] validating kubelet version\nI0823 01:27:35.198675      24 checks.go:130] validating if the service is enabled and active\nI0823 01:27:35.264985      24 checks.go:203] validating availability of port 10250\nI0823 01:27:35.265097      24 checks.go:203] validating availability of port 2379\nI0823 01:27:35.265143      24 checks.go:203] validating availability of port 2380\nI0823 01:27:35.265178      24 checks.go:251] validating the existence and emptiness of directory /var/lib/etcd\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'\nI0823 01:27:36.296949      24 checks.go:837] image exists: k8s.gcr.io/kube-apiserver:v1.17.0-alpha.0.492_6a67aecf55063c\nI0823 01:27:36.308912      24 checks.go:837] image exists: k8s.gcr.io/kube-controller-manager:v1.17.0-alpha.0.492_6a67aecf55063c\nI0823 01:27:36.320018      24 checks.go:837] image exists: k8s.gcr.io/kube-scheduler:v1.17.0-alpha.0.492_6a67aecf55063c\nI0823 01:27:36.330947      24 checks.go:837] image exists: k8s.gcr.io/kube-proxy:v1.17.0-alpha.0.492_6a67aecf55063c\nI0823 01:27:36.349148      24 checks.go:843] pulling k8s.gcr.io/pause:3.1\nI0823 01:27:38.461667      24 checks.go:843] pulling k8s.gcr.io/etcd:3.3.10\nI0823 01:27:56.674085      24 checks.go:843] pulling k8s.gcr.io/coredns:1.5.0\nI0823 01:27:59.198306      24 kubelet.go:61] Stopping the kubelet\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\nI0823 01:27:59.252441      24 kubelet.go:79] Starting the kubelet\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Activating the kubelet service\n[certs] Using certificateDir folder \"/etc/kubernetes/pki\"\nI0823 01:27:59.350538      24 certs.go:104] creating a new certificate authority for ca\n[certs] Generating \"ca\" certificate and key\n[certs] Generating \"apiserver\" certificate and key\n[certs] apiserver serving cert is signed for DNS names [kind-control-plane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 172.17.0.2 172.17.0.2 127.0.0.1]\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\nI0823 01:28:00.230342      24 certs.go:104] creating a new certificate authority for front-proxy-ca\n[certs] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\nI0823 01:28:00.583827      24 certs.go:104] creating a new certificate authority for etcd-ca\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.17.0.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.17.0.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\nI0823 01:28:02.657235      24 certs.go:70] creating a 
new public/private key files for signing service account users\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"sa\" key and public key\nI0823 01:28:02.977061      24 kubeconfig.go:79] creating kubeconfig file for admin.conf\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\nI0823 01:28:03.557007      24 kubeconfig.go:79] creating kubeconfig file for kubelet.conf\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\nI0823 01:28:04.157136      24 kubeconfig.go:79] creating kubeconfig file for controller-manager.conf\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\nI0823 01:28:04.546313      24 kubeconfig.go:79] creating kubeconfig file for scheduler.conf\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\nI0823 01:28:04.762229      24 manifests.go:91] [control-plane] getting StaticPodSpecs\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\nI0823 01:28:04.781394      24 manifests.go:116] [control-plane] wrote static Pod manifest for component \"kube-apiserver\" to \"/etc/kubernetes/manifests/kube-apiserver.yaml\"\nI0823 01:28:04.781442      24 manifests.go:91] [control-plane] getting StaticPodSpecs\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\nI0823 01:28:04.783225      24 manifests.go:116] [control-plane] wrote static Pod manifest for component \"kube-controller-manager\" to \"/etc/kubernetes/manifests/kube-controller-manager.yaml\"\nI0823 01:28:04.783264      24 manifests.go:91] [control-plane] getting StaticPodSpecs\nI0823 01:28:04.784432      24 manifests.go:116] [control-plane] wrote static Pod manifest for component \"kube-scheduler\" to \"/etc/kubernetes/manifests/kube-scheduler.yaml\"\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\nI0823 01:28:04.785503      24 local.go:69] [etcd] wrote Static Pod manifest for a local etcd member to \"/etc/kubernetes/manifests/etcd.yaml\"\nI0823 01:28:04.785538      24 waitcontrolplane.go:80] [wait-control-plane] Waiting for the API server to be healthy\nI0823 01:28:04.787136      24 loader.go:375] Config loaded from file:  /etc/kubernetes/admin.conf\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". 
This can take up to 4m0s\nI0823 01:28:04.795272      24 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 1 milliseconds\nI0823 01:28:05.296815      24 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0823 01:28:05.796007      24 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0823 01:28:06.296023      24 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0823 01:28:06.796431      24 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0823 01:28:07.295987      24 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0823 01:28:07.797029      24 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0823 01:28:08.295929      24 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0823 01:28:08.796014      24 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0823 01:28:09.295984      24 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0823 01:28:09.795976      24 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0823 01:28:10.295961      24 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0823 01:28:10.795977      24 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0823 01:28:11.295949      24 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0823 01:28:11.795995      24 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0823 01:28:12.295871      24 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0823 01:28:12.795977      24 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0823 01:28:13.295860      24 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0823 01:28:13.795996      24 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0823 01:28:14.296421      24 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0823 01:28:14.796532      24 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds\nI0823 01:28:20.279446      24 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s 500 Internal Server Error in 4981 milliseconds\nI0823 01:28:20.328015      24 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s 500 Internal Server Error in 32 milliseconds\nI0823 01:28:20.798024      24 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s 500 Internal Server Error in 2 milliseconds\nI0823 01:28:21.315556      24 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s 500 Internal Server Error in 19 milliseconds\nI0823 01:28:21.798249      24 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s 500 Internal Server Error in 2 milliseconds\nI0823 01:28:22.336311      24 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s 500 Internal Server Error in 40 milliseconds\nI0823 01:28:22.804414      24 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s 500 
Internal Server Error in 8 milliseconds\nI0823 01:28:23.304267      24 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s 500 Internal Server Error in 8 milliseconds\nI0823 01:28:23.993943      24 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s 200 OK in 198 milliseconds\nI0823 01:28:23.994067      24 uploadconfig.go:108] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap\n[apiclient] All control plane components are healthy after 19.203083 seconds\n[upload-config] Storing the configuration used in ConfigMap \"kubeadm-config\" in the \"kube-system\" Namespace\nI0823 01:28:24.014170      24 round_trippers.go:443] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 9 milliseconds\nI0823 01:28:24.026125      24 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 201 Created in 10 milliseconds\nI0823 01:28:24.039365      24 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 201 Created in 12 milliseconds\n[kubelet] Creating a ConfigMap \"kubelet-config-1.17\" in namespace kube-system with the configuration for the kubelets in the cluster\nI0823 01:28:24.041193      24 uploadconfig.go:122] [upload-config] Uploading the kubelet component config to a ConfigMap\nI0823 01:28:24.058773      24 round_trippers.go:443] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 14 milliseconds\nI0823 01:28:24.067808      24 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 201 Created in 8 milliseconds\nI0823 01:28:24.074104      24 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 201 Created in 5 milliseconds\nI0823 01:28:24.074647      24 uploadconfig.go:127] [upload-config] Preserving the CRISocket information for the control-plane node\nI0823 01:28:24.074676      24 patchnode.go:30] [patchnode] Uploading the CRI Socket information \"/run/containerd/containerd.sock\" to the Node API object \"kind-control-plane\" as an annotation\nI0823 01:28:24.578980      24 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane 200 OK in 3 milliseconds\nI0823 01:28:24.589908      24 round_trippers.go:443] PATCH https://172.17.0.2:6443/api/v1/nodes/kind-control-plane 200 OK in 7 milliseconds\n[upload-certs] Skipping phase. 
Please see --upload-certs\n[mark-control-plane] Marking the node kind-control-plane as control-plane by adding the label \"node-role.kubernetes.io/master=''\"\n[mark-control-plane] Marking the node kind-control-plane as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]\nI0823 01:28:25.094011      24 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane 200 OK in 3 milliseconds\nI0823 01:28:25.102345      24 round_trippers.go:443] PATCH https://172.17.0.2:6443/api/v1/nodes/kind-control-plane 200 OK in 7 milliseconds\n[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles\nI0823 01:28:25.113685      24 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/secrets/bootstrap-token-abcdef 404 Not Found in 10 milliseconds\nI0823 01:28:25.122744      24 round_trippers.go:443] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/secrets 201 Created in 6 milliseconds\n[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials\nI0823 01:28:25.128410      24 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 4 milliseconds\n[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token\nI0823 01:28:25.132879      24 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 3 milliseconds\n[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster\nI0823 01:28:25.137695      24 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 4 milliseconds\nI0823 01:28:25.138303      24 clusterinfo.go:45] [bootstrap-token] loading admin kubeconfig\n[bootstrap-token] Creating the \"cluster-info\" ConfigMap in the \"kube-public\" namespace\nI0823 01:28:25.139154      24 loader.go:375] Config loaded from file:  /etc/kubernetes/admin.conf\nI0823 01:28:25.139178      24 clusterinfo.go:53] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfig\nI0823 01:28:25.139601      24 clusterinfo.go:65] [bootstrap-token] creating/updating ConfigMap in kube-public namespace\nI0823 01:28:25.144054      24 round_trippers.go:443] POST https://172.17.0.2:6443/api/v1/namespaces/kube-public/configmaps 201 Created in 4 milliseconds\nI0823 01:28:25.144240      24 clusterinfo.go:79] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace\nI0823 01:28:25.148850      24 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles 201 Created in 4 milliseconds\nI0823 01:28:25.153741      24 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings 201 Created in 4 milliseconds\nI0823 01:28:25.156830      24 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kube-dns 404 Not Found in 2 milliseconds\nI0823 01:28:25.159795      24 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/coredns 404 Not Found in 2 milliseconds\nI0823 01:28:25.163457      24 round_trippers.go:443] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 3 
milliseconds\nI0823 01:28:25.178254      24 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/clusterroles 201 Created in 13 milliseconds\nI0823 01:28:25.189212      24 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 9 milliseconds\nI0823 01:28:25.199880      24 round_trippers.go:443] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/serviceaccounts 201 Created in 9 milliseconds\nI0823 01:28:25.263394      24 round_trippers.go:443] POST https://172.17.0.2:6443/apis/apps/v1/namespaces/kube-system/deployments 201 Created in 47 milliseconds\n[addons] Applied essential addon: CoreDNS\nI0823 01:28:25.335467      24 round_trippers.go:443] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/services 201 Created in 58 milliseconds\nI0823 01:28:25.340879      24 round_trippers.go:443] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/serviceaccounts 201 Created in 4 milliseconds\nI0823 01:28:25.490724      24 request.go:538] Throttling request took 147.802432ms, request: POST:https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps\nI0823 01:28:25.496945      24 round_trippers.go:443] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 6 milliseconds\nI0823 01:28:25.523154      24 round_trippers.go:443] POST https://172.17.0.2:6443/apis/apps/v1/namespaces/kube-system/daemonsets 201 Created in 18 milliseconds\nI0823 01:28:25.533110      24 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 9 milliseconds\nI0823 01:28:25.540419      24 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 201 Created in 7 milliseconds\nI0823 01:28:25.577069      24 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 201 Created in 36 milliseconds\n[addons] Applied essential addon: kube-proxy\nI0823 01:28:25.578341      24 loader.go:375] Config loaded from file:  /etc/kubernetes/admin.conf\nI0823 01:28:25.579429      24 loader.go:375] Config loaded from file:  /etc/kubernetes/admin.conf\n\nYour Kubernetes control-plane has initialized successfully!\n\nTo start using your cluster, you need to run the following as a regular user:\n\n  mkdir -p $HOME/.kube\n  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\n  sudo chown $(id -u):$(id -g) $HOME/.kube/config\n\nYou should now deploy a pod network to the cluster.\nRun \"kubectl apply -f [podnetwork].yaml\" with one of the options listed at:\n  https://kubernetes.io/docs/concepts/cluster-administration/addons/\n\nYou can now join any number of control-plane nodes by copying certificate authorities \nand service account keys on each node and then running the following as root:\n\n  kubeadm join 172.17.0.2:6443 --token <value withheld> \\\n    --discovery-token-ca-cert-hash sha256:f10a1ff8a8a1b6739eb4151be2cf1987030fa6c2503e7985dccecf2efccca864 \\\n    --control-plane \t  \n\nThen you can join any number of worker nodes by running the following on each as root:\n\nkubeadm join 172.17.0.2:6443 --token <value withheld> \\\n    --discovery-token-ca-cert-hash sha256:f10a1ff8a8a1b6739eb4151be2cf1987030fa6c2503e7985dccecf2efccca864 "
I0823 01:28:25.694] time="01:28:25" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{(index (index .NetworkSettings.Ports \"6443/tcp\") 0).HostPort}} kind-control-plane]"
I0823 01:28:25.783] time="01:28:25" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-control-plane cat /etc/kubernetes/admin.conf]"
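The two commands above gather what is needed to reach the API server from the host: the random host port docker mapped to the node's 6443/tcp, and the admin kubeconfig. Combining them the way the generated config's comment earlier describes ("rewriting the kubeconfig to point to localhost") might look like this sketch; the sed rewrite and the output file name are assumptions, not shown in the log:

# Host port mapped to the API server's 6443/tcp on the control-plane node.
hostport=$(docker inspect -f \
  '{{(index (index .NetworkSettings.Ports "6443/tcp") 0).HostPort}}' \
  kind-control-plane)
# Extract the admin kubeconfig and point it at the localhost mapping.
docker exec --privileged kind-control-plane cat /etc/kubernetes/admin.conf \
  | sed "s#server: https://.*#server: https://127.0.0.1:${hostport}#" > kubeconfig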
I0823 01:28:26.461]  ✓ Starting control-plane 🕹️
I0823 01:28:26.461]  • Installing CNI 🔌  ...
I0823 01:28:26.461] time="01:28:26" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-control-plane cat /kind/manifests/default-cni.yaml]"
I0823 01:28:26.916] time="01:28:26" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -i kind-control-plane kubectl create --kubeconfig=/etc/kubernetes/admin.conf -f -]"
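The default CNI manifest ships inside the node image; the two commands above read it out and feed it straight back to kubectl over stdin. Chained into one pipeline as a sketch:

# Read the bundled CNI manifest from the node and apply it via stdin.
docker exec --privileged kind-control-plane cat /kind/manifests/default-cni.yaml \
  | docker exec --privileged -i kind-control-plane \
      kubectl create --kubeconfig=/etc/kubernetes/admin.conf -f -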
I0823 01:28:28.507]  ✓ Installing CNI 🔌
I0823 01:28:28.507]  • Installing StorageClass 💾  ...
I0823 01:28:28.507] time="01:28:28" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -i kind-control-plane kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f -]"
I0823 01:28:29.333]  ✓ Installing StorageClass 💾
I0823 01:28:29.334]  • Joining worker nodes 🚜  ...
I0823 01:28:29.334] time="01:28:29" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-worker2 kubeadm join --config /kind/kubeadm.conf --ignore-preflight-errors=all --v=6]"
I0823 01:28:29.335] time="01:28:29" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-worker kubeadm join --config /kind/kubeadm.conf --ignore-preflight-errors=all --v=6]"
I0823 01:28:59.550] time="01:28:59" level=debug msg="I0823 01:28:29.810058     499 join.go:363] [preflight] found NodeName empty; using OS hostname as NodeName\nI0823 01:28:29.810113     499 joinconfiguration.go:75] loading configuration from \"/kind/kubeadm.conf\"\nI0823 01:28:29.813194     499 preflight.go:90] [preflight] Running general checks\nI0823 01:28:29.813300     499 checks.go:251] validating the existence and emptiness of directory /etc/kubernetes/manifests\nI0823 01:28:29.813315     499 checks.go:288] validating the existence of file /etc/kubernetes/kubelet.conf\nI0823 01:28:29.813324     499 checks.go:288] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf\nI0823 01:28:29.813332     499 checks.go:104] validating the container runtime\n[preflight] Running pre-flight checks\nI0823 01:28:29.826989     499 checks.go:378] validating the presence of executable crictl\nI0823 01:28:29.827063     499 checks.go:337] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables\n\t[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist\nI0823 01:28:29.827160     499 checks.go:337] validating the contents of file /proc/sys/net/ipv4/ip_forward\nI0823 01:28:29.827222     499 checks.go:648] validating whether swap is enabled or not\nI0823 01:28:29.827269     499 checks.go:378] validating the presence of executable ip\nI0823 01:28:29.827341     499 checks.go:378] validating the presence of executable iptables\nI0823 01:28:29.827374     499 checks.go:378] validating the presence of executable mount\nI0823 01:28:29.827398     499 checks.go:378] validating the presence of executable nsenter\nI0823 01:28:29.827435     499 checks.go:378] validating the presence of executable ebtables\nI0823 01:28:29.827499     499 checks.go:378] validating the presence of executable ethtool\nI0823 01:28:29.827528     499 checks.go:378] validating the presence of executable socat\nI0823 01:28:29.827567     499 checks.go:378] validating the presence of executable tc\nI0823 01:28:29.827600     499 checks.go:378] validating the presence of executable touch\nI0823 01:28:29.827635     499 checks.go:519] running all checks\nI0823 01:28:29.835993     499 checks.go:408] checking whether the given node name is reachable using net.LookupHost\nI0823 01:28:29.836414     499 checks.go:617] validating kubelet version\nI0823 01:28:30.045462     499 checks.go:130] validating if the service is enabled and active\nI0823 01:28:30.068296     499 checks.go:203] validating availability of port 10250\nI0823 01:28:30.068645     499 checks.go:288] validating the existence of file /etc/kubernetes/pki/ca.crt\nI0823 01:28:30.068665     499 checks.go:434] validating if the connectivity type is via proxy or direct\nI0823 01:28:30.068714     499 join.go:433] [preflight] Discovering cluster-info\nI0823 01:28:30.068786     499 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.2:6443\"\nI0823 01:28:30.069614     499 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.2:6443\"\nI0823 01:28:30.090814     499 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 21 milliseconds\nI0823 01:28:30.092052     499 token.go:202] [discovery] Failed to connect to API Server \"172.17.0.2:6443\": token id \"abcdef\" is invalid for this cluster or it has expired. 
Use \"kubeadm token create\" on the control-plane node to create a new valid token\nI0823 01:28:35.092234     499 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.2:6443\"\nI0823 01:28:35.093838     499 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.2:6443\"\nI0823 01:28:35.102068     499 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 3 milliseconds\nI0823 01:28:35.102291     499 token.go:202] [discovery] Failed to connect to API Server \"172.17.0.2:6443\": token id \"abcdef\" is invalid for this cluster or it has expired. Use \"kubeadm token create\" on the control-plane node to create a new valid token\nI0823 01:28:40.102507     499 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.2:6443\"\nI0823 01:28:40.103300     499 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.2:6443\"\nI0823 01:28:40.106282     499 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 2 milliseconds\nI0823 01:28:40.106565     499 token.go:202] [discovery] Failed to connect to API Server \"172.17.0.2:6443\": token id \"abcdef\" is invalid for this cluster or it has expired. Use \"kubeadm token create\" on the control-plane node to create a new valid token\nI0823 01:28:45.106727     499 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.2:6443\"\nI0823 01:28:45.107437     499 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.2:6443\"\nI0823 01:28:45.119509     499 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 11 milliseconds\nI0823 01:28:45.122458     499 token.go:109] [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server \"172.17.0.2:6443\"\nI0823 01:28:45.122491     499 token.go:205] [discovery] Successfully established connection with API Server \"172.17.0.2:6443\"\nI0823 01:28:45.122521     499 discovery.go:51] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process\nI0823 01:28:45.122539     499 join.go:447] [preflight] Fetching init configuration\nI0823 01:28:45.122546     499 join.go:485] [preflight] Retrieving KubeConfig objects\n[preflight] Reading configuration from the cluster...\n[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'\nI0823 01:28:45.145918     499 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 200 OK in 22 milliseconds\nI0823 01:28:45.158043     499 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kube-proxy 200 OK in 10 milliseconds\nI0823 01:28:45.164023     499 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.17 200 OK in 4 milliseconds\nI0823 01:28:45.167718     499 interface.go:384] Looking for default routes with IPv4 addresses\nI0823 01:28:45.167769     499 interface.go:389] Default route transits interface \"eth0\"\nI0823 01:28:45.167935     499 interface.go:196] Interface eth0 is up\nI0823 01:28:45.168017     499 interface.go:244] Interface \"eth0\" has 1 addresses :[172.17.0.4/16].\nI0823 01:28:45.168041     499 interface.go:211] Checking addr  
172.17.0.4/16.\nI0823 01:28:45.168052     499 interface.go:218] IP found 172.17.0.4\nI0823 01:28:45.168085     499 interface.go:250] Found valid IPv4 address 172.17.0.4 for interface \"eth0\".\nI0823 01:28:45.168094     499 interface.go:395] Found active IP 172.17.0.4 \nI0823 01:28:45.168211     499 preflight.go:101] [preflight] Running configuration dependant checks\nI0823 01:28:45.168245     499 controlplaneprepare.go:211] [download-certs] Skipping certs download\nI0823 01:28:45.168260     499 kubelet.go:107] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf\nI0823 01:28:45.171352     499 kubelet.go:115] [kubelet-start] writing CA certificate at /etc/kubernetes/pki/ca.crt\nI0823 01:28:45.172083     499 loader.go:375] Config loaded from file:  /etc/kubernetes/bootstrap-kubelet.conf\nI0823 01:28:45.172794     499 kubelet.go:133] [kubelet-start] Stopping the kubelet\n[kubelet-start] Downloading configuration for the kubelet from the \"kubelet-config-1.17\" ConfigMap in the kube-system namespace\nI0823 01:28:45.196250     499 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.17 200 OK in 3 milliseconds\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\nI0823 01:28:45.213749     499 kubelet.go:150] [kubelet-start] Starting the kubelet\n[kubelet-start] Activating the kubelet service\n[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...\nI0823 01:28:46.366923     499 loader.go:375] Config loaded from file:  /etc/kubernetes/kubelet.conf\nI0823 01:28:46.393386     499 loader.go:375] Config loaded from file:  /etc/kubernetes/kubelet.conf\nI0823 01:28:46.400590     499 kubelet.go:168] [kubelet-start] preserving the crisocket information for the node\nI0823 01:28:46.400638     499 patchnode.go:30] [patchnode] Uploading the CRI Socket information \"/run/containerd/containerd.sock\" to the Node API object \"kind-worker\" as an annotation\nI0823 01:28:46.911651     499 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 10 milliseconds\nI0823 01:28:47.410239     499 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 9 milliseconds\nI0823 01:28:47.903472     499 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0823 01:28:48.403484     499 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0823 01:28:48.904240     499 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds\nI0823 01:28:49.405606     499 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 4 milliseconds\nI0823 01:28:49.904444     499 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds\nI0823 01:28:50.403577     499 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0823 01:28:50.903504     499 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0823 01:28:51.403332     499 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0823 01:28:51.903279     499 
round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0823 01:28:52.403452     499 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0823 01:28:52.903094     499 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0823 01:28:53.403396     499 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0823 01:28:53.903440     499 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0823 01:28:54.404323     499 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds\nI0823 01:28:54.903511     499 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0823 01:28:55.404515     499 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds\nI0823 01:28:55.903520     499 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0823 01:28:56.403616     499 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0823 01:28:56.903121     499 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0823 01:28:57.404926     499 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0823 01:28:57.903698     499 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0823 01:28:58.403417     499 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0823 01:28:58.903594     499 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds\nI0823 01:28:59.404401     499 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker 200 OK in 3 milliseconds\nI0823 01:28:59.414421     499 round_trippers.go:443] PATCH https://172.17.0.2:6443/api/v1/nodes/kind-worker 200 OK in 6 milliseconds\n\nThis node has joined the cluster:\n* Certificate signing request was sent to apiserver and a response was received.\n* The Kubelet was informed of the new secure connection details.\n\nRun 'kubectl get nodes' on the control-plane to see this node join the cluster.\n"
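The join output ends by suggesting kubectl get nodes on the control-plane. A quick verification along those lines, assuming the kubeconfig file extracted in the earlier sketch:

# List nodes and block until the new worker reports Ready.
kubectl --kubeconfig kubeconfig get nodes
kubectl --kubeconfig kubeconfig wait --for=condition=Ready \
  node/kind-worker --timeout=120s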
I0823 01:29:00.021] time="01:29:00" level=debug msg="I0823 01:28:29.752745     513 join.go:363] [preflight] found NodeName empty; using OS hostname as NodeName\nI0823 01:28:29.752808     513 joinconfiguration.go:75] loading configuration from \"/kind/kubeadm.conf\"\n[preflight] Running pre-flight checks\nI0823 01:28:29.756473     513 preflight.go:90] [preflight] Running general checks\nI0823 01:28:29.756584     513 checks.go:251] validating the existence and emptiness of directory /etc/kubernetes/manifests\nI0823 01:28:29.756600     513 checks.go:288] validating the existence of file /etc/kubernetes/kubelet.conf\nI0823 01:28:29.756617     513 checks.go:288] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf\nI0823 01:28:29.756634     513 checks.go:104] validating the container runtime\nI0823 01:28:29.784447     513 checks.go:378] validating the presence of executable crictl\nI0823 01:28:29.784530     513 checks.go:337] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables\n\t[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist\nI0823 01:28:29.784623     513 checks.go:337] validating the contents of file /proc/sys/net/ipv4/ip_forward\nI0823 01:28:29.784689     513 checks.go:648] validating whether swap is enabled or not\nI0823 01:28:29.784735     513 checks.go:378] validating the presence of executable ip\nI0823 01:28:29.784816     513 checks.go:378] validating the presence of executable iptables\nI0823 01:28:29.784858     513 checks.go:378] validating the presence of executable mount\nI0823 01:28:29.784880     513 checks.go:378] validating the presence of executable nsenter\nI0823 01:28:29.784972     513 checks.go:378] validating the presence of executable ebtables\nI0823 01:28:29.785054     513 checks.go:378] validating the presence of executable ethtool\nI0823 01:28:29.785087     513 checks.go:378] validating the presence of executable socat\nI0823 01:28:29.785129     513 checks.go:378] validating the presence of executable tc\nI0823 01:28:29.785162     513 checks.go:378] validating the presence of executable touch\nI0823 01:28:29.785209     513 checks.go:519] running all checks\nI0823 01:28:29.793652     513 checks.go:408] checking whether the given node name is reachable using net.LookupHost\nI0823 01:28:29.793949     513 checks.go:617] validating kubelet version\nI0823 01:28:30.019625     513 checks.go:130] validating if the service is enabled and active\nI0823 01:28:30.043393     513 checks.go:203] validating availability of port 10250\nI0823 01:28:30.043618     513 checks.go:288] validating the existence of file /etc/kubernetes/pki/ca.crt\nI0823 01:28:30.043634     513 checks.go:434] validating if the connectivity type is via proxy or direct\nI0823 01:28:30.043680     513 join.go:433] [preflight] Discovering cluster-info\nI0823 01:28:30.043760     513 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.2:6443\"\nI0823 01:28:30.046318     513 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.2:6443\"\nI0823 01:28:30.057783     513 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 11 milliseconds\nI0823 01:28:30.059085     513 token.go:202] [discovery] Failed to connect to API Server \"172.17.0.2:6443\": token id \"abcdef\" is invalid for this cluster or it has expired. 
Use \"kubeadm token create\" on the control-plane node to create a new valid token\nI0823 01:28:35.059269     513 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.2:6443\"\nI0823 01:28:35.060975     513 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.2:6443\"\nI0823 01:28:35.065766     513 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 4 milliseconds\nI0823 01:28:35.066054     513 token.go:202] [discovery] Failed to connect to API Server \"172.17.0.2:6443\": token id \"abcdef\" is invalid for this cluster or it has expired. Use \"kubeadm token create\" on the control-plane node to create a new valid token\nI0823 01:28:40.066254     513 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.2:6443\"\nI0823 01:28:40.067093     513 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.2:6443\"\nI0823 01:28:40.070188     513 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 2 milliseconds\nI0823 01:28:40.070449     513 token.go:202] [discovery] Failed to connect to API Server \"172.17.0.2:6443\": token id \"abcdef\" is invalid for this cluster or it has expired. Use \"kubeadm token create\" on the control-plane node to create a new valid token\nI0823 01:28:45.070615     513 token.go:199] [discovery] Trying to connect to API Server \"172.17.0.2:6443\"\nI0823 01:28:45.071419     513 token.go:74] [discovery] Created cluster-info discovery client, requesting info from \"https://172.17.0.2:6443\"\nI0823 01:28:45.074755     513 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 3 milliseconds\nI0823 01:28:45.076611     513 token.go:109] [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server \"172.17.0.2:6443\"\nI0823 01:28:45.076641     513 token.go:205] [discovery] Successfully established connection with API Server \"172.17.0.2:6443\"\nI0823 01:28:45.076668     513 discovery.go:51] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process\nI0823 01:28:45.076693     513 join.go:447] [preflight] Fetching init configuration\nI0823 01:28:45.076702     513 join.go:485] [preflight] Retrieving KubeConfig objects\n[preflight] Reading configuration from the cluster...\n[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'\nI0823 01:28:45.098686     513 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 200 OK in 21 milliseconds\nI0823 01:28:45.119716     513 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kube-proxy 200 OK in 15 milliseconds\nI0823 01:28:45.129685     513 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.17 200 OK in 8 milliseconds\nI0823 01:28:45.132497     513 interface.go:384] Looking for default routes with IPv4 addresses\nI0823 01:28:45.132914     513 interface.go:389] Default route transits interface \"eth0\"\nI0823 01:28:45.133249     513 interface.go:196] Interface eth0 is up\nI0823 01:28:45.133529     513 interface.go:244] Interface \"eth0\" has 1 addresses :[172.17.0.3/16].\nI0823 01:28:45.133726     513 interface.go:211] Checking addr  
172.17.0.3/16.\nI0823 01:28:45.133927     513 interface.go:218] IP found 172.17.0.3\nI0823 01:28:45.134099     513 interface.go:250] Found valid IPv4 address 172.17.0.3 for interface \"eth0\".\nI0823 01:28:45.134282     513 interface.go:395] Found active IP 172.17.0.3 \nI0823 01:28:45.134560     513 preflight.go:101] [preflight] Running configuration dependant checks\nI0823 01:28:45.134777     513 controlplaneprepare.go:211] [download-certs] Skipping certs download\nI0823 01:28:45.135000     513 kubelet.go:107] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf\nI0823 01:28:45.137051     513 kubelet.go:115] [kubelet-start] writing CA certificate at /etc/kubernetes/pki/ca.crt\nI0823 01:28:45.138291     513 loader.go:375] Config loaded from file:  /etc/kubernetes/bootstrap-kubelet.conf\nI0823 01:28:45.139393     513 kubelet.go:133] [kubelet-start] Stopping the kubelet\n[kubelet-start] Downloading configuration for the kubelet from the \"kubelet-config-1.17\" ConfigMap in the kube-system namespace\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\nI0823 01:28:45.186073     513 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.17 200 OK in 6 milliseconds\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\nI0823 01:28:45.210495     513 kubelet.go:150] [kubelet-start] Starting the kubelet\n[kubelet-start] Activating the kubelet service\n[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...\nI0823 01:28:46.867953     513 loader.go:375] Config loaded from file:  /etc/kubernetes/kubelet.conf\nI0823 01:28:46.893964     513 loader.go:375] Config loaded from file:  /etc/kubernetes/kubelet.conf\nI0823 01:28:46.897392     513 kubelet.go:168] [kubelet-start] preserving the crisocket information for the node\nI0823 01:28:46.897427     513 patchnode.go:30] [patchnode] Uploading the CRI Socket information \"/run/containerd/containerd.sock\" to the Node API object \"kind-worker2\" as an annotation\nI0823 01:28:47.409870     513 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 12 milliseconds\nI0823 01:28:47.900699     513 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0823 01:28:48.403612     513 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 4 milliseconds\nI0823 01:28:48.901290     513 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds\nI0823 01:28:49.401844     513 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds\nI0823 01:28:49.900661     513 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0823 01:28:50.401555     513 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds\nI0823 01:28:50.901321     513 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds\nI0823 01:28:51.400787     513 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds\nI0823 01:28:51.900625     513 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0823 01:28:52.400622   
  513 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0823 01:28:52.900515     513 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0823 01:28:53.401275     513 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds\nI0823 01:28:53.901069     513 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds\nI0823 01:28:54.400778     513 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0823 01:28:54.900929     513 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds\nI0823 01:28:55.401787     513 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds\nI0823 01:28:55.900673     513 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0823 01:28:56.400386     513 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0823 01:28:56.900688     513 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0823 01:28:57.400572     513 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0823 01:28:57.900547     513 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0823 01:28:58.400849     513 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds\nI0823 01:28:58.900727     513 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds\nI0823 01:28:59.402083     513 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 404 Not Found in 4 milliseconds\nI0823 01:28:59.901958     513 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-worker2 200 OK in 4 milliseconds\n\nThis node has joined the cluster:\n* Certificate signing request was sent to apiserver and a response was received.\n* The Kubelet was informed of the new secure connection details.\n\nRun 'kubectl get nodes' on the control-plane to see this node join the cluster.\n\nI0823 01:28:59.910575     513 round_trippers.go:443] PATCH https://172.17.0.2:6443/api/v1/nodes/kind-worker2 200 OK in 5 milliseconds"
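The long run of 404s above is kubeadm's patchnode step polling GET /api/v1/nodes/kind-worker2 at roughly 500ms intervals until the kubelet's TLS bootstrap creates the Node object; only then is the CRI socket annotation PATCHed on. A minimal Python sketch of that wait loop, assuming plain HTTP access for illustration (the real client authenticates with the bootstrap kubeconfig's TLS credentials):

    import json
    import time
    import urllib.error
    import urllib.request

    API = "https://172.17.0.2:6443"   # API server endpoint seen in the log
    NODE = "kind-worker2"             # node being joined

    def wait_for_node(timeout=60.0, interval=0.5):
        """Poll the Node object until it exists, i.e. registration finished."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            try:
                with urllib.request.urlopen(f"{API}/api/v1/nodes/{NODE}") as resp:
                    return json.load(resp)       # 200 OK: node is registered
            except urllib.error.HTTPError as err:
                if err.code != 404:              # 404 just means "not yet"
                    raise
            time.sleep(interval)                 # ~500ms cadence, as in the log
        raise TimeoutError(f"node {NODE} never appeared")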
I0823 01:29:00.021]  ✓ Joining worker nodes 🚜
I0823 01:29:00.022]  • Waiting ≤ 1m0s for control-plane = Ready ⏳  ...
I0823 01:29:00.022] time="01:29:00" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-control-plane kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes --selector=node-role.kubernetes.io/master -o=jsonpath='{.items..status.conditions[-1:].status}']"
I0823 01:29:00.565]  ✓ Waiting ≤ 1m0s for control-plane = Ready ⏳
I0823 01:29:00.565]  • Ready after 1s 💚
I0823 01:29:00.565] Cluster creation complete. You can now use the cluster with:
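The readiness gate above shells out to kubectl with a jsonpath expression that selects master-labeled nodes and reads the status of each node's last condition (Ready, in kubectl's ordering). A rough Python equivalent of that probe, assuming kubectl and the admin kubeconfig path shown in the log:

    import subprocess

    def control_plane_ready():
        """Mirror kind's probe: last condition status of master-labeled nodes."""
        out = subprocess.check_output([
            "kubectl", "--kubeconfig=/etc/kubernetes/admin.conf",
            "get", "nodes", "--selector=node-role.kubernetes.io/master",
            "-o=jsonpath='{.items..status.conditions[-1:].status}'",
        ], text=True)
        # one token per matching node, e.g. "True"; quotes come from the jsonpath arg
        statuses = out.strip().strip("'").split()
        return bool(statuses) and all(s == "True" for s in statuses)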
... skipping 1040 lines ...
I0823 02:50:06.605] [02:50:06] Pod status is: Running
I0823 02:50:11.693] [02:50:11] Pod status is: Running
I0823 02:50:16.784] [02:50:16] Pod status is: Running
I0823 02:50:21.875] [02:50:21] Pod status is: Running
I0823 02:50:26.962] [02:50:26] Pod status is: Running
I0823 02:50:32.054] [02:50:32] Pod status is: Running
W0823 02:50:37.139] Error from server (NotFound): pods "e2e-conformance-test" not found
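The [02:50:xx] lines are the test harness polling the conformance pod's phase about every 5 seconds; the final NotFound means the pod was deleted between polls, which the script treats as a terminal condition. A hedged sketch of such a loop (the pod name and cadence come from the log; the wrapper itself is illustrative):

    import subprocess
    import time

    def watch_pod(name="e2e-conformance-test", interval=5):
        """Poll a pod's phase until it leaves Running or disappears."""
        while True:
            proc = subprocess.run(
                ["kubectl", "get", "pod", name, "-o=jsonpath={.status.phase}"],
                capture_output=True, text=True,
            )
            if proc.returncode != 0:
                # e.g. 'Error from server (NotFound)' once the pod is gone
                raise RuntimeError(proc.stderr.strip())
            print(f"Pod status is: {proc.stdout}")
            if proc.stdout not in ("Pending", "Running"):
                return proc.stdout               # Succeeded / Failed / Unknown
            time.sleep(interval)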
W0823 02:50:37.144] + cleanup
W0823 02:50:37.145] + kind export logs /workspace/_artifacts/logs
I0823 02:50:39.377] Exported logs to: /workspace/_artifacts/logs
W0823 02:50:39.477] + [[ true = true ]]
W0823 02:50:39.478] + kind delete cluster
I0823 02:50:39.578] Deleting cluster "kind" ...
... skipping 8 lines ...
W0823 02:50:47.273]     check(*cmd)
W0823 02:50:47.273]   File "/workspace/./test-infra/jenkins/../scenarios/execute.py", line 30, in check
W0823 02:50:47.273]     subprocess.check_call(cmd)
W0823 02:50:47.273]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0823 02:50:47.273]     raise CalledProcessError(retcode, cmd)
W0823 02:50:47.273] subprocess.CalledProcessError: Command '('bash', '-c', 'cd ./../../k8s.io/kubernetes && source ./../test-infra/experiment/kind-conformance-image-e2e.sh')' returned non-zero exit status 1
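The traceback pins the failure to the check() helper in scenarios/execute.py, a thin wrapper over subprocess.check_call, so the non-zero exit of the bash -c conformance script surfaces as CalledProcessError. Reconstructed from the frames above (logging simplified):

    import subprocess

    def check(*cmd):
        """Run a command, raising CalledProcessError on a non-zero exit."""
        subprocess.check_call(cmd)    # execute.py line 30 in the traceback

    # the invocation that failed, per the exception message:
    # check('bash', '-c',
    #       'cd ./../../k8s.io/kubernetes && '
    #       'source ./../test-infra/experiment/kind-conformance-image-e2e.sh')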
E0823 02:50:47.282] Command failed
I0823 02:50:47.282] process 685 exited with code 1 after 89.2m
E0823 02:50:47.283] FAIL: pull-kubernetes-conformance-image-test
I0823 02:50:47.283] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0823 02:50:47.870] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0823 02:50:47.919] process 179484 exited with code 0 after 0.0m
I0823 02:50:47.919] Call:  gcloud config get-value account
I0823 02:50:48.198] process 179496 exited with code 0 after 0.0m
I0823 02:50:48.199] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0823 02:50:48.199] Upload result and artifacts...
I0823 02:50:48.199] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/76443/pull-kubernetes-conformance-image-test/1164708853137281024
I0823 02:50:48.199] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/76443/pull-kubernetes-conformance-image-test/1164708853137281024/artifacts
W0823 02:50:49.319] CommandException: One or more URLs matched no objects.
E0823 02:50:49.468] Command failed
I0823 02:50:49.469] process 179508 exited with code 1 after 0.0m
W0823 02:50:49.469] Remote dir gs://kubernetes-jenkins/pr-logs/pull/76443/pull-kubernetes-conformance-image-test/1164708853137281024/artifacts not exist yet
I0823 02:50:49.470] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/76443/pull-kubernetes-conformance-image-test/1164708853137281024/artifacts
I0823 02:50:51.771] process 179650 exited with code 0 after 0.0m
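The ls-then-cp sequence above is how the uploader probes for the remote artifacts directory: a gsutil ls failing with "matched no objects" is read as "not exist yet", and the recursive cp (with -z gzip-compressing log, txt, and xml files in transit) creates it. A hedged sketch of that probe, reusing the bucket path from the log:

    import subprocess

    DEST = ("gs://kubernetes-jenkins/pr-logs/pull/76443/"
            "pull-kubernetes-conformance-image-test/1164708853137281024/artifacts")

    def upload_artifacts(local="/workspace/_artifacts"):
        """Probe the remote dir, then copy artifacts, compressing text types."""
        probe = subprocess.run(["gsutil", "ls", DEST], capture_output=True)
        if probe.returncode != 0:
            print(f"Remote dir {DEST} not exist yet")   # wording from the log
        subprocess.check_call([
            "gsutil", "-m", "-q", "-o", "GSUtil:use_magicfile=True",
            "cp", "-r", "-c", "-z", "log,txt,xml", local, DEST,
        ])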
W0823 02:50:51.772] metadata path /workspace/_artifacts/metadata.json does not exist
W0823 02:50:51.772] metadata not found or invalid, init with empty metadata
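Likewise, a missing or unparsable metadata.json is tolerated by starting from an empty dict, which is why the job can still report results even though the artifacts step partly failed. A minimal sketch of that fallback, assuming the workspace path from the log:

    import json
    import os

    def load_metadata(path="/workspace/_artifacts/metadata.json"):
        """Return job metadata, falling back to {} when absent or invalid."""
        if not os.path.isfile(path):
            return {}                  # 'metadata path ... does not exist'
        try:
            with open(path) as f:
                return json.load(f)
        except ValueError:             # 'metadata not found or invalid'
            return {}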
... skipping 23 lines ...