Result: FAILURE
Tests: 1 failed / 9 succeeded
Started: 2020-03-30 17:35
Elapsed: 20m12s
Revision: master
resultstore: https://source.cloud.google.com/results/invocations/59607e60-d0c0-4f27-9ed0-a2bd65d0c3c5/targets/test

Test Failures


task-06-e2e-kubeadm 5m0s

timeout: task did not complete within 5m0s as expected
				from junit_runner.xml



Show 9 Passed Tests

Show 1 Skipped Test

Error lines from build-log.txt

... skipping 240 lines ...
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /kind/systemd/kubelet.service.
Created symlink /etc/systemd/system/kubelet.service → /kind/systemd/kubelet.service.
time="17:37:52" level=debug msg="Running: [docker exec kind-build-e58ea119-7773-46a8-a682-f68d67c4b3ea mkdir -p /etc/systemd/system/kubelet.service.d]"
time="17:37:53" level=info msg="Adding /etc/systemd/system/kubelet.service.d/10-kubeadm.conf to the image"
time="17:37:53" level=debug msg="Running: [docker exec kind-build-e58ea119-7773-46a8-a682-f68d67c4b3ea cp /alter/bits/systemd/10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf]"
time="17:37:54" level=debug msg="Running: [docker exec kind-build-e58ea119-7773-46a8-a682-f68d67c4b3ea chown -R root:root /etc/systemd/system/kubelet.service.d/10-kubeadm.conf]"
time="17:37:55" level=debug msg="Running: [docker exec kind-build-e58ea119-7773-46a8-a682-f68d67c4b3ea /bin/sh -c echo \"KUBELET_EXTRA_ARGS=--fail-swap-on=false\" >> /etc/default/kubelet]"
time="17:37:56" level=debug msg="Running: [docker exec kind-build-e58ea119-7773-46a8-a682-f68d67c4b3ea /bin/sh -c which docker || true]"
time="17:37:57" level=info msg="Detected containerd as container runtime"
time="17:37:57" level=info msg="Pre loading images ..."
time="17:37:57" level=debug msg="Running: [docker exec kind-build-e58ea119-7773-46a8-a682-f68d67c4b3ea mkdir -p /kind/images]"
time="17:37:58" level=debug msg="Running: [docker exec kind-build-e58ea119-7773-46a8-a682-f68d67c4b3ea bash -c containerd & find /kind/images -name *.tar -print0 | xargs -r -0 -n 1 -P $(nproc) ctr --namespace=k8s.io images import --no-unpack && kill %1 && rm -rf /kind/images/*]"
time="2020-03-30T17:37:59.259368306Z" level=info msg="starting containerd" revision=7af311b4200b464a79c340b4e3a2799f8906ee8d version=v1.3.0-20-g7af311b4
time="2020-03-30T17:37:59.377009205Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
time="2020-03-30T17:37:59.377495332Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
time="2020-03-30T17:37:59.377576738Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
time="2020-03-30T17:37:59.377606590Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
time="2020-03-30T17:37:59.377702609Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
time="2020-03-30T17:37:59.377945379Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
time="2020-03-30T17:37:59.378008266Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
time="2020-03-30T17:37:59.378102495Z" level=info msg="metadata content store policy set" policy=shared
time="2020-03-30T17:37:59.742885402Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
time="2020-03-30T17:37:59.742964133Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
time="2020-03-30T17:37:59.743066073Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
time="2020-03-30T17:37:59.743099978Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
time="2020-03-30T17:37:59.743159769Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
... skipping 20 lines ...
time="2020-03-30T17:37:59.745784180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
time="2020-03-30T17:37:59.745809278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
time="2020-03-30T17:37:59.746287099Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false} UntrustedWorkloadRuntime:{Type: Engine: PodAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false} Runtimes:map[runc:{Type:io.containerd.runc.v1 Engine: PodAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false} test-handler:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false}] NoPivot:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate:} Registry:{Mirrors:map[docker.io:{Endpoints:[https://registry-1.docker.io]}] Configs:map[] Auths:map[]} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SandboxImage:k8s.gcr.io/pause:3.1 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
time="2020-03-30T17:37:59.746675171Z" level=warning msg="`default_runtime` is deprecated, please use `default_runtime_name` to reference the default configuration you have defined in `runtimes`"
time="2020-03-30T17:37:59.746907986Z" level=info msg="Connect containerd service"
time="2020-03-30T17:37:59.749258904Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
time="2020-03-30T17:37:59.749627917Z" level=error msg="Failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
time="2020-03-30T17:37:59.750065819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
time="2020-03-30T17:37:59.750187169Z" level=info msg="Start subscribing containerd event"
time="2020-03-30T17:37:59.750298973Z" level=info msg="Start recovering state"
time="2020-03-30T17:37:59.750594894Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
time="2020-03-30T17:37:59.750799930Z" level=info msg=serving... address=/run/containerd/containerd.sock
time="2020-03-30T17:37:59.750828574Z" level=info msg="containerd successfully booted in 0.493919s"
time="2020-03-30T17:37:59.756936146Z" level=info msg="Start event monitor"
time="2020-03-30T17:37:59.758441295Z" level=info msg="Start snapshots syncer"
time="2020-03-30T17:37:59.758478105Z" level=info msg="Start streaming server"
time="2020-03-30T17:38:02.929355259Z" level=error msg="(*service).Write failed" error="rpc error: code = Unavailable desc = ref k8s.io/1/tar-7642396919f26df99df1ff78e89004e3ef88e4c470e96082598862bcd55d68d2/layer.tar locked: unavailable" ref=tar-7642396919f26df99df1ff78e89004e3ef88e4c470e96082598862bcd55d68d2/layer.tar total=53882368
time="2020-03-30T17:38:03.578564439Z" level=error msg="(*service).Write failed" error="rpc error: code = Unavailable desc = ref k8s.io/1/tar-7642396919f26df99df1ff78e89004e3ef88e4c470e96082598862bcd55d68d2/layer.tar locked: unavailable" ref=tar-7642396919f26df99df1ff78e89004e3ef88e4c470e96082598862bcd55d68d2/layer.tar total=53882368
time="2020-03-30T17:38:04.099074673Z" level=error msg="(*service).Write failed" error="rpc error: code = Unavailable desc = ref k8s.io/1/tar-7642396919f26df99df1ff78e89004e3ef88e4c470e96082598862bcd55d68d2/layer.tar locked: unavailable" ref=tar-7642396919f26df99df1ff78e89004e3ef88e4c470e96082598862bcd55d68d2/layer.tar total=53882368
time="2020-03-30T17:38:09.057131218Z" level=info msg="ImageCreate event &ImageCreate{Name:k8s.gcr.io/kube-scheduler:v1.19.0-alpha.1.127_fce286e227b022,Labels:map[string]string{},XXX_unrecognized:[],}"
time="2020-03-30T17:38:09.127643398Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9b1caa684a8d91450c969c12a58403ad472c9f08557463fad5687adfa65a4e30,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2020-03-30T17:38:09.128244251Z" level=info msg="ImageUpdate event &ImageUpdate{Name:k8s.gcr.io/kube-scheduler:v1.19.0-alpha.1.127_fce286e227b022,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2020-03-30T17:38:12.277106775Z" level=error msg="(*service).Write failed" error="rpc error: code = Unavailable desc = ref k8s.io/1/tar-7642396919f26df99df1ff78e89004e3ef88e4c470e96082598862bcd55d68d2/layer.tar locked: unavailable" ref=tar-7642396919f26df99df1ff78e89004e3ef88e4c470e96082598862bcd55d68d2/layer.tar total=53882368
time="2020-03-30T17:38:13.082521475Z" level=error msg="(*service).Write failed" error="rpc error: code = Unavailable desc = ref k8s.io/1/tar-7642396919f26df99df1ff78e89004e3ef88e4c470e96082598862bcd55d68d2/layer.tar locked: unavailable" ref=tar-7642396919f26df99df1ff78e89004e3ef88e4c470e96082598862bcd55d68d2/layer.tar total=53882368
time="2020-03-30T17:38:13.892574811Z" level=error msg="(*service).Write failed" error="rpc error: code = Unavailable desc = ref k8s.io/1/tar-7642396919f26df99df1ff78e89004e3ef88e4c470e96082598862bcd55d68d2/layer.tar locked: unavailable" ref=tar-7642396919f26df99df1ff78e89004e3ef88e4c470e96082598862bcd55d68d2/layer.tar total=53882368
time="2020-03-30T17:38:14.962818089Z" level=info msg="ImageCreate event &ImageCreate{Name:k8s.gcr.io/kube-proxy:v1.19.0-alpha.1.127_fce286e227b022,Labels:map[string]string{},XXX_unrecognized:[],}"
time="2020-03-30T17:38:14.983873892Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6115ccef6b2050b1137f1bbd805c9b2c5bd878877f6033d54c6aac3371b986c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2020-03-30T17:38:14.985258488Z" level=info msg="ImageCreate event &ImageCreate{Name:k8s.gcr.io/kube-controller-manager:v1.19.0-alpha.1.127_fce286e227b022,Labels:map[string]string{},XXX_unrecognized:[],}"
time="2020-03-30T17:38:15.008505258Z" level=info msg="ImageUpdate event &ImageUpdate{Name:k8s.gcr.io/kube-proxy:v1.19.0-alpha.1.127_fce286e227b022,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2020-03-30T17:38:15.010052712Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c5d1571767fcf983eb8c05ac0d39adae9c81d96dcf5df4b03be4315cae84cff2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2020-03-30T17:38:15.011465246Z" level=info msg="ImageUpdate event &ImageUpdate{Name:k8s.gcr.io/kube-controller-manager:v1.19.0-alpha.1.127_fce286e227b022,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
... skipping 108 lines ...
time="17:40:14" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-regular-lb]"

kinder-regular-control-plane-1:$ Preparing /kind/kubeadm.conf
time="17:40:15" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-regular-control-plane-1]"
time="17:40:15" level=debug msg="Running: [docker exec kinder-regular-control-plane-1 kubeadm version -o=short]"
time="17:40:16" level=debug msg="Preparing kubeadm config v1beta2 (kubeadm version 1.19.0-alpha.1.127+fce286e227b022)"
time="17:40:16" level=debug msg="generated config:\napiServer:\n  certSANs:\n  - localhost\n  - 172.17.0.4\napiVersion: kubeadm.k8s.io/v1beta2\nclusterName: kinder-regular\ncontrolPlaneEndpoint: 172.17.0.7:6443\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\nkind: ClusterConfiguration\nkubernetesVersion: v1.19.0-alpha.1.127+fce286e227b022\nnetworking:\n  podSubnet: 192.168.0.0/16\n  serviceSubnet: \"\"\nscheduler:\n  extraArgs: null\n---\napiVersion: kubeadm.k8s.io/v1beta2\nbootstrapTokens:\n- token: abcdef.0123456789abcdef\nkind: InitConfiguration\nlocalAPIEndpoint:\n  advertiseAddress: 172.17.0.4\n  bindPort: 6443\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.4\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nevictionHard:\n  imagefs.available: 0%\n  nodefs.available: 0%\n  nodefs.inodesFree: 0%\nimageGCHighThresholdPercent: 100\nkind: KubeletConfiguration\n---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
time="17:40:16" level=debug msg="Running: [docker cp /tmp/kinder-regular-control-plane-1-113254283 kinder-regular-control-plane-1:/kind/kubeadm.conf]"

kinder-regular-lb:$ Updating load balancer configuration with 1 control plane backends
time="17:40:18" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-regular-control-plane-1]"
time="17:40:18" level=debug msg="Writing loadbalancer config on kinder-regular-lb..."
time="17:40:18" level=debug msg="Running: [docker cp /tmp/kinder-regular-lb-937793902 kinder-regular-lb:/usr/local/etc/haproxy/haproxy.cfg]"
... skipping 31 lines ...
I0330 17:40:20.907407     184 checks.go:376] validating the presence of executable ebtables
I0330 17:40:20.907510     184 checks.go:376] validating the presence of executable ethtool
I0330 17:40:20.907598     184 checks.go:376] validating the presence of executable socat
I0330 17:40:20.907745     184 checks.go:376] validating the presence of executable tc
I0330 17:40:20.907788     184 checks.go:376] validating the presence of executable touch
I0330 17:40:20.907840     184 checks.go:520] running all checks
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0330 17:40:20.931562     184 checks.go:406] checking whether the given node name is reachable using net.LookupHost
I0330 17:40:20.931914     184 checks.go:618] validating kubelet version
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
... skipping 387 lines ...
time="17:42:36" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-regular-lb]"

kinder-regular-control-plane-2:$ Preparing /kind/kubeadm.conf
time="17:42:37" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-regular-control-plane-2]"
time="17:42:37" level=debug msg="Running: [docker exec kinder-regular-control-plane-2 kubeadm version -o=short]"
time="17:42:38" level=debug msg="Preparing kubeadm config v1beta2 (kubeadm version 1.19.0-alpha.1.127+fce286e227b022)"
time="17:42:38" level=debug msg="generated config:\napiVersion: kubeadm.k8s.io/v1beta2\ncontrolPlane:\n  localAPIEndpoint:\n    advertiseAddress: 172.17.0.2\n    bindPort: 6443\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.7:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.2\n"
time="17:42:38" level=debug msg="Running: [docker cp /tmp/kinder-regular-control-plane-2-031213060 kinder-regular-control-plane-2:/kind/kubeadm.conf]"

kinder-regular-control-plane-2:$ kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables
time="17:42:39" level=debug msg="Running: [docker exec kinder-regular-control-plane-2 kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]"
W0330 17:42:40.359843     338 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
I0330 17:42:40.359932     338 join.go:371] [preflight] found NodeName empty; using OS hostname as NodeName
... skipping 17 lines ...
I0330 17:42:40.419480     338 checks.go:376] validating the presence of executable ebtables
I0330 17:42:40.419521     338 checks.go:376] validating the presence of executable ethtool
I0330 17:42:40.419557     338 checks.go:376] validating the presence of executable socat
I0330 17:42:40.419604     338 checks.go:376] validating the presence of executable tc
I0330 17:42:40.419638     338 checks.go:376] validating the presence of executable touch
I0330 17:42:40.419677     338 checks.go:520] running all checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0330 17:42:40.444556     338 checks.go:406] checking whether the given node name is reachable using net.LookupHost
I0330 17:42:40.444929     338 checks.go:618] validating kubelet version
I0330 17:42:40.773887     338 checks.go:128] validating if the service is enabled and active
I0330 17:42:40.796171     338 checks.go:201] validating availability of port 10250
I0330 17:42:40.796647     338 checks.go:432] validating if the connectivity type is via proxy or direct
I0330 17:42:40.796726     338 join.go:441] [preflight] Discovering cluster-info
... skipping 136 lines ...
I0330 17:43:06.927967     338 local.go:139] Adding etcd member: https://172.17.0.2:2380
[etcd] Announced new etcd member joining to the existing etcd cluster
I0330 17:43:07.001149     338 local.go:145] Updated etcd member list: [{kinder-regular-control-plane-1 https://172.17.0.4:2380} {kinder-regular-control-plane-2 https://172.17.0.2:2380}]
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
I0330 17:43:07.003177     338 etcd.go:500] [etcd] attempting to see if all cluster endpoints ([https://172.17.0.4:2379 https://172.17.0.2:2379]) are available 1/8
{"level":"warn","ts":"2020-03-30T17:43:18.048Z","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://172.17.0.2:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
I0330 17:43:18.049091     338 etcd.go:480] Failed to get etcd status for https://172.17.0.2:2379: context deadline exceeded
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0330 17:43:23.960801     338 round_trippers.go:443] POST https://172.17.0.7:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 409 Conflict in 5780 milliseconds
I0330 17:43:24.084948     338 round_trippers.go:443] GET https://172.17.0.7:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s 200 OK in 115 milliseconds
I0330 17:43:24.111943     338 round_trippers.go:443] PUT https://172.17.0.7:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s 200 OK in 25 milliseconds
I0330 17:43:24.205157     338 round_trippers.go:443] POST https://172.17.0.7:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 409 Conflict in 85 milliseconds
I0330 17:43:24.285746     338 round_trippers.go:443] PUT https://172.17.0.7:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/kubeadm:nodes-kubeadm-config?timeout=10s 200 OK in 78 milliseconds
... skipping 87 lines ...
time="17:43:59" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-regular-lb]"

kinder-regular-control-plane-3:$ Preparing /kind/kubeadm.conf
time="17:44:00" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-regular-control-plane-3]"
time="17:44:00" level=debug msg="Running: [docker exec kinder-regular-control-plane-3 kubeadm version -o=short]"
time="17:44:02" level=debug msg="Preparing kubeadm config v1beta2 (kubeadm version 1.19.0-alpha.1.127+fce286e227b022)"
time="17:44:02" level=debug msg="generated config:\napiVersion: kubeadm.k8s.io/v1beta2\ncontrolPlane:\n  localAPIEndpoint:\n    advertiseAddress: 172.17.0.3\n    bindPort: 6443\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.7:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.3\n"
time="17:44:02" level=debug msg="Running: [docker cp /tmp/kinder-regular-control-plane-3-110590014 kinder-regular-control-plane-3:/kind/kubeadm.conf]"

kinder-regular-control-plane-3:$ kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables
time="17:44:03" level=debug msg="Running: [docker exec kinder-regular-control-plane-3 kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]"
W0330 17:44:04.582588     429 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
I0330 17:44:04.582827     429 join.go:371] [preflight] found NodeName empty; using OS hostname as NodeName
... skipping 17 lines ...
I0330 17:44:04.719454     429 checks.go:376] validating the presence of executable ebtables
I0330 17:44:04.719523     429 checks.go:376] validating the presence of executable ethtool
I0330 17:44:04.719602     429 checks.go:376] validating the presence of executable socat
I0330 17:44:04.719640     429 checks.go:376] validating the presence of executable tc
I0330 17:44:04.719668     429 checks.go:376] validating the presence of executable touch
I0330 17:44:04.719716     429 checks.go:520] running all checks
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0330 17:44:04.885692     429 checks.go:406] checking whether the given node name is reachable using net.LookupHost
I0330 17:44:04.886149     429 checks.go:618] validating kubelet version
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
... skipping 142 lines ...
I0330 17:44:34.656243     429 local.go:139] Adding etcd member: https://172.17.0.3:2380
I0330 17:44:34.776286     429 local.go:145] Updated etcd member list: [{kinder-regular-control-plane-1 https://172.17.0.4:2380} {kinder-regular-control-plane-3 https://172.17.0.3:2380} {kinder-regular-control-plane-2 https://172.17.0.2:2380}]
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
I0330 17:44:34.777683     429 etcd.go:500] [etcd] attempting to see if all cluster endpoints ([https://172.17.0.4:2379 https://172.17.0.2:2379 https://172.17.0.3:2379]) are available 1/8
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
{"level":"warn","ts":"2020-03-30T17:45:01.631Z","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://172.17.0.3:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
I0330 17:45:01.631845     429 etcd.go:480] Failed to get etcd status for https://172.17.0.3:2379: context deadline exceeded
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0330 17:45:01.870630     429 round_trippers.go:443] POST https://172.17.0.7:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 409 Conflict in 77 milliseconds
I0330 17:45:01.915010     429 round_trippers.go:443] GET https://172.17.0.7:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s 200 OK in 43 milliseconds
I0330 17:45:01.951694     429 round_trippers.go:443] PUT https://172.17.0.7:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s 200 OK in 35 milliseconds
I0330 17:45:01.972447     429 round_trippers.go:443] POST https://172.17.0.7:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 409 Conflict in 20 milliseconds
I0330 17:45:01.982694     429 round_trippers.go:443] PUT https://172.17.0.7:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/kubeadm:nodes-kubeadm-config?timeout=10s 200 OK in 9 milliseconds
... skipping 64 lines ...
time="17:45:23" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-regular-lb]"

kinder-regular-worker-1:$ Preparing /kind/kubeadm.conf
time="17:45:24" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-regular-worker-1]"
time="17:45:24" level=debug msg="Running: [docker exec kinder-regular-worker-1 kubeadm version -o=short]"
time="17:45:25" level=debug msg="Preparing kubeadm config v1beta2 (kubeadm version 1.19.0-alpha.1.127+fce286e227b022)"
time="17:45:25" level=debug msg="generated config:\napiVersion: kubeadm.k8s.io/v1beta2\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.7:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.5\n"
time="17:45:25" level=debug msg="Running: [docker cp /tmp/kinder-regular-worker-1-536651296 kinder-regular-worker-1:/kind/kubeadm.conf]"

kinder-regular-worker-1:$ kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables
time="17:45:27" level=debug msg="Running: [docker exec kinder-regular-worker-1 kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]"
W0330 17:45:28.024067     500 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
I0330 17:45:28.024151     500 join.go:371] [preflight] found NodeName empty; using OS hostname as NodeName
... skipping 17 lines ...
I0330 17:45:28.067782     500 checks.go:376] validating the presence of executable ebtables
I0330 17:45:28.067816     500 checks.go:376] validating the presence of executable ethtool
I0330 17:45:28.067842     500 checks.go:376] validating the presence of executable socat
I0330 17:45:28.067878     500 checks.go:376] validating the presence of executable tc
I0330 17:45:28.067909     500 checks.go:376] validating the presence of executable touch
I0330 17:45:28.067971     500 checks.go:520] running all checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0330 17:45:28.096809     500 checks.go:406] checking whether the given node name is reachable using net.LookupHost
I0330 17:45:28.097126     500 checks.go:618] validating kubelet version
I0330 17:45:28.357499     500 checks.go:128] validating if the service is enabled and active
I0330 17:45:28.410765     500 checks.go:201] validating availability of port 10250
I0330 17:45:28.411134     500 checks.go:286] validating the existence of file /etc/kubernetes/pki/ca.crt
I0330 17:45:28.411172     500 checks.go:432] validating if the connectivity type is via proxy or direct
... skipping 93 lines ...
time="17:46:14" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-regular-lb]"

kinder-regular-worker-2:$ Preparing /kind/kubeadm.conf
time="17:46:14" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-regular-worker-2]"
time="17:46:14" level=debug msg="Running: [docker exec kinder-regular-worker-2 kubeadm version -o=short]"
time="17:46:16" level=debug msg="Preparing kubeadm config v1beta2 (kubeadm version 1.19.0-alpha.1.127+fce286e227b022)"
time="17:46:16" level=debug msg="generated config:\napiVersion: kubeadm.k8s.io/v1beta2\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.7:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.6\n"
time="17:46:16" level=debug msg="Running: [docker cp /tmp/kinder-regular-worker-2-588190463 kinder-regular-worker-2:/kind/kubeadm.conf]"

kinder-regular-worker-2:$ kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables
time="17:46:17" level=debug msg="Running: [docker exec kinder-regular-worker-2 kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]"
W0330 17:46:18.511913     562 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
... skipping 17 lines ...
I0330 17:46:18.598641     562 checks.go:376] validating the presence of executable ebtables
I0330 17:46:18.598693     562 checks.go:376] validating the presence of executable ethtool
I0330 17:46:18.598725     562 checks.go:376] validating the presence of executable socat
I0330 17:46:18.598763     562 checks.go:376] validating the presence of executable tc
I0330 17:46:18.598804     562 checks.go:376] validating the presence of executable touch
I0330 17:46:18.598901     562 checks.go:520] running all checks
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0330 17:46:18.656185     562 checks.go:406] checking whether the given node name is reachable using net.LookupHost
I0330 17:46:18.656555     562 checks.go:618] validating kubelet version
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
... skipping 322 lines ...
I0330 17:52:34.667104   10407 local.go:120] [etcd] Updated etcd member list: [{kinder-regular-control-plane-3 https://172.17.0.3:2380} {kinder-regular-control-plane-2 https://172.17.0.2:2380}]
I0330 17:52:34.667169   10407 cleanupnode.go:57] [reset] Getting init system
[reset] Unmounting mounted directories in "/var/lib/kubelet"
I0330 17:52:35.020939   10407 cleanupnode.go:79] [reset] Removing Kubernetes-managed containers
make: Leaving directory '/home/prow/go/src/k8s.io/kubernetes'
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
W0330 17:54:42.438208   10407 cleanupnode.go:81] [reset] Failed to remove containers: failed to stop running pod e98ce352b245eda35bd96df7a1d861d64fb3ac2bda63a5c77673a6ad226dfca3: output: time="2020-03-30T17:53:38Z" level=fatal msg="stopping the pod sandbox \"e98ce352b245eda35bd96df7a1d861d64fb3ac2bda63a5c77673a6ad226dfca3\" failed: rpc error: code = Unknown desc = failed to destroy network for sandbox \"e98ce352b245eda35bd96df7a1d861d64fb3ac2bda63a5c77673a6ad226dfca3\": error getting ClusterInformation: the server was unable to return a response in the time allotted, but may still be processing the request (get ClusterInformations.crd.projectcalico.org default)"
, error: exit status 1
I0330 17:54:42.438293   10407 cleanupnode.go:87] [reset] Removing contents from the config and pki directories
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
I0330 17:54:42.449049   10407 reset.go:218] [reset] Deleting contents of /var/lib/etcd
I0330 17:54:42.452360   10407 reset.go:218] [reset] Deleting contents of /var/lib/kubelet
I0330 17:54:42.463804   10407 reset.go:218] [reset] Deleting contents of /var/lib/dockershim
... skipping 90 lines ...
I0330 17:54:51.272238    5740 round_trippers.go:443] GET https://172.17.0.7:6443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Detcd%2Ctier%3Dcontrol-plane 200 OK in 12 milliseconds
I0330 17:54:51.274633    5740 etcd.go:102] etcd endpoints read from pods: https://172.17.0.4:2379,https://172.17.0.2:2379,https://172.17.0.3:2379
I0330 17:54:51.300555    5740 etcd.go:250] etcd endpoints read from etcd: https://172.17.0.3:2379
I0330 17:54:51.300614    5740 etcd.go:120] update etcd endpoints: https://172.17.0.3:2379
I0330 17:54:51.300628    5740 local.go:109] [etcd] get the member id from peer: https://172.17.0.3:2380
I0330 17:54:51.313701    5740 local.go:115] [etcd] removing etcd member: https://172.17.0.3:2380, id: 7514245402007008978
{"level":"warn","ts":"2020-03-30T17:54:51.339Z","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"endpoint://client-ff91caee-195b-4165-a62e-b2f0ac8bf5bb/172.17.0.3:2379","attempt":0,"error":"rpc error: code = Unknown desc = etcdserver: re-configuration failed due to not enough started members"}
I0330 17:54:51.339349    5740 etcd.go:329] Failed to remove etcd member: etcdserver: re-configuration failed due to not enough started members
{"level":"warn","ts":"2020-03-30T17:54:51.394Z","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"endpoint://client-ff91caee-195b-4165-a62e-b2f0ac8bf5bb/172.17.0.3:2379","attempt":0,"error":"rpc error: code = Unknown desc = etcdserver: re-configuration failed due to not enough started members"}
I0330 17:54:51.394194    5740 etcd.go:329] Failed to remove etcd member: etcdserver: re-configuration failed due to not enough started members
{"level":"warn","ts":"2020-03-30T17:54:51.505Z","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"endpoint://client-ff91caee-195b-4165-a62e-b2f0ac8bf5bb/172.17.0.3:2379","attempt":0,"error":"rpc error: code = Unknown desc = etcdserver: re-configuration failed due to not enough started members"}
I0330 17:54:51.505601    5740 etcd.go:329] Failed to remove etcd member: etcdserver: re-configuration failed due to not enough started members
{"level":"warn","ts":"2020-03-30T17:54:51.720Z","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"endpoint://client-ff91caee-195b-4165-a62e-b2f0ac8bf5bb/172.17.0.3:2379","attempt":0,"error":"rpc error: code = Unknown desc = etcdserver: re-configuration failed due to not enough started members"}
I0330 17:54:51.720554    5740 etcd.go:329] Failed to remove etcd member: etcdserver: re-configuration failed due to not enough started members
{"level":"warn","ts":"2020-03-30T17:54:52.140Z","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"endpoint://client-ff91caee-195b-4165-a62e-b2f0ac8bf5bb/172.17.0.3:2379","attempt":0,"error":"rpc error: code = Unknown desc = etcdserver: re-configuration failed due to not enough started members"}
I0330 17:54:52.140270    5740 etcd.go:329] Failed to remove etcd member: etcdserver: re-configuration failed due to not enough started members
{"level":"warn","ts":"2020-03-30T17:54:52.976Z","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"endpoint://client-ff91caee-195b-4165-a62e-b2f0ac8bf5bb/172.17.0.3:2379","attempt":0,"error":"rpc error: code = Unknown desc = etcdserver: re-configuration failed due to not enough started members"}
I0330 17:54:52.976296    5740 etcd.go:329] Failed to remove etcd member: etcdserver: re-configuration failed due to not enough started members
{"level":"warn","ts":"2020-03-30T17:54:54.688Z","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"endpoint://client-ff91caee-195b-4165-a62e-b2f0ac8bf5bb/172.17.0.3:2379","attempt":0,"error":"rpc error: code = Unknown desc = etcdserver: re-configuration failed due to not enough started members"}
I0330 17:54:54.688401    5740 etcd.go:329] Failed to remove etcd member: etcdserver: re-configuration failed due to not enough started members
{"level":"warn","ts":"2020-03-30T17:54:57.911Z","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"endpoint://client-ff91caee-195b-4165-a62e-b2f0ac8bf5bb/172.17.0.3:2379","attempt":0,"error":"rpc error: code = Unknown desc = etcdserver: re-configuration failed due to not enough started members"}
I0330 17:54:57.911405    5740 etcd.go:329] Failed to remove etcd member: etcdserver: re-configuration failed due to not enough started members
{"level":"warn","ts":"2020-03-30T17:55:04.418Z","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"endpoint://client-ff91caee-195b-4165-a62e-b2f0ac8bf5bb/172.17.0.3:2379","attempt":0,"error":"rpc error: code = Unknown desc = etcdserver: re-configuration failed due to not enough started members"}
I0330 17:55:04.418199    5740 etcd.go:329] Failed to remove etcd member: etcdserver: re-configuration failed due to not enough started members
{"level":"warn","ts":"2020-03-30T17:55:17.344Z","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"endpoint://client-ff91caee-195b-4165-a62e-b2f0ac8bf5bb/172.17.0.3:2379","attempt":0,"error":"rpc error: code = Unknown desc = etcdserver: re-configuration failed due to not enough started members"}
I0330 17:55:17.344829    5740 etcd.go:329] Failed to remove etcd member: etcdserver: re-configuration failed due to not enough started members
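The timestamps on the retries above show a doubling cadence (~50ms, 110ms, 210ms, ..., ~26s between attempts). A minimal sketch of that exponential-backoff pattern — not kubeadm's actual implementation, just the shape visible in the log:

```shell
# Hedged sketch of exponential backoff with a doubling delay, mirroring the
# retry intervals seen in the log. `retry_with_backoff` and its arguments are
# illustrative names, not part of kubeadm.
retry_with_backoff() {
  # usage: retry_with_backoff <max_attempts> <command...>
  max=$1; shift
  delay_ms=50
  n=0
  while [ "$n" -lt "$max" ]; do
    "$@" && return 0                      # success: stop retrying
    # sleep for delay_ms milliseconds, then double the delay
    sleep "$(awk -v ms="$delay_ms" 'BEGIN { printf "%.3f", ms / 1000 }')"
    delay_ms=$((delay_ms * 2))
    n=$((n + 1))
  done
  return 1    # exhausted: the caller gives up, as kubeadm reset does above
}
```

After the budget is exhausted, kubeadm stops retrying and falls back to the manual-cleanup warning seen further down.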
[reset] Stopping the kubelet service
{"level":"warn","ts":"2020-03-30T17:55:43.716Z","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"endpoint://client-ff91caee-195b-4165-a62e-b2f0ac8bf5bb/172.17.0.3:2379","attempt":0,"error":"rpc error: code = Unknown desc = etcdserver: re-configuration failed due to not enough started members"}
I0330 17:55:43.717057    5740 etcd.go:329] Failed to remove etcd member: etcdserver: re-configuration failed due to not enough started members
W0330 17:55:43.717119    5740 removeetcdmember.go:61] [reset] failed to remove etcd member: etcdserver: re-configuration failed due to not enough started members
.Please manually remove this etcd member using etcdctl
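The reset output above falls back to asking for manual cleanup with `etcdctl`. A hedged sketch of what that invocation could look like, assuming the etcd v3 API and kubeadm's default certificate paths; note that kubeadm logged the member ID in decimal, while `etcdctl` prints and accepts member IDs in hex:

```shell
# Assumptions: etcd v3 API, kubeadm's default etcd cert locations.
# Convert the decimal member ID from the log to the hex form etcdctl uses.
MEMBER_ID_HEX=$(printf '%x' 7514245402007008978)
echo "member id (hex): $MEMBER_ID_HEX"

# Printed rather than executed here, since it needs a live etcd endpoint:
echo "ETCDCTL_API=3 etcdctl \\
  --endpoints=https://172.17.0.3:2379 \\
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \\
  --cert=/etc/kubernetes/pki/etcd/peer.crt \\
  --key=/etc/kubernetes/pki/etcd/peer.key \\
  member remove $MEMBER_ID_HEX"
```

In this run the cleanup is moot anyway, since the whole cluster is deleted a few lines later.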
I0330 17:55:43.717144    5740 cleanupnode.go:57] [reset] Getting init system
[reset] Unmounting mounted directories in "/var/lib/kubelet"
I0330 17:55:43.849899    5740 cleanupnode.go:79] [reset] Removing Kubernetes-managed containers
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
I0330 17:55:46.928004    5740 cleanupnode.go:87] [reset] Removing contents from the config and pki directories
... skipping 90 lines ...
Deleting cluster "kinder-regular" ...
time="17:55:50" level=debug msg="Running: /usr/bin/docker [docker ps -q -a --no-trunc --filter label=io.k8s.sigs.kind.cluster --format {{.Names}}\\t{{.Label \"io.k8s.sigs.kind.cluster\"}} --filter label=io.k8s.sigs.kind.cluster=kinder-regular]"
time="17:55:50" level=debug msg="Running: /usr/bin/docker [docker rm -f -v kinder-regular-lb kinder-regular-worker-2 kinder-regular-worker-1 kinder-regular-control-plane-2 kinder-regular-control-plane-3 kinder-regular-control-plane-1]"
 completed!

Ran 10 of 11 tasks in 0.000 seconds
FAIL! -- 9 tasks Passed | 1 Failed | 1 Skipped

see junit-runner.xml and task logs files for more details

Error: failed executing the workflow
+ EXIT_VALUE=1
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up binfmt_misc ...
================================================================================
... skipping 2 lines ...