Result: FAILURE
Tests: 1 failed / 9 succeeded
Started: 2020-03-18 23:17
Elapsed: 30m27s
Revision: release-1.16
Resultstore: https://source.cloud.google.com/results/invocations/862ae584-4c56-44d8-900b-fe0d5d3a11db/targets/test
Uploader: crier

Test Failures


task-06-e2e-kubeadm 5m0s

timeout: task did not complete within 5m0s as expected
				from junit_runner.xml
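
For context, a five-minute task limit like the one reported above is typically enforced with a wall-clock timeout wrapper around the task. A minimal sketch of that pattern using GNU coreutils `timeout` (the script name here is a hypothetical stand-in, not the actual kinder task entry point):

```shell
# Hypothetical sketch of a 5m0s task timeout; the script name is illustrative.
# GNU coreutils `timeout` kills the command and exits with status 124
# when the wall-clock limit is exceeded.
timeout 300 ./task-06-e2e-kubeadm.sh
status=$?
if [ "$status" -eq 124 ]; then
    echo "timeout: task did not complete within 5m0s"
fi
```

A status of 124 distinguishes a timeout from the task's own failure exit codes, which is how the runner can report "timeout" rather than a generic failure.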



Passed: 9
Skipped: 1

Error lines from build-log.txt

... skipping 237 lines ...
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /kind/systemd/kubelet.service.
Created symlink /etc/systemd/system/kubelet.service → /kind/systemd/kubelet.service.
time="23:26:14" level=debug msg="Running: [docker exec kind-build-b7b37325-b23f-47c1-9bb2-0331a9bc0af3 mkdir -p /etc/systemd/system/kubelet.service.d]"
time="23:26:15" level=info msg="Adding /etc/systemd/system/kubelet.service.d/10-kubeadm.conf to the image"
time="23:26:15" level=debug msg="Running: [docker exec kind-build-b7b37325-b23f-47c1-9bb2-0331a9bc0af3 cp /alter/bits/systemd/10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf]"
time="23:26:17" level=debug msg="Running: [docker exec kind-build-b7b37325-b23f-47c1-9bb2-0331a9bc0af3 chown -R root:root /etc/systemd/system/kubelet.service.d/10-kubeadm.conf]"
time="23:26:19" level=debug msg="Running: [docker exec kind-build-b7b37325-b23f-47c1-9bb2-0331a9bc0af3 /bin/sh -c echo \"KUBELET_EXTRA_ARGS=--fail-swap-on=false\" >> /etc/default/kubelet]"
time="23:26:20" level=debug msg="Running: [docker exec kind-build-b7b37325-b23f-47c1-9bb2-0331a9bc0af3 /bin/sh -c which docker || true]"
time="23:26:22" level=info msg="Detected containerd as container runtime"
time="23:26:22" level=info msg="Pre loading images ..."
time="23:26:22" level=debug msg="Running: [docker exec kind-build-b7b37325-b23f-47c1-9bb2-0331a9bc0af3 mkdir -p /kind/images]"
time="23:26:23" level=debug msg="Running: [docker exec kind-build-b7b37325-b23f-47c1-9bb2-0331a9bc0af3 bash -c containerd & find /kind/images -name *.tar -print0 | xargs -r -0 -n 1 -P $(nproc) ctr --namespace=k8s.io images import --no-unpack && kill %1 && rm -rf /kind/images/*]"
time="2020-03-18T23:26:25.192671315Z" level=info msg="starting containerd" revision=7af311b4200b464a79c340b4e3a2799f8906ee8d version=v1.3.0-20-g7af311b4
time="2020-03-18T23:26:25.268985627Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
time="2020-03-18T23:26:25.270148835Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
time="2020-03-18T23:26:25.270310333Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
time="2020-03-18T23:26:25.270426976Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
time="2020-03-18T23:26:25.271106418Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
time="2020-03-18T23:26:25.272244252Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
time="2020-03-18T23:26:25.273758788Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
time="2020-03-18T23:26:25.273785629Z" level=info msg="metadata content store policy set" policy=shared
time="2020-03-18T23:26:25.378728975Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
time="2020-03-18T23:26:25.378802701Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
time="2020-03-18T23:26:25.378882942Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
time="2020-03-18T23:26:25.378909482Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
time="2020-03-18T23:26:25.378931188Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
... skipping 20 lines ...
time="2020-03-18T23:26:25.382548024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
time="2020-03-18T23:26:25.382582461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
time="2020-03-18T23:26:25.382795657Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false} UntrustedWorkloadRuntime:{Type: Engine: PodAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false} Runtimes:map[runc:{Type:io.containerd.runc.v1 Engine: PodAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false} test-handler:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false}] NoPivot:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate:} Registry:{Mirrors:map[docker.io:{Endpoints:[https://registry-1.docker.io]}] Configs:map[] Auths:map[]} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SandboxImage:k8s.gcr.io/pause:3.1 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
time="2020-03-18T23:26:25.382865491Z" level=warning msg="`default_runtime` is deprecated, please use `default_runtime_name` to reference the default configuration you have defined in `runtimes`"
time="2020-03-18T23:26:25.382914055Z" level=info msg="Connect containerd service"
time="2020-03-18T23:26:25.383158975Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
time="2020-03-18T23:26:25.383429519Z" level=error msg="Failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
time="2020-03-18T23:26:25.383728420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
time="2020-03-18T23:26:25.384131847Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
time="2020-03-18T23:26:25.384333531Z" level=info msg=serving... address=/run/containerd/containerd.sock
time="2020-03-18T23:26:25.384371321Z" level=info msg="containerd successfully booted in 0.197341s"
time="2020-03-18T23:26:25.384674761Z" level=info msg="Start subscribing containerd event"
time="2020-03-18T23:26:25.384782006Z" level=info msg="Start recovering state"
time="2020-03-18T23:26:25.384929925Z" level=info msg="Start event monitor"
time="2020-03-18T23:26:25.384991895Z" level=info msg="Start snapshots syncer"
time="2020-03-18T23:26:25.385006711Z" level=info msg="Start streaming server"
time="2020-03-18T23:26:26.338028629Z" level=error msg="(*service).Write failed" error="rpc error: code = Unavailable desc = ref k8s.io/1/tar-2d4b6f0f2e93b3cfece543efaa894a4f6f97214013219bd8a4c18c42bd4b677a/VERSION locked: unavailable" ref=tar-2d4b6f0f2e93b3cfece543efaa894a4f6f97214013219bd8a4c18c42bd4b677a/VERSION total=3
time="2020-03-18T23:26:26.548676899Z" level=error msg="(*service).Write failed" error="rpc error: code = Unavailable desc = ref k8s.io/1/tar-2d4b6f0f2e93b3cfece543efaa894a4f6f97214013219bd8a4c18c42bd4b677a/VERSION locked: unavailable" ref=tar-2d4b6f0f2e93b3cfece543efaa894a4f6f97214013219bd8a4c18c42bd4b677a/VERSION total=3
time="2020-03-18T23:26:27.108075003Z" level=error msg="(*service).Write failed" error="rpc error: code = Unavailable desc = ref k8s.io/1/tar-2d4b6f0f2e93b3cfece543efaa894a4f6f97214013219bd8a4c18c42bd4b677a/VERSION locked: unavailable" ref=tar-2d4b6f0f2e93b3cfece543efaa894a4f6f97214013219bd8a4c18c42bd4b677a/VERSION total=3
time="2020-03-18T23:26:27.946455558Z" level=error msg="(*service).Write failed" error="rpc error: code = Unavailable desc = ref k8s.io/1/tar-2d4b6f0f2e93b3cfece543efaa894a4f6f97214013219bd8a4c18c42bd4b677a/VERSION locked: unavailable" ref=tar-2d4b6f0f2e93b3cfece543efaa894a4f6f97214013219bd8a4c18c42bd4b677a/VERSION total=3
time="2020-03-18T23:26:28.852239395Z" level=error msg="(*service).Write failed" error="rpc error: code = Unavailable desc = ref k8s.io/1/tar-2d4b6f0f2e93b3cfece543efaa894a4f6f97214013219bd8a4c18c42bd4b677a/layer.tar locked: unavailable" ref=tar-2d4b6f0f2e93b3cfece543efaa894a4f6f97214013219bd8a4c18c42bd4b677a/layer.tar total=43908096
time="2020-03-18T23:26:29.266438056Z" level=error msg="(*service).Write failed" error="rpc error: code = Unavailable desc = ref k8s.io/1/tar-2d4b6f0f2e93b3cfece543efaa894a4f6f97214013219bd8a4c18c42bd4b677a/layer.tar locked: unavailable" ref=tar-2d4b6f0f2e93b3cfece543efaa894a4f6f97214013219bd8a4c18c42bd4b677a/layer.tar total=43908096
time="2020-03-18T23:26:29.733809514Z" level=error msg="(*service).Write failed" error="rpc error: code = Unavailable desc = ref k8s.io/1/tar-2d4b6f0f2e93b3cfece543efaa894a4f6f97214013219bd8a4c18c42bd4b677a/layer.tar locked: unavailable" ref=tar-2d4b6f0f2e93b3cfece543efaa894a4f6f97214013219bd8a4c18c42bd4b677a/layer.tar total=43908096
time="2020-03-18T23:26:30.663122029Z" level=error msg="(*service).Write failed" error="rpc error: code = Unavailable desc = ref k8s.io/1/tar-2d4b6f0f2e93b3cfece543efaa894a4f6f97214013219bd8a4c18c42bd4b677a/layer.tar locked: unavailable" ref=tar-2d4b6f0f2e93b3cfece543efaa894a4f6f97214013219bd8a4c18c42bd4b677a/layer.tar total=43908096
time="2020-03-18T23:26:31.039703627Z" level=error msg="(*service).Write failed" error="rpc error: code = Unavailable desc = ref k8s.io/1/tar-2d4b6f0f2e93b3cfece543efaa894a4f6f97214013219bd8a4c18c42bd4b677a/layer.tar locked: unavailable" ref=tar-2d4b6f0f2e93b3cfece543efaa894a4f6f97214013219bd8a4c18c42bd4b677a/layer.tar total=43908096
time="2020-03-18T23:26:31.638595066Z" level=error msg="(*service).Write failed" error="rpc error: code = Unavailable desc = ref k8s.io/1/tar-2d4b6f0f2e93b3cfece543efaa894a4f6f97214013219bd8a4c18c42bd4b677a/layer.tar locked: unavailable" ref=tar-2d4b6f0f2e93b3cfece543efaa894a4f6f97214013219bd8a4c18c42bd4b677a/layer.tar total=43908096
time="2020-03-18T23:26:32.366342763Z" level=error msg="(*service).Write failed" error="rpc error: code = Unavailable desc = ref k8s.io/1/tar-2d4b6f0f2e93b3cfece543efaa894a4f6f97214013219bd8a4c18c42bd4b677a/layer.tar locked: unavailable" ref=tar-2d4b6f0f2e93b3cfece543efaa894a4f6f97214013219bd8a4c18c42bd4b677a/layer.tar total=43908096
time="2020-03-18T23:26:32.896121064Z" level=error msg="(*service).Write failed" error="rpc error: code = Unavailable desc = ref k8s.io/1/tar-2d4b6f0f2e93b3cfece543efaa894a4f6f97214013219bd8a4c18c42bd4b677a/layer.tar locked: unavailable" ref=tar-2d4b6f0f2e93b3cfece543efaa894a4f6f97214013219bd8a4c18c42bd4b677a/layer.tar total=43908096
time="2020-03-18T23:26:33.254049194Z" level=error msg="(*service).Write failed" error="rpc error: code = Unavailable desc = ref k8s.io/1/tar-2d4b6f0f2e93b3cfece543efaa894a4f6f97214013219bd8a4c18c42bd4b677a/layer.tar locked: unavailable" ref=tar-2d4b6f0f2e93b3cfece543efaa894a4f6f97214013219bd8a4c18c42bd4b677a/layer.tar total=43908096
time="2020-03-18T23:26:33.987196404Z" level=error msg="(*service).Write failed" error="rpc error: code = Unavailable desc = ref k8s.io/1/tar-2d4b6f0f2e93b3cfece543efaa894a4f6f97214013219bd8a4c18c42bd4b677a/layer.tar locked: unavailable" ref=tar-2d4b6f0f2e93b3cfece543efaa894a4f6f97214013219bd8a4c18c42bd4b677a/layer.tar total=43908096
time="2020-03-18T23:26:34.538945118Z" level=error msg="(*service).Write failed" error="rpc error: code = Unavailable desc = ref k8s.io/1/tar-2d4b6f0f2e93b3cfece543efaa894a4f6f97214013219bd8a4c18c42bd4b677a/layer.tar locked: unavailable" ref=tar-2d4b6f0f2e93b3cfece543efaa894a4f6f97214013219bd8a4c18c42bd4b677a/layer.tar total=43908096
time="2020-03-18T23:26:35.161395389Z" level=error msg="(*service).Write failed" error="rpc error: code = Unavailable desc = ref k8s.io/1/tar-2d4b6f0f2e93b3cfece543efaa894a4f6f97214013219bd8a4c18c42bd4b677a/layer.tar locked: unavailable" ref=tar-2d4b6f0f2e93b3cfece543efaa894a4f6f97214013219bd8a4c18c42bd4b677a/layer.tar total=43908096
time="2020-03-18T23:26:36.261815632Z" level=error msg="(*service).Write failed" error="rpc error: code = Unavailable desc = ref k8s.io/1/tar-2d4b6f0f2e93b3cfece543efaa894a4f6f97214013219bd8a4c18c42bd4b677a/layer.tar locked: unavailable" ref=tar-2d4b6f0f2e93b3cfece543efaa894a4f6f97214013219bd8a4c18c42bd4b677a/layer.tar total=43908096
time="2020-03-18T23:26:37.222899094Z" level=error msg="(*service).Write failed" error="rpc error: code = Unavailable desc = ref k8s.io/1/tar-2d4b6f0f2e93b3cfece543efaa894a4f6f97214013219bd8a4c18c42bd4b677a/layer.tar locked: unavailable" ref=tar-2d4b6f0f2e93b3cfece543efaa894a4f6f97214013219bd8a4c18c42bd4b677a/layer.tar total=43908096
time="2020-03-18T23:26:38.352254158Z" level=error msg="(*service).Write failed" error="rpc error: code = Unavailable desc = ref k8s.io/1/tar-2d4b6f0f2e93b3cfece543efaa894a4f6f97214013219bd8a4c18c42bd4b677a/layer.tar locked: unavailable" ref=tar-2d4b6f0f2e93b3cfece543efaa894a4f6f97214013219bd8a4c18c42bd4b677a/layer.tar total=43908096
time="2020-03-18T23:26:43.076530297Z" level=info msg="ImageCreate event &ImageCreate{Name:k8s.gcr.io/kube-proxy:v1.16.9-beta.0.7_5116ee4b159565,Labels:map[string]string{},XXX_unrecognized:[],}"
time="2020-03-18T23:26:43.158683953Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5aebafdc963ffbc950b7041f18693266f7a668e93e532f5b07965432bb99fca2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2020-03-18T23:26:43.159333864Z" level=info msg="ImageUpdate event &ImageUpdate{Name:k8s.gcr.io/kube-proxy:v1.16.9-beta.0.7_5116ee4b159565,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2020-03-18T23:26:48.204690004Z" level=error msg="(*service).Write failed" error="rpc error: code = Unavailable desc = ref k8s.io/1/tar-repositories locked: unavailable" ref=tar-repositories total=141
time="2020-03-18T23:26:48.590311303Z" level=info msg="ImageCreate event &ImageCreate{Name:k8s.gcr.io/kube-controller-manager:v1.16.9-beta.0.7_5116ee4b159565,Labels:map[string]string{},XXX_unrecognized:[],}"
time="2020-03-18T23:26:48.627592058Z" level=info msg="ImageCreate event &ImageCreate{Name:k8s.gcr.io/kube-scheduler:v1.16.9-beta.0.7_5116ee4b159565,Labels:map[string]string{},XXX_unrecognized:[],}"
time="2020-03-18T23:26:48.663457453Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:393530499efe497a2c87556307d158028e1377799458f2bc22b03378778aaef5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2020-03-18T23:26:48.664830964Z" level=info msg="ImageUpdate event &ImageUpdate{Name:k8s.gcr.io/kube-controller-manager:v1.16.9-beta.0.7_5116ee4b159565,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2020-03-18T23:26:48.666563240Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:14019112afa8d560378a108a2fef45e50c3a729277f9de7d77d3a194d0ff221f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2020-03-18T23:26:48.668673523Z" level=info msg="ImageUpdate event &ImageUpdate{Name:k8s.gcr.io/kube-scheduler:v1.16.9-beta.0.7_5116ee4b159565,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
... skipping 104 lines ...
time="23:29:02" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-regular-lb]"

kinder-regular-control-plane-1:$ Preparing /kind/kubeadm.conf
time="23:29:02" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-regular-control-plane-1]"
time="23:29:03" level=debug msg="Running: [docker exec kinder-regular-control-plane-1 kubeadm version -o=short]"
time="23:29:04" level=debug msg="Preparing kubeadm config v1beta2 (kubeadm version 1.16.9-beta.0.7+5116ee4b159565)"
time="23:29:04" level=debug msg="generated config:\napiServer:\n  certSANs:\n  - localhost\n  - 172.17.0.5\napiVersion: kubeadm.k8s.io/v1beta2\nclusterName: kinder-regular\ncontrolPlaneEndpoint: 172.17.0.7:6443\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\nkind: ClusterConfiguration\nkubernetesVersion: v1.16.9-beta.0.7+5116ee4b159565\nnetworking:\n  podSubnet: 192.168.0.0/16\n  serviceSubnet: \"\"\nscheduler:\n  extraArgs: null\n---\napiVersion: kubeadm.k8s.io/v1beta2\nbootstrapTokens:\n- token: abcdef.0123456789abcdef\nkind: InitConfiguration\nlocalAPIEndpoint:\n  advertiseAddress: 172.17.0.5\n  bindPort: 6443\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.5\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nevictionHard:\n  imagefs.available: 0%\n  nodefs.available: 0%\n  nodefs.inodesFree: 0%\nimageGCHighThresholdPercent: 100\nkind: KubeletConfiguration\n---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
time="23:29:04" level=debug msg="Running: [docker cp /tmp/kinder-regular-control-plane-1-021076239 kinder-regular-control-plane-1:/kind/kubeadm.conf]"

kinder-regular-lb:$ Updating load balancer configuration with 1 control plane backends
time="23:29:05" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-regular-control-plane-1]"
time="23:29:06" level=debug msg="Writing loadbalancer config on kinder-regular-lb..."
time="23:29:06" level=debug msg="Running: [docker cp /tmp/kinder-regular-lb-856286498 kinder-regular-lb:/usr/local/etc/haproxy/haproxy.cfg]"
... skipping 30 lines ...
I0318 23:29:09.176306     181 checks.go:377] validating the presence of executable ebtables
I0318 23:29:09.176346     181 checks.go:377] validating the presence of executable ethtool
I0318 23:29:09.176384     181 checks.go:377] validating the presence of executable socat
I0318 23:29:09.176424     181 checks.go:377] validating the presence of executable tc
I0318 23:29:09.176452     181 checks.go:377] validating the presence of executable touch
I0318 23:29:09.176541     181 checks.go:521] running all checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0318 23:29:09.229983     181 checks.go:407] checking whether the given node name is reachable using net.LookupHost
I0318 23:29:09.230280     181 checks.go:619] validating kubelet version
I0318 23:29:09.601465     181 checks.go:129] validating if the service is enabled and active
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
... skipping 360 lines ...
time="23:32:08" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-regular-lb]"

kinder-regular-control-plane-2:$ Preparing /kind/kubeadm.conf
time="23:32:08" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-regular-control-plane-2]"
time="23:32:09" level=debug msg="Running: [docker exec kinder-regular-control-plane-2 kubeadm version -o=short]"
time="23:32:10" level=debug msg="Preparing kubeadm config v1beta2 (kubeadm version 1.16.9-beta.0.7+5116ee4b159565)"
time="23:32:10" level=debug msg="generated config:\napiVersion: kubeadm.k8s.io/v1beta2\ncontrolPlane:\n  localAPIEndpoint:\n    advertiseAddress: 172.17.0.4\n    bindPort: 6443\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.7:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.4\n"
time="23:32:10" level=debug msg="Running: [docker cp /tmp/kinder-regular-control-plane-2-391511044 kinder-regular-control-plane-2:/kind/kubeadm.conf]"

kinder-regular-control-plane-2:$ kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables
time="23:32:12" level=debug msg="Running: [docker exec kinder-regular-control-plane-2 kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]"
I0318 23:32:13.613832     377 join.go:363] [preflight] found NodeName empty; using OS hostname as NodeName
I0318 23:32:13.613917     377 joinconfiguration.go:75] loading configuration from "/kind/kubeadm.conf"
... skipping 15 lines ...
I0318 23:32:13.681406     377 checks.go:377] validating the presence of executable ebtables
I0318 23:32:13.681469     377 checks.go:377] validating the presence of executable ethtool
I0318 23:32:13.681689     377 checks.go:377] validating the presence of executable socat
I0318 23:32:13.681748     377 checks.go:377] validating the presence of executable tc
I0318 23:32:13.681808     377 checks.go:377] validating the presence of executable touch
I0318 23:32:13.681878     377 checks.go:521] running all checks
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0318 23:32:13.755704     377 checks.go:407] checking whether the given node name is reachable using net.LookupHost
I0318 23:32:13.756000     377 checks.go:619] validating kubelet version
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
... skipping 139 lines ...
I0318 23:32:49.978330     377 local.go:136] Adding etcd member: https://172.17.0.4:2380
I0318 23:32:50.122319     377 local.go:142] Updated etcd member list: [{kinder-regular-control-plane-2 https://172.17.0.4:2380} {kinder-regular-control-plane-1 https://172.17.0.5:2380}]
I0318 23:32:50.123803     377 etcd.go:372] [etcd] attempting to see if all cluster endpoints ([https://172.17.0.5:2379 https://172.17.0.4:2379]) are available 1/8
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
{"level":"warn","ts":"2020-03-18T23:33:06.341Z","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://172.17.0.4:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
I0318 23:33:06.341294     377 etcd.go:377] [etcd] Attempt timed out
I0318 23:33:06.341599     377 etcd.go:369] [etcd] Waiting 5s until next retry
I0318 23:33:11.341794     377 etcd.go:372] [etcd] attempting to see if all cluster endpoints ([https://172.17.0.5:2379 https://172.17.0.4:2379]) are available 2/8
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0318 23:33:11.466453     377 round_trippers.go:443] POST https://172.17.0.7:6443/api/v1/namespaces/kube-system/configmaps 409 Conflict in 45 milliseconds
I0318 23:33:11.472824     377 round_trippers.go:443] GET https://172.17.0.7:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 200 OK in 6 milliseconds
... skipping 119 lines ...
time="23:35:02" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-regular-lb]"

kinder-regular-control-plane-3:$ Preparing /kind/kubeadm.conf
time="23:35:02" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-regular-control-plane-3]"
time="23:35:03" level=debug msg="Running: [docker exec kinder-regular-control-plane-3 kubeadm version -o=short]"
time="23:35:05" level=debug msg="Preparing kubeadm config v1beta2 (kubeadm version 1.16.9-beta.0.7+5116ee4b159565)"
time="23:35:05" level=debug msg="generated config:\napiVersion: kubeadm.k8s.io/v1beta2\ncontrolPlane:\n  localAPIEndpoint:\n    advertiseAddress: 172.17.0.6\n    bindPort: 6443\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.7:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.6\n"
time="23:35:05" level=debug msg="Running: [docker cp /tmp/kinder-regular-control-plane-3-936034878 kinder-regular-control-plane-3:/kind/kubeadm.conf]"

kinder-regular-control-plane-3:$ kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables
time="23:35:07" level=debug msg="Running: [docker exec kinder-regular-control-plane-3 kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]"
I0318 23:35:08.601027     526 join.go:363] [preflight] found NodeName empty; using OS hostname as NodeName
I0318 23:35:08.601102     526 joinconfiguration.go:75] loading configuration from "/kind/kubeadm.conf"
... skipping 15 lines ...
I0318 23:35:08.700204     526 checks.go:377] validating the presence of executable ebtables
I0318 23:35:08.700447     526 checks.go:377] validating the presence of executable ethtool
I0318 23:35:08.700721     526 checks.go:377] validating the presence of executable socat
I0318 23:35:08.701010     526 checks.go:377] validating the presence of executable tc
I0318 23:35:08.701216     526 checks.go:377] validating the presence of executable touch
I0318 23:35:08.701504     526 checks.go:521] running all checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0318 23:35:08.769761     526 checks.go:407] checking whether the given node name is reachable using net.LookupHost
I0318 23:35:08.770257     526 checks.go:619] validating kubelet version
I0318 23:35:09.213158     526 checks.go:129] validating if the service is enabled and active
I0318 23:35:09.287484     526 checks.go:202] validating availability of port 10250
I0318 23:35:09.288413     526 checks.go:433] validating if the connectivity type is via proxy or direct
I0318 23:35:09.288566     526 join.go:433] [preflight] Discovering cluster-info
... skipping 133 lines ...
I0318 23:35:47.363938     526 local.go:136] Adding etcd member: https://172.17.0.6:2380
I0318 23:35:47.828487     526 local.go:142] Updated etcd member list: [{kinder-regular-control-plane-2 https://172.17.0.4:2380} {kinder-regular-control-plane-3 https://172.17.0.6:2380} {kinder-regular-control-plane-1 https://172.17.0.5:2380}]
I0318 23:35:47.829970     526 etcd.go:372] [etcd] attempting to see if all cluster endpoints ([https://172.17.0.4:2379 https://172.17.0.5:2379 https://172.17.0.6:2379]) are available 1/8
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
{"level":"warn","ts":"2020-03-18T23:35:54.040Z","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://172.17.0.6:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
I0318 23:35:54.040507     526 etcd.go:377] [etcd] Attempt timed out
I0318 23:35:54.040518     526 etcd.go:369] [etcd] Waiting 5s until next retry
I0318 23:35:59.048496     526 etcd.go:372] [etcd] attempting to see if all cluster endpoints ([https://172.17.0.4:2379 https://172.17.0.5:2379 https://172.17.0.6:2379]) are available 2/8
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0318 23:35:59.300196     526 round_trippers.go:443] POST https://172.17.0.7:6443/api/v1/namespaces/kube-system/configmaps 409 Conflict in 30 milliseconds
I0318 23:35:59.352656     526 round_trippers.go:443] GET https://172.17.0.7:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 200 OK in 51 milliseconds
... skipping 65 lines ...
time="23:36:27" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-regular-lb]"

kinder-regular-worker-1:$ Preparing /kind/kubeadm.conf
time="23:36:28" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-regular-worker-1]"
time="23:36:28" level=debug msg="Running: [docker exec kinder-regular-worker-1 kubeadm version -o=short]"
time="23:36:30" level=debug msg="Preparing kubeadm config v1beta2 (kubeadm version 1.16.9-beta.0.7+5116ee4b159565)"
time="23:36:30" level=debug msg="generated config:\napiVersion: kubeadm.k8s.io/v1beta2\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.7:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.3\n"
time="23:36:30" level=debug msg="Running: [docker cp /tmp/kinder-regular-worker-1-262743584 kinder-regular-worker-1:/kind/kubeadm.conf]"

kinder-regular-worker-1:$ kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables
time="23:36:32" level=debug msg="Running: [docker exec kinder-regular-worker-1 kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]"
I0318 23:36:33.949851     611 join.go:363] [preflight] found NodeName empty; using OS hostname as NodeName
I0318 23:36:33.949931     611 joinconfiguration.go:75] loading configuration from "/kind/kubeadm.conf"
... skipping 15 lines ...
I0318 23:36:34.053085     611 checks.go:377] validating the presence of executable ebtables
I0318 23:36:34.053117     611 checks.go:377] validating the presence of executable ethtool
I0318 23:36:34.053149     611 checks.go:377] validating the presence of executable socat
I0318 23:36:34.053189     611 checks.go:377] validating the presence of executable tc
I0318 23:36:34.053216     611 checks.go:377] validating the presence of executable touch
I0318 23:36:34.053278     611 checks.go:521] running all checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0318 23:36:34.082509     611 checks.go:407] checking whether the given node name is reachable using net.LookupHost
I0318 23:36:34.082875     611 checks.go:619] validating kubelet version
I0318 23:36:34.547350     611 checks.go:129] validating if the service is enabled and active
I0318 23:36:34.608251     611 checks.go:202] validating availability of port 10250
I0318 23:36:34.608647     611 checks.go:287] validating the existence of file /etc/kubernetes/pki/ca.crt
I0318 23:36:34.608697     611 checks.go:433] validating if the connectivity type is via proxy or direct
... skipping 105 lines ...
time="23:37:32" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-regular-lb]"

kinder-regular-worker-2:$ Preparing /kind/kubeadm.conf
time="23:37:32" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-regular-worker-2]"
time="23:37:33" level=debug msg="Running: [docker exec kinder-regular-worker-2 kubeadm version -o=short]"
time="23:37:35" level=debug msg="Preparing kubeadm config v1beta2 (kubeadm version 1.16.9-beta.0.7+5116ee4b159565)"
time="23:37:35" level=debug msg="generated config:\napiVersion: kubeadm.k8s.io/v1beta2\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.7:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.2\n"
time="23:37:35" level=debug msg="Running: [docker cp /tmp/kinder-regular-worker-2-017245951 kinder-regular-worker-2:/kind/kubeadm.conf]"

kinder-regular-worker-2:$ kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables
time="23:37:37" level=debug msg="Running: [docker exec kinder-regular-worker-2 kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]"
I0318 23:37:38.814946     661 join.go:363] [preflight] found NodeName empty; using OS hostname as NodeName
I0318 23:37:38.815050     661 joinconfiguration.go:75] loading configuration from "/kind/kubeadm.conf"
... skipping 15 lines ...
I0318 23:37:38.893662     661 checks.go:377] validating the presence of executable ebtables
I0318 23:37:38.893844     661 checks.go:377] validating the presence of executable ethtool
I0318 23:37:38.893930     661 checks.go:377] validating the presence of executable socat
I0318 23:37:38.894035     661 checks.go:377] validating the presence of executable tc
I0318 23:37:38.894131     661 checks.go:377] validating the presence of executable touch
I0318 23:37:38.894240     661 checks.go:521] running all checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0318 23:37:38.943812     661 checks.go:407] checking whether the given node name is reachable using net.LookupHost
I0318 23:37:38.957529     661 checks.go:619] validating kubelet version
I0318 23:37:39.304229     661 checks.go:129] validating if the service is enabled and active
I0318 23:37:39.402923     661 checks.go:202] validating availability of port 10250
I0318 23:37:39.405130     661 checks.go:287] validating the existence of file /etc/kubernetes/pki/ca.crt
I0318 23:37:39.405184     661 checks.go:433] validating if the connectivity type is via proxy or direct
... skipping 351 lines ...
[reset] Unmounting mounted directories in "/var/lib/kubelet"
I0318 23:45:32.278663   14624 cleanupnode.go:79] [reset] Removing Kubernetes-managed containers
make[1]: Leaving directory '/home/prow/go/src/k8s.io/kubernetes'
+++ [0318 23:45:38] Building go targets for linux/amd64:
    test/e2e_kubeadm/e2e_kubeadm.test
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
W0318 23:47:27.984319   14624 cleanupnode.go:81] [reset] Failed to remove containers: failed to stop running pod 7988baedb3c49109fa5e4552c58211ac1618a6243b811f2423067231b523551f: output: time="2020-03-18T23:46:48Z" level=fatal msg="stopping the pod sandbox \"7988baedb3c49109fa5e4552c58211ac1618a6243b811f2423067231b523551f\" failed: rpc error: code = Unknown desc = failed to destroy network for sandbox \"7988baedb3c49109fa5e4552c58211ac1618a6243b811f2423067231b523551f\": the server was unable to return a response in the time allotted, but may still be processing the request (get IPAMHandles.crd.projectcalico.org k8s-pod-network.7988baedb3c49109fa5e4552c58211ac1618a6243b811f2423067231b523551f)"
, error: exit status 1
I0318 23:47:27.984459   14624 cleanupnode.go:87] [reset] Removing contents from the config and pki directories
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
I0318 23:47:27.987670   14624 reset.go:209] [reset] Deleting content of /var/lib/etcd
I0318 23:47:28.080238   14624 reset.go:209] [reset] Deleting content of /var/lib/kubelet

... skipping 80 lines ...
I0318 23:47:41.930534    7473 round_trippers.go:443] PUT https://172.17.0.7:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 200 OK in 12 milliseconds
I0318 23:47:41.931759    7473 removeetcdmember.go:54] [reset] Checking for etcd config
I0318 23:47:41.932195    7473 local.go:97] [etcd] creating etcd client that connects to etcd pods
I0318 23:47:41.961556    7473 round_trippers.go:443] GET https://172.17.0.7:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 200 OK in 29 milliseconds
I0318 23:47:41.966419    7473 etcd.go:107] etcd endpoints read from pods: 
[reset] Stopping the kubelet service
W0318 23:47:41.969868    7473 removeetcdmember.go:61] [reset] failed to remove etcd member: error syncing endpoints with etc: etcdclient: no available endpoints
.Please manually remove this etcd member using etcdctl
I0318 23:47:41.969916    7473 cleanupnode.go:57] [reset] Getting init system
[reset] Unmounting mounted directories in "/var/lib/kubelet"
I0318 23:47:42.268583    7473 cleanupnode.go:79] [reset] Removing Kubernetes-managed containers
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
... skipping 87 lines ...
Deleting cluster "kinder-regular" ...
time="23:48:03" level=debug msg="Running: /usr/bin/docker [docker ps -q -a --no-trunc --filter label=io.k8s.sigs.kind.cluster --format {{.Names}}\\t{{.Label \"io.k8s.sigs.kind.cluster\"}} --filter label=io.k8s.sigs.kind.cluster=kinder-regular]"
time="23:48:03" level=debug msg="Running: /usr/bin/docker [docker rm -f -v kinder-regular-lb kinder-regular-control-plane-1 kinder-regular-control-plane-3 kinder-regular-control-plane-2 kinder-regular-worker-1 kinder-regular-worker-2]"
 completed!

Ran 10 of 11 tasks in 0.000 seconds
FAIL! -- 9 tasks Passed | 1 Failed | 1 Skipped

see junit-runner.xml and task logs files for more details

Error: failed executing the workflow
+ EXIT_VALUE=1
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up binfmt_misc ...
================================================================================
... skipping 2 lines ...