Result: FAILURE
Tests: 0 failed / 269 succeeded
Started: 2020-02-05 18:18
Elapsed: 40m20s
Revision: release-1.16
resultstore: https://source.cloud.google.com/results/invocations/e48b7838-0f5e-47e2-9ece-19d7606e6444/targets/test

No Test Failures!


Passed tests: 269

Skipped tests: 4499

Error lines from build-log.txt

... skipping 284 lines ...
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /kind/systemd/kubelet.service.
Created symlink /etc/systemd/system/kubelet.service → /kind/systemd/kubelet.service.
time="18:21:20" level=debug msg="Running: [docker exec kind-build-64f0df93-88c3-41c1-baf4-7373bc7b3a15 mkdir -p /etc/systemd/system/kubelet.service.d]"
time="18:21:21" level=info msg="Adding /etc/systemd/system/kubelet.service.d/10-kubeadm.conf to the image"
time="18:21:21" level=debug msg="Running: [docker exec kind-build-64f0df93-88c3-41c1-baf4-7373bc7b3a15 cp /alter/bits/systemd/10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf]"
time="18:21:22" level=debug msg="Running: [docker exec kind-build-64f0df93-88c3-41c1-baf4-7373bc7b3a15 chown -R root:root /etc/systemd/system/kubelet.service.d/10-kubeadm.conf]"
time="18:21:23" level=debug msg="Running: [docker exec kind-build-64f0df93-88c3-41c1-baf4-7373bc7b3a15 /bin/sh -c echo \"KUBELET_EXTRA_ARGS=--fail-swap-on=false\" >> /etc/default/kubelet]"
time="18:21:24" level=debug msg="Running: [docker exec kind-build-64f0df93-88c3-41c1-baf4-7373bc7b3a15 cp /alter/bits/kubeadm /usr/bin/kubeadm]"
time="18:21:27" level=debug msg="Running: [docker exec kind-build-64f0df93-88c3-41c1-baf4-7373bc7b3a15 chown -R root:root /usr/bin/kubeadm]"
time="18:21:28" level=debug msg="Running: [docker exec kind-build-64f0df93-88c3-41c1-baf4-7373bc7b3a15 /bin/sh -c which docker || true]"
time="18:21:29" level=info msg="Detected docker as container runtime"
time="18:21:29" level=info msg="Pre loading images ..."
time="18:21:29" level=debug msg="Running: [docker exec kind-build-64f0df93-88c3-41c1-baf4-7373bc7b3a15 mkdir -p /kind/images]"
... skipping 183 lines ...
kinder-xony-control-plane-1:$ Preparing /kind/kubeadm.conf
time="18:23:42" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-xony-control-plane-1]"
time="18:23:43" level=debug msg="Running: [docker exec kinder-xony-control-plane-1 kubeadm version -o=short]"
time="18:23:44" level=debug msg="Preparing kubeadm config v1beta2 (kubeadm version 1.17.3-beta.0.32+70cd0f5c4b7e14)"
time="18:23:44" level=debug msg="Preparing dockerPatch for kubeadm config v1beta2 (kubeadm version 1.17.3-beta.0.32+70cd0f5c4b7e14)"
time="18:23:44" level=debug msg="Preparing automaticCopyCertsPatches for kubeadm config v1beta2 (kubeadm version 1.17.3-beta.0.32+70cd0f5c4b7e14)"
time="18:23:44" level=debug msg="generated config:\napiServer:\n  certSANs:\n  - localhost\n  - 172.17.0.2\napiVersion: kubeadm.k8s.io/v1beta2\nclusterName: kinder-xony\ncontrolPlaneEndpoint: 172.17.0.7:6443\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\nkind: ClusterConfiguration\nkubernetesVersion: v1.16.7-beta.0.23+0a70c2fa6d4642\nnetworking:\n  podSubnet: 192.168.0.0/16\n  serviceSubnet: \"\"\nscheduler:\n  extraArgs: null\n---\napiVersion: kubeadm.k8s.io/v1beta2\nbootstrapTokens:\n- token: abcdef.0123456789abcdef\ncertificateKey: \"0123456789012345678901234567890123456789012345678901234567890123\"\nkind: InitConfiguration\nlocalAPIEndpoint:\n  advertiseAddress: 172.17.0.2\n  bindPort: 6443\nnodeRegistration:\n  criSocket: /var/run/dockershim.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.2\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nevictionHard:\n  imagefs.available: 0%\n  nodefs.available: 0%\n  nodefs.inodesFree: 0%\nimageGCHighThresholdPercent: 100\nkind: KubeletConfiguration\n---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
time="18:23:44" level=debug msg="Running: [docker cp /tmp/kinder-xony-control-plane-1-664493943 kinder-xony-control-plane-1:/kind/kubeadm.conf]"

kinder-xony-lb:$ Updating load balancer configuration with 1 control plane backends
time="18:23:45" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-xony-control-plane-1]"
time="18:23:45" level=debug msg="Writing loadbalancer config on kinder-xony-lb..."
time="18:23:45" level=debug msg="Running: [docker cp /tmp/kinder-xony-lb-046506602 kinder-xony-lb:/usr/local/etc/haproxy/haproxy.cfg]"
... skipping 33 lines ...
I0205 18:23:50.365364     589 checks.go:376] validating the presence of executable ebtables
I0205 18:23:50.365410     589 checks.go:376] validating the presence of executable ethtool
I0205 18:23:50.365464     589 checks.go:376] validating the presence of executable socat
I0205 18:23:50.365498     589 checks.go:376] validating the presence of executable tc
I0205 18:23:50.365554     589 checks.go:376] validating the presence of executable touch
I0205 18:23:50.365607     589 checks.go:520] running all checks
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0205 18:23:50.749477     589 checks.go:406] checking whether the given node name is reachable using net.LookupHost
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
DOCKER_VERSION: 18.09.4
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
... skipping 336 lines ...
kinder-xony-control-plane-2:$ Preparing /kind/kubeadm.conf
time="18:25:36" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-xony-control-plane-2]"
time="18:25:36" level=debug msg="Running: [docker exec kinder-xony-control-plane-2 kubeadm version -o=short]"
time="18:25:37" level=debug msg="Preparing kubeadm config v1beta2 (kubeadm version 1.17.3-beta.0.32+70cd0f5c4b7e14)"
time="18:25:37" level=debug msg="Preparing dockerPatch for kubeadm config v1beta2 (kubeadm version 1.17.3-beta.0.32+70cd0f5c4b7e14)"
time="18:25:37" level=debug msg="Preparing automaticCopyCertsPatches for kubeadm config v1beta2 (kubeadm version 1.17.3-beta.0.32+70cd0f5c4b7e14)"
time="18:25:37" level=debug msg="generated config:\napiVersion: kubeadm.k8s.io/v1beta2\ncontrolPlane:\n  certificateKey: \"0123456789012345678901234567890123456789012345678901234567890123\"\n  localAPIEndpoint:\n    advertiseAddress: 172.17.0.3\n    bindPort: 6443\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.7:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /var/run/dockershim.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.3\n"
time="18:25:37" level=debug msg="Running: [docker cp /tmp/kinder-xony-control-plane-2-528466841 kinder-xony-control-plane-2:/kind/kubeadm.conf]"
time="18:25:39" level=debug msg="Running: [docker exec kinder-xony-control-plane-2 kubeadm version -o=short]"

kinder-xony-control-plane-2:$ kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables
time="18:25:40" level=debug msg="Running: [docker exec kinder-xony-control-plane-2 kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]"
W0205 18:25:40.985621     711 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
... skipping 18 lines ...
I0205 18:25:41.738069     711 checks.go:376] validating the presence of executable ebtables
I0205 18:25:41.738118     711 checks.go:376] validating the presence of executable ethtool
I0205 18:25:41.738177     711 checks.go:376] validating the presence of executable socat
I0205 18:25:41.738218     711 checks.go:376] validating the presence of executable tc
I0205 18:25:41.738257     711 checks.go:376] validating the presence of executable touch
I0205 18:25:41.738300     711 checks.go:520] running all checks
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0205 18:25:42.087766     711 checks.go:406] checking whether the given node name is reachable using net.LookupHost
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
DOCKER_VERSION: 18.09.4
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
... skipping 120 lines ...
I0205 18:26:07.602291     711 local.go:136] Adding etcd member: https://172.17.0.3:2380
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
I0205 18:26:07.676011     711 local.go:142] Updated etcd member list: [{kinder-xony-control-plane-2 https://172.17.0.3:2380} {kinder-xony-control-plane-1 https://172.17.0.2:2380}]
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
I0205 18:26:07.677797     711 etcd.go:408] [etcd] attempting to see if all cluster endpoints ([https://172.17.0.2:2379 https://172.17.0.3:2379]) are available 1/8
{"level":"warn","ts":"2020-02-05T18:26:23.171Z","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://172.17.0.3:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
I0205 18:26:23.171595     711 etcd.go:388] Failed to get etcd status for https://172.17.0.3:2379: context deadline exceeded
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0205 18:26:25.965277     711 round_trippers.go:443] POST https://172.17.0.7:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 409 Conflict in 2724 milliseconds
I0205 18:26:25.990683     711 round_trippers.go:443] GET https://172.17.0.7:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s 200 OK in 24 milliseconds
I0205 18:26:26.029177     711 round_trippers.go:443] PUT https://172.17.0.7:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s 200 OK in 37 milliseconds
I0205 18:26:26.046696     711 round_trippers.go:443] POST https://172.17.0.7:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 409 Conflict in 16 milliseconds
I0205 18:26:26.053466     711 round_trippers.go:443] PUT https://172.17.0.7:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/kubeadm:nodes-kubeadm-config?timeout=10s 200 OK in 6 milliseconds
... skipping 67 lines ...
kinder-xony-control-plane-3:$ Preparing /kind/kubeadm.conf
time="18:26:46" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-xony-control-plane-3]"
time="18:26:47" level=debug msg="Running: [docker exec kinder-xony-control-plane-3 kubeadm version -o=short]"
time="18:26:48" level=debug msg="Preparing kubeadm config v1beta2 (kubeadm version 1.17.3-beta.0.32+70cd0f5c4b7e14)"
time="18:26:48" level=debug msg="Preparing dockerPatch for kubeadm config v1beta2 (kubeadm version 1.17.3-beta.0.32+70cd0f5c4b7e14)"
time="18:26:48" level=debug msg="Preparing automaticCopyCertsPatches for kubeadm config v1beta2 (kubeadm version 1.17.3-beta.0.32+70cd0f5c4b7e14)"
time="18:26:48" level=debug msg="generated config:\napiVersion: kubeadm.k8s.io/v1beta2\ncontrolPlane:\n  certificateKey: \"0123456789012345678901234567890123456789012345678901234567890123\"\n  localAPIEndpoint:\n    advertiseAddress: 172.17.0.6\n    bindPort: 6443\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.7:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /var/run/dockershim.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.6\n"
time="18:26:48" level=debug msg="Running: [docker cp /tmp/kinder-xony-control-plane-3-939330355 kinder-xony-control-plane-3:/kind/kubeadm.conf]"
time="18:26:49" level=debug msg="Running: [docker exec kinder-xony-control-plane-3 kubeadm version -o=short]"

kinder-xony-control-plane-3:$ kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables
time="18:26:50" level=debug msg="Running: [docker exec kinder-xony-control-plane-3 kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]"
W0205 18:26:51.658873     794 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
... skipping 18 lines ...
I0205 18:26:52.502818     794 checks.go:376] validating the presence of executable ebtables
I0205 18:26:52.502862     794 checks.go:376] validating the presence of executable ethtool
I0205 18:26:52.502918     794 checks.go:376] validating the presence of executable socat
I0205 18:26:52.502957     794 checks.go:376] validating the presence of executable tc
I0205 18:26:52.502994     794 checks.go:376] validating the presence of executable touch
I0205 18:26:52.503038     794 checks.go:520] running all checks
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0205 18:26:52.888110     794 checks.go:406] checking whether the given node name is reachable using net.LookupHost
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
DOCKER_VERSION: 18.09.4
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
... skipping 120 lines ...
I0205 18:27:18.569921     794 local.go:136] Adding etcd member: https://172.17.0.6:2380
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
I0205 18:27:18.690523     794 local.go:142] Updated etcd member list: [{kinder-xony-control-plane-2 https://172.17.0.3:2380} {kinder-xony-control-plane-1 https://172.17.0.2:2380} {kinder-xony-control-plane-3 https://172.17.0.6:2380}]
I0205 18:27:18.693858     794 etcd.go:408] [etcd] attempting to see if all cluster endpoints ([https://172.17.0.3:2379 https://172.17.0.2:2379 https://172.17.0.6:2379]) are available 1/8
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
{"level":"warn","ts":"2020-02-05T18:27:24.155Z","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://172.17.0.6:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
I0205 18:27:24.155815     794 etcd.go:388] Failed to get etcd status for https://172.17.0.6:2379: context deadline exceeded
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0205 18:27:24.241584     794 round_trippers.go:443] POST https://172.17.0.7:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 409 Conflict in 13 milliseconds
I0205 18:27:24.247846     794 round_trippers.go:443] GET https://172.17.0.7:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s 200 OK in 5 milliseconds
I0205 18:27:24.258985     794 round_trippers.go:443] PUT https://172.17.0.7:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s 200 OK in 9 milliseconds
I0205 18:27:24.271072     794 round_trippers.go:443] POST https://172.17.0.7:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 409 Conflict in 11 milliseconds
I0205 18:27:24.279177     794 round_trippers.go:443] PUT https://172.17.0.7:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/kubeadm:nodes-kubeadm-config?timeout=10s 200 OK in 7 milliseconds
... skipping 70 lines ...

kinder-xony-worker-1:$ Preparing /kind/kubeadm.conf
time="18:27:55" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-xony-worker-1]"
time="18:27:55" level=debug msg="Running: [docker exec kinder-xony-worker-1 kubeadm version -o=short]"
time="18:27:56" level=debug msg="Preparing kubeadm config v1beta2 (kubeadm version 1.17.3-beta.0.32+70cd0f5c4b7e14)"
time="18:27:56" level=debug msg="Preparing dockerPatch for kubeadm config v1beta2 (kubeadm version 1.17.3-beta.0.32+70cd0f5c4b7e14)"
time="18:27:56" level=debug msg="generated config:\napiVersion: kubeadm.k8s.io/v1beta2\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.7:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /var/run/dockershim.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.4\n"
time="18:27:56" level=debug msg="Running: [docker cp /tmp/kinder-xony-worker-1-300155357 kinder-xony-worker-1:/kind/kubeadm.conf]"

kinder-xony-worker-1:$ kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables
time="18:27:58" level=debug msg="Running: [docker exec kinder-xony-worker-1 kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]"
[preflight] Running pre-flight checks
W0205 18:27:59.013344     870 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
... skipping 17 lines ...
I0205 18:27:59.983067     870 checks.go:376] validating the presence of executable ebtables
I0205 18:27:59.983127     870 checks.go:376] validating the presence of executable ethtool
I0205 18:27:59.983171     870 checks.go:376] validating the presence of executable socat
I0205 18:27:59.983210     870 checks.go:376] validating the presence of executable tc
I0205 18:27:59.983253     870 checks.go:376] validating the presence of executable touch
I0205 18:27:59.983300     870 checks.go:520] running all checks
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0205 18:28:00.403810     870 checks.go:406] checking whether the given node name is reachable using net.LookupHost
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
DOCKER_VERSION: 18.09.4
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
... skipping 93 lines ...

kinder-xony-worker-2:$ Preparing /kind/kubeadm.conf
time="18:28:41" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-xony-worker-2]"
time="18:28:41" level=debug msg="Running: [docker exec kinder-xony-worker-2 kubeadm version -o=short]"
time="18:28:42" level=debug msg="Preparing kubeadm config v1beta2 (kubeadm version 1.17.3-beta.0.32+70cd0f5c4b7e14)"
time="18:28:42" level=debug msg="Preparing dockerPatch for kubeadm config v1beta2 (kubeadm version 1.17.3-beta.0.32+70cd0f5c4b7e14)"
time="18:28:42" level=debug msg="generated config:\napiVersion: kubeadm.k8s.io/v1beta2\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.7:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /var/run/dockershim.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.5\n"
time="18:28:42" level=debug msg="Running: [docker cp /tmp/kinder-xony-worker-2-117874072 kinder-xony-worker-2:/kind/kubeadm.conf]"

kinder-xony-worker-2:$ kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables
time="18:28:44" level=debug msg="Running: [docker exec kinder-xony-worker-2 kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]"
W0205 18:28:44.990837     885 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
I0205 18:28:44.990924     885 join.go:371] [preflight] found NodeName empty; using OS hostname as NodeName
... skipping 17 lines ...
I0205 18:28:45.839025     885 checks.go:376] validating the presence of executable ebtables
I0205 18:28:45.839062     885 checks.go:376] validating the presence of executable ethtool
I0205 18:28:45.839098     885 checks.go:376] validating the presence of executable socat
I0205 18:28:45.839137     885 checks.go:376] validating the presence of executable tc
I0205 18:28:45.839187     885 checks.go:376] validating the presence of executable touch
I0205 18:28:45.839228     885 checks.go:520] running all checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
DOCKER_VERSION: 18.09.4
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0205 18:28:46.269109     885 checks.go:406] checking whether the given node name is reachable using net.LookupHost
I0205 18:28:46.269414     885 checks.go:618] validating kubelet version
I0205 18:28:46.497987     885 checks.go:128] validating if the service is enabled and active
I0205 18:28:46.537497     885 checks.go:201] validating availability of port 10250
I0205 18:28:46.537767     885 checks.go:286] validating the existence of file /etc/kubernetes/pki/ca.crt
I0205 18:28:46.537799     885 checks.go:432] validating if the connectivity type is via proxy or direct
... skipping 753 lines ...
  _output/local/go/src/k8s.io/kubernetes/test/e2e_kubeadm/dns_addon_test.go:133
[AfterEach] [k8s.io] [sig-cluster-lifecycle] [area-kubeadm] DNS addon
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 18:34:09.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
•
Ran 31 of 37 Specs in 0.779 seconds
SUCCESS! -- 31 Passed | 0 Failed | 0 Pending | 6 Skipped
PASS

Ginkgo ran 1 suite in 921.562142ms
Test Suite Passed
[--skip=\[copy-certs\] /home/prow/go/src/k8s.io/kubernetes/_output/bin/e2e_kubeadm.test -- --report-dir=/logs/artifacts --report-prefix=e2e-kubeadm --kubeconfig=/root/.kube/kind-config-kinder-xony]
 completed!
... skipping 269 lines ...
Feb  5 18:38:10.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/kind-config-kinder-xony explain e2e-test-crd-publish-openapi-1403-crds.spec'
Feb  5 18:38:11.636: INFO: stderr: ""
Feb  5 18:38:11.636: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1403-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Feb  5 18:38:11.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/kind-config-kinder-xony explain e2e-test-crd-publish-openapi-1403-crds.spec.bars'
Feb  5 18:38:12.389: INFO: stderr: ""
Feb  5 18:38:12.390: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1403-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Feb  5 18:38:12.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/kind-config-kinder-xony explain e2e-test-crd-publish-openapi-1403-crds.spec.bars2'
Feb  5 18:38:13.025: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 18:38:17.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7829" for this suite.
... skipping 400 lines ...

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  5 18:39:16.405: INFO: File jessie_udp@dns-test-service-3.dns-419.svc.cluster.local from pod  dns-419/dns-test-4b045b37-49e6-4c92-b035-c2c5b880defe contains '' instead of '10.104.226.176'
Feb  5 18:39:16.405: INFO: Lookups using dns-419/dns-test-4b045b37-49e6-4c92-b035-c2c5b880defe failed for: [jessie_udp@dns-test-service-3.dns-419.svc.cluster.local]

Feb  5 18:39:21.693: INFO: DNS probes using dns-test-4b045b37-49e6-4c92-b035-c2c5b880defe succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
... skipping 344 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-5799
STEP: Creating statefulset with conflicting port in namespace statefulset-5799
STEP: Waiting until pod test-pod will start running in namespace statefulset-5799
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-5799
Feb  5 18:39:33.317: INFO: Observed stateful pod in namespace: statefulset-5799, name: ss-0, uid: d3177f66-bfaf-4bda-a7cb-194a6f566bee, status phase: Pending. Waiting for statefulset controller to delete.
Feb  5 18:39:33.606: INFO: Observed stateful pod in namespace: statefulset-5799, name: ss-0, uid: d3177f66-bfaf-4bda-a7cb-194a6f566bee, status phase: Failed. Waiting for statefulset controller to delete.
Feb  5 18:39:33.619: INFO: Observed stateful pod in namespace: statefulset-5799, name: ss-0, uid: d3177f66-bfaf-4bda-a7cb-194a6f566bee, status phase: Failed. Waiting for statefulset controller to delete.
Feb  5 18:39:33.641: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-5799
STEP: Removing pod with conflicting port in namespace statefulset-5799
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-5799 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
Feb  5 18:39:46.032: INFO: Deleting all statefulset in ns statefulset-5799
... skipping 281 lines ...
Feb  5 18:40:29.799: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 18:40:42.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3333" for this suite.
Feb  5 18:40:50.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 364 lines ...
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb  5 18:41:24.210: INFO: Successfully updated pod "pod-update-activedeadlineseconds-13e4689b-6edc-442d-96fb-3808292c6161"
Feb  5 18:41:24.210: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-13e4689b-6edc-442d-96fb-3808292c6161" in namespace "pods-709" to be "terminated due to deadline exceeded"
Feb  5 18:41:24.215: INFO: Pod "pod-update-activedeadlineseconds-13e4689b-6edc-442d-96fb-3808292c6161": Phase="Running", Reason="", readiness=true. Elapsed: 5.27783ms
Feb  5 18:41:26.221: INFO: Pod "pod-update-activedeadlineseconds-13e4689b-6edc-442d-96fb-3808292c6161": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.010598222s
Feb  5 18:41:26.221: INFO: Pod "pod-update-activedeadlineseconds-13e4689b-6edc-442d-96fb-3808292c6161" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 18:41:26.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-709" for this suite.
Feb  5 18:41:34.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 460 lines ...
Feb  5 18:42:05.727: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716524923, loc:(*time.Location)(0x78686e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716524923, loc:(*time.Location)(0x78686e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716524923, loc:(*time.Location)(0x78686e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716524923, loc:(*time.Location)(0x78686e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 18:42:07.736: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716524923, loc:(*time.Location)(0x78686e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716524923, loc:(*time.Location)(0x78686e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716524923, loc:(*time.Location)(0x78686e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716524923, loc:(*time.Location)(0x78686e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 18:42:09.735: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716524923, loc:(*time.Location)(0x78686e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716524923, loc:(*time.Location)(0x78686e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716524923, loc:(*time.Location)(0x78686e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716524923, loc:(*time.Location)(0x78686e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  5 18:42:12.760: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Registering a webhook that the server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap that should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 18:42:13.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5683" for this suite.
... skipping 6 lines ...
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103


• [SLOW TEST:24.965 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
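The fail-closed webhook test above hinges on `failurePolicy: Fail`: when the API server cannot reach the webhook endpoint, matching requests are rejected instead of admitted. A minimal illustrative manifest (hypothetical names; this is not the test's actual configuration, and the service/path values are assumptions) looks like:

```yaml
# Hypothetical example of a fail-closed validating webhook.
# With failurePolicy: Fail, an unreachable webhook causes matching
# requests (here, ConfigMap creates) to be rejected.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: fail-closed-example
webhooks:
  - name: fail-closed.example.com
    failurePolicy: Fail            # fail closed: unreachable => reject
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["configmaps"]
    clientConfig:
      service:
        namespace: webhook-ns      # assumption; the test uses its own namespace
        name: e2e-test-webhook
        path: /always-deny         # assumed path for illustration
    admissionReviewVersions: ["v1"]
    sideEffects: None
```

The alternative, `failurePolicy: Ignore`, would fail open and let requests through when the webhook is down.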
S
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
... skipping 2590 lines ...
[BeforeEach] [sig-node] ConfigMap
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 18:46:40.144: INFO: >>> kubeConfig: /root/.kube/kind-config-kinder-xony
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap that has name configmap-test-emptyKey-50f08705-0471-43de-8052-6b69085b4886
[AfterEach] [sig-node] ConfigMap
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 18:46:40.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5704" for this suite.
Feb  5 18:46:46.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 18:46:46.940: INFO: namespace configmap-5704 deletion completed in 6.534672412s


• [SLOW TEST:6.795 seconds]
[sig-node] ConfigMap
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:32
  should fail to create ConfigMap with empty key [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
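The ConfigMap test above exercises API-server validation: data keys must be non-empty and consist only of alphanumerics, `-`, `_`, or `.`. A hypothetical manifest reproducing the rejection (not the test's actual object) would be:

```yaml
# Hypothetical invalid ConfigMap: the empty data key fails validation,
# so the API server rejects the create request.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-empty-key
data:
  "": "value"    # invalid: keys must be non-empty
```

Applying this with kubectl fails at admission time; no object is persisted, which is exactly what the conformance test asserts.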
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
... skipping 503 lines ...
STEP: Creating a kubernetes client
Feb  5 18:46:50.261: INFO: >>> kubeConfig: /root/.kube/kind-config-kinder-xony
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating the pod
Feb  5 18:46:50.443: INFO: PodSpec: initContainers in spec.initContainers
Feb  5 18:47:49.475: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMet
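The init-container test above relies on pod restart semantics: with `restartPolicy: Always`, a failing init container is restarted repeatedly and the app containers never start. A hypothetical pod mirroring that scenario (image and names are illustrative, not the test's actual spec) might be:

```yaml
# Hypothetical pod: the init container always exits non-zero, so with
# restartPolicy: Always it is retried indefinitely and the "app"
# container is never started.
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-example
spec:
  restartPolicy: Always
  initContainers:
    - name: init-fail
      image: busybox
      command: ["/bin/false"]     # always fails
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
```

The log line "init container has failed twice" reflects the test observing those repeated restart attempts.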