Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2019-09-19 23:35
Elapsed: 15m30s
Builder: gke-prow-ssd-pool-ubuntu-9cdf51d2-j63r
pod: 028d4499-db36-11e9-8a06-a2745baf3417
resultstore: https://source.cloud.google.com/results/invocations/40145852-6259-4acb-a7cd-68c03e3e0e04/targets/test
infra-commit: 79a4a73da
repo: k8s.io/kubernetes
repo-commit: 73505056fb757140da6616d61e905222ef4fe939
repos: k8s.io/kubernetes @ master, sigs.k8s.io/kind @ master

No Test Failures!


Error lines from build-log.txt

... skipping 654 lines ...
W0919 23:49:15.839] localAPIEndpoint:
W0919 23:49:15.840]   advertiseAddress: "fc00:db8:1::242:ac11:4"
W0919 23:49:15.840]   bindPort: 6443
W0919 23:49:15.840] nodeRegistration:
W0919 23:49:15.840]   criSocket: "/run/containerd/containerd.sock"
W0919 23:49:15.840]   kubeletExtraArgs:
W0919 23:49:15.840]     fail-swap-on: "false"
W0919 23:49:15.840]     node-ip: "fc00:db8:1::242:ac11:4"
W0919 23:49:15.840] ---
W0919 23:49:15.841] # no-op entry that exists solely so it can be patched
W0919 23:49:15.841] apiVersion: kubeadm.k8s.io/v1beta2
W0919 23:49:15.841] kind: JoinConfiguration
W0919 23:49:15.841] metadata:
... skipping 2 lines ...
W0919 23:49:15.841]   localAPIEndpoint:
W0919 23:49:15.841]     advertiseAddress: "fc00:db8:1::242:ac11:4"
W0919 23:49:15.841]     bindPort: 6443
W0919 23:49:15.842] nodeRegistration:
W0919 23:49:15.842]   criSocket: "/run/containerd/containerd.sock"
W0919 23:49:15.842]   kubeletExtraArgs:
W0919 23:49:15.842]     fail-swap-on: "false"
W0919 23:49:15.842]     node-ip: "fc00:db8:1::242:ac11:4"
W0919 23:49:15.843] discovery:
W0919 23:49:15.843]   bootstrapToken:
W0919 23:49:15.843]     apiServerEndpoint: "[fc00:db8:1::242:ac11:4]:6443"
W0919 23:49:15.843]     token: "abcdef.0123456789abcdef"
W0919 23:49:15.843]     unsafeSkipCAVerification: true
... skipping 48 lines ...
W0919 23:49:15.865] localAPIEndpoint:
W0919 23:49:15.865]   advertiseAddress: fc00:db8:1::242:ac11:4
W0919 23:49:15.865]   bindPort: 6443
W0919 23:49:15.865] nodeRegistration:
W0919 23:49:15.865]   criSocket: /run/containerd/containerd.sock
W0919 23:49:15.865]   kubeletExtraArgs:
W0919 23:49:15.866]     fail-swap-on: "false"
W0919 23:49:15.866]     node-ip: fc00:db8:1::242:ac11:4
W0919 23:49:15.866] ---
W0919 23:49:15.866] apiVersion: kubeadm.k8s.io/v1beta2
W0919 23:49:15.866] controlPlane:
W0919 23:49:15.866]   localAPIEndpoint:
W0919 23:49:15.866]     advertiseAddress: fc00:db8:1::242:ac11:4
... skipping 4 lines ...
W0919 23:49:15.866]     token: abcdef.0123456789abcdef
W0919 23:49:15.867]     unsafeSkipCAVerification: true
W0919 23:49:15.867] kind: JoinConfiguration
W0919 23:49:15.867] nodeRegistration:
W0919 23:49:15.867]   criSocket: /run/containerd/containerd.sock
W0919 23:49:15.867]   kubeletExtraArgs:
W0919 23:49:15.867]     fail-swap-on: "false"
W0919 23:49:15.867]     node-ip: fc00:db8:1::242:ac11:4
W0919 23:49:15.867] ---
W0919 23:49:15.867] address: '::'
W0919 23:49:15.867] apiVersion: kubelet.config.k8s.io/v1beta1
W0919 23:49:15.867] evictionHard:
W0919 23:49:15.868]   imagefs.available: 0%
... skipping 47 lines ...
W0919 23:49:16.122] localAPIEndpoint:
W0919 23:49:16.122]   advertiseAddress: "fc00:db8:1::242:ac11:2"
W0919 23:49:16.122]   bindPort: 6443
W0919 23:49:16.122] nodeRegistration:
W0919 23:49:16.122]   criSocket: "/run/containerd/containerd.sock"
W0919 23:49:16.122]   kubeletExtraArgs:
W0919 23:49:16.123]     fail-swap-on: "false"
W0919 23:49:16.123]     node-ip: "fc00:db8:1::242:ac11:2"
W0919 23:49:16.123] ---
W0919 23:49:16.123] # no-op entry that exists solely so it can be patched
W0919 23:49:16.123] apiVersion: kubeadm.k8s.io/v1beta2
W0919 23:49:16.123] kind: JoinConfiguration
W0919 23:49:16.123] metadata:
W0919 23:49:16.123]   name: config
W0919 23:49:16.123] 
W0919 23:49:16.124] nodeRegistration:
W0919 23:49:16.124]   criSocket: "/run/containerd/containerd.sock"
W0919 23:49:16.127]   kubeletExtraArgs:
W0919 23:49:16.127]     fail-swap-on: "false"
W0919 23:49:16.128]     node-ip: "fc00:db8:1::242:ac11:2"
W0919 23:49:16.128] discovery:
W0919 23:49:16.128]   bootstrapToken:
W0919 23:49:16.128]     apiServerEndpoint: "[fc00:db8:1::242:ac11:4]:6443"
W0919 23:49:16.128]     token: "abcdef.0123456789abcdef"
W0919 23:49:16.128]     unsafeSkipCAVerification: true
... skipping 60 lines ...
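(Aside: the configuration dumps above and below this point differ only in the node address fields, which suggests kind renders one shared kubeadm configuration per node and substitutes each node's IP. A toy Python sketch of that templating pattern, illustrative only — kind's actual implementation is Go, and the names below are invented for the sketch:)

# Toy sketch of the per-node templating pattern visible in these dumps.
# Not kind's code: JOIN_TEMPLATE and render_join_config are invented names.
JOIN_TEMPLATE = """\
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: {node_ip}
discovery:
  bootstrapToken:
    apiServerEndpoint: "[{api_ip}]:6443"
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
"""

def render_join_config(node_ip, api_ip):
    # only the per-node fields vary; everything else is shared across nodes
    return JOIN_TEMPLATE.format(node_ip=node_ip, api_ip=api_ip)

print(render_join_config("fc00:db8:1::242:ac11:3", "fc00:db8:1::242:ac11:4"))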
W0919 23:49:16.136] localAPIEndpoint:
W0919 23:49:16.136]   advertiseAddress: "fc00:db8:1::242:ac11:3"
W0919 23:49:16.136]   bindPort: 6443
W0919 23:49:16.136] nodeRegistration:
W0919 23:49:16.136]   criSocket: "/run/containerd/containerd.sock"
W0919 23:49:16.136]   kubeletExtraArgs:
W0919 23:49:16.136]     fail-swap-on: "false"
W0919 23:49:16.136]     node-ip: "fc00:db8:1::242:ac11:3"
W0919 23:49:16.137] ---
W0919 23:49:16.137] # no-op entry that exists solely so it can be patched
W0919 23:49:16.137] apiVersion: kubeadm.k8s.io/v1beta2
W0919 23:49:16.137] kind: JoinConfiguration
W0919 23:49:16.137] metadata:
W0919 23:49:16.137]   name: config
W0919 23:49:16.137] 
W0919 23:49:16.137] nodeRegistration:
W0919 23:49:16.137]   criSocket: "/run/containerd/containerd.sock"
W0919 23:49:16.137]   kubeletExtraArgs:
W0919 23:49:16.138]     fail-swap-on: "false"
W0919 23:49:16.138]     node-ip: "fc00:db8:1::242:ac11:3"
W0919 23:49:16.138] discovery:
W0919 23:49:16.138]   bootstrapToken:
W0919 23:49:16.138]     apiServerEndpoint: "[fc00:db8:1::242:ac11:4]:6443"
W0919 23:49:16.138]     token: "abcdef.0123456789abcdef"
W0919 23:49:16.138]     unsafeSkipCAVerification: true
... skipping 48 lines ...
W0919 23:49:16.161] localAPIEndpoint:
W0919 23:49:16.161]   advertiseAddress: fc00:db8:1::242:ac11:3
W0919 23:49:16.161]   bindPort: 6443
W0919 23:49:16.161] nodeRegistration:
W0919 23:49:16.161]   criSocket: /run/containerd/containerd.sock
W0919 23:49:16.161]   kubeletExtraArgs:
W0919 23:49:16.161]     fail-swap-on: "false"
W0919 23:49:16.162]     node-ip: fc00:db8:1::242:ac11:3
W0919 23:49:16.162] ---
W0919 23:49:16.162] apiVersion: kubeadm.k8s.io/v1beta2
W0919 23:49:16.162] discovery:
W0919 23:49:16.162]   bootstrapToken:
W0919 23:49:16.162]     apiServerEndpoint: '[fc00:db8:1::242:ac11:4]:6443'
W0919 23:49:16.162]     token: abcdef.0123456789abcdef
W0919 23:49:16.162]     unsafeSkipCAVerification: true
W0919 23:49:16.162] kind: JoinConfiguration
W0919 23:49:16.162] nodeRegistration:
W0919 23:49:16.163]   criSocket: /run/containerd/containerd.sock
W0919 23:49:16.163]   kubeletExtraArgs:
W0919 23:49:16.163]     fail-swap-on: "false"
W0919 23:49:16.163]     node-ip: fc00:db8:1::242:ac11:3
W0919 23:49:16.163] ---
W0919 23:49:16.163] address: '::'
W0919 23:49:16.163] apiVersion: kubelet.config.k8s.io/v1beta1
W0919 23:49:16.163] evictionHard:
W0919 23:49:16.163]   imagefs.available: 0%
... skipping 35 lines ...
W0919 23:49:16.167] localAPIEndpoint:
W0919 23:49:16.168]   advertiseAddress: fc00:db8:1::242:ac11:2
W0919 23:49:16.168]   bindPort: 6443
W0919 23:49:16.168] nodeRegistration:
W0919 23:49:16.168]   criSocket: /run/containerd/containerd.sock
W0919 23:49:16.168]   kubeletExtraArgs:
W0919 23:49:16.168]     fail-swap-on: "false"
W0919 23:49:16.168]     node-ip: fc00:db8:1::242:ac11:2
W0919 23:49:16.168] ---
W0919 23:49:16.168] apiVersion: kubeadm.k8s.io/v1beta2
W0919 23:49:16.169] discovery:
W0919 23:49:16.169]   bootstrapToken:
W0919 23:49:16.169]     apiServerEndpoint: '[fc00:db8:1::242:ac11:4]:6443'
W0919 23:49:16.169]     token: abcdef.0123456789abcdef
W0919 23:49:16.169]     unsafeSkipCAVerification: true
W0919 23:49:16.169] kind: JoinConfiguration
W0919 23:49:16.169] nodeRegistration:
W0919 23:49:16.169]   criSocket: /run/containerd/containerd.sock
W0919 23:49:16.169]   kubeletExtraArgs:
W0919 23:49:16.169]     fail-swap-on: "false"
W0919 23:49:16.169]     node-ip: fc00:db8:1::242:ac11:2
W0919 23:49:16.169] ---
W0919 23:49:16.170] address: '::'
W0919 23:49:16.170] apiVersion: kubelet.config.k8s.io/v1beta1
W0919 23:49:16.170] evictionHard:
W0919 23:49:16.170]   imagefs.available: 0%
... skipping 43 lines ...
W0919 23:49:43.816] I0919 23:49:19.174148     124 checks.go:377] validating the presence of executable ebtables
W0919 23:49:43.816] I0919 23:49:19.174301     124 checks.go:377] validating the presence of executable ethtool
W0919 23:49:43.816] I0919 23:49:19.174362     124 checks.go:377] validating the presence of executable socat
W0919 23:49:43.816] I0919 23:49:19.174512     124 checks.go:377] validating the presence of executable tc
W0919 23:49:43.817] I0919 23:49:19.174573     124 checks.go:377] validating the presence of executable touch
W0919 23:49:43.817] I0919 23:49:19.174694     124 checks.go:521] running all checks
W0919 23:49:43.817] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1034-gke\n", err: exit status 1
W0919 23:49:43.817] I0919 23:49:19.198795     124 checks.go:407] checking whether the given node name is reachable using net.LookupHost
W0919 23:49:43.818] [preflight] The system verification failed. Printing the output from the verification:
W0919 23:49:43.818] KERNEL_VERSION: 4.15.0-1034-gke
W0919 23:49:43.818] OS: Linux
W0919 23:49:43.818] CGROUPS_CPU: enabled
W0919 23:49:43.818] CGROUPS_CPUACCT: enabled
W0919 23:49:43.818] CGROUPS_CPUSET: enabled
W0919 23:49:43.819] CGROUPS_DEVICES: enabled
... skipping 79 lines ...
W0919 23:49:43.833] I0919 23:49:30.039613     124 round_trippers.go:443] GET https://[fc00:db8:1::242:ac11:4]:6443/healthz?timeout=32s  in 0 milliseconds
W0919 23:49:43.833] I0919 23:49:30.539586     124 round_trippers.go:443] GET https://[fc00:db8:1::242:ac11:4]:6443/healthz?timeout=32s  in 0 milliseconds
W0919 23:49:43.834] I0919 23:49:31.039670     124 round_trippers.go:443] GET https://[fc00:db8:1::242:ac11:4]:6443/healthz?timeout=32s  in 0 milliseconds
W0919 23:49:43.834] I0919 23:49:31.539638     124 round_trippers.go:443] GET https://[fc00:db8:1::242:ac11:4]:6443/healthz?timeout=32s  in 0 milliseconds
W0919 23:49:43.834] I0919 23:49:32.039645     124 round_trippers.go:443] GET https://[fc00:db8:1::242:ac11:4]:6443/healthz?timeout=32s  in 0 milliseconds
W0919 23:49:43.834] I0919 23:49:32.540551     124 round_trippers.go:443] GET https://[fc00:db8:1::242:ac11:4]:6443/healthz?timeout=32s  in 0 milliseconds
W0919 23:49:43.834] I0919 23:49:38.484009     124 round_trippers.go:443] GET https://[fc00:db8:1::242:ac11:4]:6443/healthz?timeout=32s 500 Internal Server Error in 5444 milliseconds
W0919 23:49:43.834] I0919 23:49:38.554182     124 round_trippers.go:443] GET https://[fc00:db8:1::242:ac11:4]:6443/healthz?timeout=32s 500 Internal Server Error in 15 milliseconds
W0919 23:49:43.835] I0919 23:49:39.041388     124 round_trippers.go:443] GET https://[fc00:db8:1::242:ac11:4]:6443/healthz?timeout=32s 500 Internal Server Error in 2 milliseconds
W0919 23:49:43.835] I0919 23:49:39.543945     124 round_trippers.go:443] GET https://[fc00:db8:1::242:ac11:4]:6443/healthz?timeout=32s 500 Internal Server Error in 4 milliseconds
W0919 23:49:43.835] I0919 23:49:40.041278     124 round_trippers.go:443] GET https://[fc00:db8:1::242:ac11:4]:6443/healthz?timeout=32s 500 Internal Server Error in 2 milliseconds
W0919 23:49:43.835] I0919 23:49:40.541579     124 round_trippers.go:443] GET https://[fc00:db8:1::242:ac11:4]:6443/healthz?timeout=32s 500 Internal Server Error in 2 milliseconds
W0919 23:49:43.835] I0919 23:49:41.046837     124 round_trippers.go:443] GET https://[fc00:db8:1::242:ac11:4]:6443/healthz?timeout=32s 500 Internal Server Error in 2 milliseconds
W0919 23:49:43.836] I0919 23:49:41.541573     124 round_trippers.go:443] GET https://[fc00:db8:1::242:ac11:4]:6443/healthz?timeout=32s 500 Internal Server Error in 2 milliseconds
W0919 23:49:43.836] I0919 23:49:42.041796     124 round_trippers.go:443] GET https://[fc00:db8:1::242:ac11:4]:6443/healthz?timeout=32s 200 OK in 2 milliseconds
W0919 23:49:43.836] I0919 23:49:42.041941     124 uploadconfig.go:108] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
W0919 23:49:43.836] [apiclient] All control plane components are healthy after 17.016434 seconds
W0919 23:49:43.837] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
W0919 23:49:43.837] I0919 23:49:42.054590     124 round_trippers.go:443] POST https://[fc00:db8:1::242:ac11:4]:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 11 milliseconds
W0919 23:49:43.837] I0919 23:49:42.059018     124 round_trippers.go:443] POST https://[fc00:db8:1::242:ac11:4]:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 201 Created in 3 milliseconds
... skipping 88 lines ...
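(Aside: the /healthz polling above is kubeadm waiting for the control plane to come up: roughly one GET every 500ms, tolerating connection failures and 500s until the first 200 OK. A minimal Python sketch of that wait loop, assuming nothing beyond what the log shows — wait_for_healthz is an invented name, and certificate verification is skipped only because this sketch has no cluster CA to pin:)

import ssl
import time
import urllib.request

def wait_for_healthz(endpoint, timeout=300):
    # invented helper: poll https://<endpoint>/healthz until it returns 200 OK
    ctx = ssl._create_unverified_context()  # no cluster CA to pin in this sketch
    url = "https://%s/healthz?timeout=32s" % endpoint
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            resp = urllib.request.urlopen(url, context=ctx)
            if resp.getcode() == 200:
                return True
        except Exception:
            pass  # connection refused or HTTP 500 while components start
        time.sleep(0.5)
    return False

# e.g. wait_for_healthz("[fc00:db8:1::242:ac11:4]:6443")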
W0919 23:49:49.369] DEBUG: exec/local.go:88] Running: "docker exec --privileged kind-worker2 kubeadm join --config /kind/kubeadm.conf --ignore-preflight-errors=all --v=6"
W0919 23:49:49.369] DEBUG: exec/local.go:88] Running: "docker exec --privileged kind-worker kubeadm join --config /kind/kubeadm.conf --ignore-preflight-errors=all --v=6"
W0919 23:49:50.164] DEBUG: kubeadmjoin/join.go:198] I0919 23:49:50.067218     388 join.go:368] [preflight] found NodeName empty; using OS hostname as NodeName
W0919 23:49:50.164] I0919 23:49:50.067273     388 joinconfiguration.go:75] loading configuration from "/kind/kubeadm.conf"
W0919 23:49:50.164] I0919 23:49:50.073209     388 prefligh
W0919 23:49:50.165]  ✗ Joining worker nodes 🚜
W0919 23:49:50.165] ERROR: failed to create cluster: failed to join node with kubeadm: command "docker exec --privileged kind-worker kubeadm join --config /kind/kubeadm.conf --ignore-preflight-errors=all --v=6" failed with error: signal: broken pipe
W0919 23:49:50.165] 
W0919 23:49:50.165] Output:
W0919 23:49:50.165] I0919 23:49:50.067218     388 join.go:368] [preflight] found NodeName empty; using OS hostname as NodeName
W0919 23:49:50.166] I0919 23:49:50.067273     388 joinconfiguration.go:75] loading configuration from "/kind/kubeadm.conf"
W0919 23:49:50.166] [preflight] Running pre-flight checks
W0919 23:49:50.166] 
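(Aside: the failing step is the one shown verbatim in the DEBUG lines above: kind drives each worker join by running kubeadm inside the node container via docker exec. "signal: broken pipe" indicates the kubeadm process was killed by SIGPIPE — writing to an output pipe whose reader had gone away — which is consistent with its last log line being cut off mid-word. A minimal Python approximation of the invocation, where join_node is an invented name and kind itself does this from Go in exec/local.go:)

import subprocess

def join_node(node, config="/kind/kubeadm.conf"):
    # mirrors the "docker exec --privileged <node> kubeadm join ..." commands
    # in the DEBUG lines above; raises CalledProcessError on non-zero exit
    cmd = [
        "docker", "exec", "--privileged", node,
        "kubeadm", "join",
        "--config", config,
        "--ignore-preflight-errors=all",
        "--v=6",
    ]
    return subprocess.check_output(cmd, stderr=subprocess.STDOUT)

# e.g. join_node("kind-worker")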
... skipping 30 lines ...
W0919 23:50:20.334]     check(*cmd)
W0919 23:50:20.334]   File "/workspace/./test-infra/jenkins/../scenarios/execute.py", line 30, in check
W0919 23:50:20.334]     subprocess.check_call(cmd)
W0919 23:50:20.335]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0919 23:50:20.335]     raise CalledProcessError(retcode, cmd)
W0919 23:50:20.335] subprocess.CalledProcessError: Command '('./../../sigs.k8s.io/kind/hack/ci/e2e.sh',)' returned non-zero exit status 1
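(Aside: the traceback shows how the scenario runner propagates the failure: its check() helper runs the e2e entrypoint with subprocess.check_call, which raises CalledProcessError on any non-zero exit. A minimal sketch consistent with that traceback — the real helper lives in test-infra's scenarios/execute.py:)

import subprocess

def check(*cmd):
    # run the command and raise CalledProcessError on non-zero exit,
    # matching the check_call frame in the traceback above
    print("Run:", " ".join(cmd))
    subprocess.check_call(cmd)

# e.g. check("./../../sigs.k8s.io/kind/hack/ci/e2e.sh")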
E0919 23:50:20.357] Command failed
I0919 23:50:20.358] process 550 exited with code 1 after 13.6m
E0919 23:50:20.358] FAIL: ci-kubernetes-kind-conformance-parallel-ipv6
I0919 23:50:20.359] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0919 23:50:22.268] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0919 23:50:22.417] process 13600 exited with code 0 after 0.0m
I0919 23:50:22.417] Call:  gcloud config get-value account
I0919 23:50:23.221] process 13612 exited with code 0 after 0.0m
I0919 23:50:23.222] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0919 23:50:23.222] Upload result and artifacts...
I0919 23:50:23.222] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-kubernetes-kind-conformance-parallel-ipv6/1174829215934058496
I0919 23:50:23.222] Call:  gsutil ls gs://kubernetes-jenkins/logs/ci-kubernetes-kind-conformance-parallel-ipv6/1174829215934058496/artifacts
W0919 23:50:26.120] CommandException: One or more URLs matched no objects.
E0919 23:50:26.466] Command failed
I0919 23:50:26.466] process 13624 exited with code 1 after 0.1m
W0919 23:50:26.467] Remote dir gs://kubernetes-jenkins/logs/ci-kubernetes-kind-conformance-parallel-ipv6/1174829215934058496/artifacts not exist yet
I0919 23:50:26.467] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-kubernetes-kind-conformance-parallel-ipv6/1174829215934058496/artifacts
I0919 23:50:30.201] process 13766 exited with code 0 after 0.1m
W0919 23:50:30.201] metadata path /workspace/_artifacts/metadata.json does not exist
W0919 23:50:30.201] metadata not found or invalid, init with empty metadata
... skipping 15 lines ...