Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2019-11-12 17:49
Elapsed: 5m29s
Revision
Builder: gke-prow-ssd-pool-1a225945-d6fn
Refs: master:9bd8f2fc, 692:7ed5fc3f
pod: a2e8c86c-0574-11ea-a7a2-b2e906db168f
infra-commit: 7d1113798
repo: sigs.k8s.io/kind
repo-commit: 4bcc5a42c4a62d92c04d34016cd866b20c4bd797
repos: k8s.io/kubernetes: release-1.15; sigs.k8s.io/kind: master:9bd8f2fc890f85ec14c04ade961ee96b77913ebc, 692:7ed5fc3f990b466c0eb8eadc9a0240d71d7b153d

No Test Failures!


Error lines from build-log.txt

... skipping 417 lines ...
W1112 17:54:22.464] localAPIEndpoint:
W1112 17:54:22.464]   advertiseAddress: 172.17.0.2
W1112 17:54:22.465]   bindPort: 6443
W1112 17:54:22.465] nodeRegistration:
W1112 17:54:22.465]   criSocket: /run/containerd/containerd.sock
W1112 17:54:22.465]   kubeletExtraArgs:
W1112 17:54:22.466]     fail-swap-on: "false"
W1112 17:54:22.466]     node-ip: 172.17.0.2
W1112 17:54:22.466] ---
W1112 17:54:22.466] apiVersion: kubeadm.k8s.io/v1beta2
W1112 17:54:22.467] discovery:
W1112 17:54:22.467]   bootstrapToken:
W1112 17:54:22.467]     apiServerEndpoint: 172.17.0.3:6443
W1112 17:54:22.467]     token: abcdef.0123456789abcdef
W1112 17:54:22.468]     unsafeSkipCAVerification: true
W1112 17:54:22.468] kind: JoinConfiguration
W1112 17:54:22.468] nodeRegistration:
W1112 17:54:22.468]   criSocket: /run/containerd/containerd.sock
W1112 17:54:22.469]   kubeletExtraArgs:
W1112 17:54:22.469]     fail-swap-on: "false"
W1112 17:54:22.469]     node-ip: 172.17.0.2
W1112 17:54:22.469] ---
W1112 17:54:22.470] apiVersion: kubelet.config.k8s.io/v1beta1
W1112 17:54:22.470] evictionHard:
W1112 17:54:22.470]   imagefs.available: 0%
W1112 17:54:22.470]   nodefs.available: 0%
... skipping 31 lines ...
W1112 17:54:22.478] localAPIEndpoint:
W1112 17:54:22.479]   advertiseAddress: 172.17.0.4
W1112 17:54:22.479]   bindPort: 6443
W1112 17:54:22.479] nodeRegistration:
W1112 17:54:22.479]   criSocket: /run/containerd/containerd.sock
W1112 17:54:22.480]   kubeletExtraArgs:
W1112 17:54:22.480]     fail-swap-on: "false"
W1112 17:54:22.480]     node-ip: 172.17.0.4
W1112 17:54:22.481] ---
W1112 17:54:22.481] apiVersion: kubeadm.k8s.io/v1beta2
W1112 17:54:22.481] discovery:
W1112 17:54:22.481]   bootstrapToken:
W1112 17:54:22.482]     apiServerEndpoint: 172.17.0.3:6443
W1112 17:54:22.482]     token: abcdef.0123456789abcdef
W1112 17:54:22.482]     unsafeSkipCAVerification: true
W1112 17:54:22.482] kind: JoinConfiguration
W1112 17:54:22.483] nodeRegistration:
W1112 17:54:22.483]   criSocket: /run/containerd/containerd.sock
W1112 17:54:22.483]   kubeletExtraArgs:
W1112 17:54:22.483]     fail-swap-on: "false"
W1112 17:54:22.483]     node-ip: 172.17.0.4
W1112 17:54:22.484] ---
W1112 17:54:22.484] apiVersion: kubelet.config.k8s.io/v1beta1
W1112 17:54:22.484] evictionHard:
W1112 17:54:22.484]   imagefs.available: 0%
W1112 17:54:22.484]   nodefs.available: 0%
... skipping 31 lines ...
W1112 17:54:22.491] localAPIEndpoint:
W1112 17:54:22.491]   advertiseAddress: 172.17.0.3
W1112 17:54:22.491]   bindPort: 6443
W1112 17:54:22.492] nodeRegistration:
W1112 17:54:22.492]   criSocket: /run/containerd/containerd.sock
W1112 17:54:22.492]   kubeletExtraArgs:
W1112 17:54:22.492]     fail-swap-on: "false"
W1112 17:54:22.492]     node-ip: 172.17.0.3
W1112 17:54:22.493] ---
W1112 17:54:22.493] apiVersion: kubeadm.k8s.io/v1beta2
W1112 17:54:22.493] controlPlane:
W1112 17:54:22.493]   localAPIEndpoint:
W1112 17:54:22.493]     advertiseAddress: 172.17.0.3
... skipping 4 lines ...
W1112 17:54:22.495]     token: abcdef.0123456789abcdef
W1112 17:54:22.495]     unsafeSkipCAVerification: true
W1112 17:54:22.495] kind: JoinConfiguration
W1112 17:54:22.495] nodeRegistration:
W1112 17:54:22.496]   criSocket: /run/containerd/containerd.sock
W1112 17:54:22.496]   kubeletExtraArgs:
W1112 17:54:22.496]     fail-swap-on: "false"
W1112 17:54:22.496]     node-ip: 172.17.0.3
W1112 17:54:22.496] ---
W1112 17:54:22.497] apiVersion: kubelet.config.k8s.io/v1beta1
W1112 17:54:22.497] evictionHard:
W1112 17:54:22.497]   imagefs.available: 0%
W1112 17:54:22.497]   nodefs.available: 0%
... skipping 7 lines ...
W1112 17:54:22.986]  • Starting control-plane 🕹️  ...
W1112 17:54:23.638] DEBUG: kubeadminit/init.go:73] I1112 17:54:23.583672     131 initconfiguration.go:189] loading configuration from "/kind/kubeadm.conf"
W1112 17:54:23.639] [config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta2, Kind=JoinConfiguration
W1112 17:54:23.639] I1112 17:54:23.591263     131 feature_gate.go:216] feature gates: &{map[]}
W1112 17:54:23.639] [serviceSubnet: Invalid value: "10.96.0.0/12,fd00:10:96::/112": couldn't parse subnet, podSubnet: Invalid value: "10.244.0.0/16,fd00:10:244::/16": couldn't parse subnet, featureGates: Invalid value: map[string]bool{"IPv6DualStack":true}: IPv6DualStack is not a valid feature name., KubeProxyConfiguration.ClusterCIDR: Invalid value: "10.244.0.0/16,fd00:10:244::/16": must be a valid CIDR block (e.g. 10.100.0.0/16)]
W1112 17:54:23.639]  ✗ Starting control-plane 🕹️
W1112 17:54:23.640] ERROR: failed to create cluster: failed to init node with kubeadm: command "docker exec --privileged kind-control-plane kubeadm init --ignore-preflight-errors=all --config=/kind/kubeadm.conf --skip-token-print --v=6" failed with error: exit status 3
W1112 17:54:23.640] 
W1112 17:54:23.640] Output:
W1112 17:54:23.640] I1112 17:54:23.583672     131 initconfiguration.go:189] loading configuration from "/kind/kubeadm.conf"
W1112 17:54:23.641] [config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta2, Kind=JoinConfiguration
W1112 17:54:23.641] I1112 17:54:23.591263     131 feature_gate.go:216] feature gates: &{map[]}
W1112 17:54:23.641] [serviceSubnet: Invalid value: "10.96.0.0/12,fd00:10:96::/112": couldn't parse subnet, podSubnet: Invalid value: "10.244.0.0/16,fd00:10:244::/16": couldn't parse subnet, featureGates: Invalid value: map[string]bool{"IPv6DualStack":true}: IPv6DualStack is not a valid feature name., KubeProxyConfiguration.ClusterCIDR: Invalid value: "10.244.0.0/16,fd00:10:244::/16": must be a valid CIDR block (e.g. 10.100.0.0/16)]
... skipping 53 lines ...
W1112 17:54:26.739]     check(*cmd)
W1112 17:54:26.739]   File "/workspace/./test-infra/jenkins/../scenarios/execute.py", line 30, in check
W1112 17:54:26.739]     subprocess.check_call(cmd)
W1112 17:54:26.739]   File "/usr/lib/python2.7/subprocess.py", line 190, in check_call
W1112 17:54:26.739]     raise CalledProcessError(retcode, cmd)
W1112 17:54:26.740] subprocess.CalledProcessError: Command '('bash', '-c', 'cd ./../../k8s.io/kubernetes && ./../../sigs.k8s.io/kind/hack/ci/e2e.sh')' returned non-zero exit status 1
E1112 17:54:26.745] Command failed
I1112 17:54:26.745] process 661 exited with code 1 after 4.2m
E1112 17:54:26.746] FAIL: pull-kind-conformance-parallel-1-15
I1112 17:54:26.746] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W1112 17:54:27.781] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I1112 17:54:27.838] process 9026 exited with code 0 after 0.0m
I1112 17:54:27.838] Call:  gcloud config get-value account
I1112 17:54:28.163] process 9038 exited with code 0 after 0.0m
I1112 17:54:28.163] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I1112 17:54:28.163] Upload result and artifacts...
I1112 17:54:28.164] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/sigs.k8s.io_kind/692/pull-kind-conformance-parallel-1-15/1194311088900935683
I1112 17:54:28.164] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/sigs.k8s.io_kind/692/pull-kind-conformance-parallel-1-15/1194311088900935683/artifacts
W1112 17:54:29.420] CommandException: One or more URLs matched no objects.
E1112 17:54:29.539] Command failed
I1112 17:54:29.540] process 9050 exited with code 1 after 0.0m
W1112 17:54:29.540] Remote dir gs://kubernetes-jenkins/pr-logs/pull/sigs.k8s.io_kind/692/pull-kind-conformance-parallel-1-15/1194311088900935683/artifacts not exist yet
I1112 17:54:29.540] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/sigs.k8s.io_kind/692/pull-kind-conformance-parallel-1-15/1194311088900935683/artifacts
I1112 17:54:31.253] process 9194 exited with code 0 after 0.0m
W1112 17:54:31.253] metadata path /workspace/_artifacts/metadata.json does not exist
W1112 17:54:31.254] metadata not found or invalid, init with empty metadata
... skipping 23 lines ...