Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2019-08-19 22:14
Elapsed: 5m32s
Builder: gke-prow-ssd-pool-1a225945-khv0
Refs: master:01725d0e, 692:20659391
pod: 9649adea-c2ce-11e9-802f-1a170d45c31d
infra-commit: 1972ae8f9
repo: sigs.k8s.io/kind
repo-commit: 57874c7f32e0892acc744a98c228d8fe46881d3a
repos: {u'k8s.io/kubernetes': u'release-1.12', u'sigs.k8s.io/kind': u'master:01725d0edab1d5cdd4679a8af1c0b17fb74f0db8,692:2065939129f9c9f27462d4d58c1fb6d5248cd7dd'}

No Test Failures!


Error lines from build-log.txt

... skipping 922 lines ...
I0819 22:19:20.377] time="22:19:20" level=debug msg="Running: /usr/bin/docker [docker exec --privileged kind-control-plane cat /kind/version]"
I0819 22:19:20.687] time="22:19:20" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kind-control-plane]"
I0819 22:19:20.752] time="22:19:20" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kind-worker2]"
I0819 22:19:20.752] time="22:19:20" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kind-control-plane]"
I0819 22:19:20.752] time="22:19:20" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kind-worker]"
I0819 22:19:20.816] time="22:19:20" level=debug msg="Configuration Input data: {kind v1.12.11-beta.0.1+5f799a487b70ae 172.17.0.3:6443 6443 127.0.0.1 false 172.17.0.2 abcdef.0123456789abcdef 10.244.0.0/16,fd00:10:244::/16 10.96.0.0/12 false {}}"
I0819 22:19:20.819] time="22:19:20" level=debug msg="Configuration generated:\n # config generated by kind\napiVersion: kubeadm.k8s.io/v1alpha3\nkind: ClusterConfiguration\nmetadata:\n  name: config\nkubernetesVersion: v1.12.11-beta.0.1+5f799a487b70ae\nclusterName: \"kind\"\ncontrolPlaneEndpoint: \"172.17.0.3:6443\"\nnetworking:\n  podSubnet: \"10.244.0.0/16,fd00:10:244::/16\"\n  serviceSubnet: \"10.96.0.0/12\"\n# we need nsswitch.conf so we use /etc/hosts\n# https://github.com/kubernetes/kubernetes/issues/69195\napiServerExtraVolumes:\n- name: nsswitch\n  mountPath: /etc/nsswitch.conf\n  hostPath: /etc/nsswitch.conf\n  writeable: false\n  pathType: FileOrCreate\n# on docker for mac we have to expose the api server via port forward,\n# so we need to ensure the cert is valid for localhost so we can talk\n# to the cluster after rewriting the kubeconfig to point to localhost\napiServerCertSANs: [localhost, \"127.0.0.1\"]\ncontrollerManagerExtraArgs:\n  enable-hostpath-provisioner: \"true\"\nnetworking:\n  podSubnet: \"10.244.0.0/16,fd00:10:244::/16\"\n---\napiVersion: kubeadm.k8s.io/v1alpha3\nkind: InitConfiguration\nmetadata:\n  name: config\n# we use a well know token for TLS bootstrap\nbootstrapTokens:\n- token: \"abcdef.0123456789abcdef\"\n# we use a well know port for making the API server discoverable inside docker network. \n# from the host machine such port will be accessible via a random local port instead.\napiEndpoint:\n  advertiseAddress: \"172.17.0.2\"\n  bindPort: 6443\nnodeRegistration:\n  criSocket: \"/run/containerd/containerd.sock\"\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: \"172.17.0.2\"\n---\n# no-op entry that exists solely so it can be patched\napiVersion: kubeadm.k8s.io/v1alpha3\nkind: JoinConfiguration\nmetadata:\n  name: config\ndiscoveryTokenAPIServers: [\"172.17.0.3:6443\"]\ntoken: \"abcdef.0123456789abcdef\"\ndiscoveryTokenUnsafeSkipCAVerification: true\ncontrolPlane: false\n\nnodeRegistration:\n  criSocket: \"/run/containerd/containerd.sock\"\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: \"172.17.0.2\"\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nmetadata:\n  name: config\n# configure ipv6 addresses in IPv6 mode\n\n# disable disk resource management by default\n# kubelet will see the host disk that the inner container runtime\n# is ultimately backed by and attempt to recover disk space. we don't want that.\nimageGCHighThresholdPercent: 100\nevictionHard:\n  nodefs.available: \"0%\"\n  nodefs.inodesFree: \"0%\"\n  imagefs.available: \"0%\"\n---\n# no-op entry that exists solely so it can be patched\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\nmetadata:\n  name: config\n"
I0819 22:19:20.823] time="22:19:20" level=debug msg="Configuration Input data: {kind v1.12.11-beta.0.1+5f799a487b70ae 172.17.0.3:6443 6443 127.0.0.1 false 172.17.0.4 abcdef.0123456789abcdef 10.244.0.0/16,fd00:10:244::/16 10.96.0.0/12 false {}}"
I0819 22:19:20.827] time="22:19:20" level=debug msg="Configuration generated:\n # config generated by kind\napiVersion: kubeadm.k8s.io/v1alpha3\nkind: ClusterConfiguration\nmetadata:\n  name: config\nkubernetesVersion: v1.12.11-beta.0.1+5f799a487b70ae\nclusterName: \"kind\"\ncontrolPlaneEndpoint: \"172.17.0.3:6443\"\nnetworking:\n  podSubnet: \"10.244.0.0/16,fd00:10:244::/16\"\n  serviceSubnet: \"10.96.0.0/12\"\n# we need nsswitch.conf so we use /etc/hosts\n# https://github.com/kubernetes/kubernetes/issues/69195\napiServerExtraVolumes:\n- name: nsswitch\n  mountPath: /etc/nsswitch.conf\n  hostPath: /etc/nsswitch.conf\n  writeable: false\n  pathType: FileOrCreate\n# on docker for mac we have to expose the api server via port forward,\n# so we need to ensure the cert is valid for localhost so we can talk\n# to the cluster after rewriting the kubeconfig to point to localhost\napiServerCertSANs: [localhost, \"127.0.0.1\"]\ncontrollerManagerExtraArgs:\n  enable-hostpath-provisioner: \"true\"\nnetworking:\n  podSubnet: \"10.244.0.0/16,fd00:10:244::/16\"\n---\napiVersion: kubeadm.k8s.io/v1alpha3\nkind: InitConfiguration\nmetadata:\n  name: config\n# we use a well know token for TLS bootstrap\nbootstrapTokens:\n- token: \"abcdef.0123456789abcdef\"\n# we use a well know port for making the API server discoverable inside docker network. \n# from the host machine such port will be accessible via a random local port instead.\napiEndpoint:\n  advertiseAddress: \"172.17.0.4\"\n  bindPort: 6443\nnodeRegistration:\n  criSocket: \"/run/containerd/containerd.sock\"\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: \"172.17.0.4\"\n---\n# no-op entry that exists solely so it can be patched\napiVersion: kubeadm.k8s.io/v1alpha3\nkind: JoinConfiguration\nmetadata:\n  name: config\ndiscoveryTokenAPIServers: [\"172.17.0.3:6443\"]\ntoken: \"abcdef.0123456789abcdef\"\ndiscoveryTokenUnsafeSkipCAVerification: true\ncontrolPlane: false\n\nnodeRegistration:\n  criSocket: \"/run/containerd/containerd.sock\"\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: \"172.17.0.4\"\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nmetadata:\n  name: config\n# configure ipv6 addresses in IPv6 mode\n\n# disable disk resource management by default\n# kubelet will see the host disk that the inner container runtime\n# is ultimately backed by and attempt to recover disk space. we don't want that.\nimageGCHighThresholdPercent: 100\nevictionHard:\n  nodefs.available: \"0%\"\n  nodefs.inodesFree: \"0%\"\n  imagefs.available: \"0%\"\n---\n# no-op entry that exists solely so it can be patched\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\nmetadata:\n  name: config\n"
I0819 22:19:20.840]  ✗ Creating kubeadm config 📜
W0819 22:19:20.941] Error: failed to create cluster: failed to generate kubeadm config content: no matches for OriginalId kubeadm.k8s.io_v1beta2_ClusterConfiguration|~X|config; no matches for CurrentId kubeadm.k8s.io_v1beta2_ClusterConfiguration|~X|config; failed to find unique target for patch kubeadm.k8s.io_v1beta2_ClusterConfiguration|config
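
Note on the failure: the "Configuration generated" debug lines above show that kind rendered the kubeadm config with apiVersion kubeadm.k8s.io/v1alpha3 (the version kubeadm v1.12 accepts), and it then applies any user-supplied kubeadmConfigPatches to that document with kustomize (the "no matches for OriginalId/CurrentId ... failed to find unique target for patch" wording is kustomize's patch-target matching). The IDs in the error name kubeadm.k8s.io/v1beta2, so the patch in play targets a newer kubeadm API group/version than the one generated, and kustomize finds no document to apply it to, aborting "Creating kubeadm config". The sketch below is a hypothetical kind cluster config that would reproduce this mismatch; the actual patch used by this job is not shown in the log excerpt, so the patched field is illustrative only.

# Hypothetical kind cluster config (a minimal sketch, not the config this job used).
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
kubeadmConfigPatches:
# This patch targets kubeadm.k8s.io/v1beta2, but the generated document above is
# kubeadm.k8s.io/v1alpha3, so kustomize finds no matching group/version/kind/name.
- |
  apiVersion: kubeadm.k8s.io/v1beta2
  kind: ClusterConfiguration
  metadata:
    name: config
  apiServer:
    extraArgs:
      v: "4"
nodes:
- role: control-plane
- role: worker
- role: worker

Writing the patch against kubeadm.k8s.io/v1alpha3 (or selecting the patch based on the Kubernetes version under test, here release-1.12) would give kustomize a matching target document.
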
W0819 22:19:20.941] + cleanup
W0819 22:19:20.942] + kind export logs /workspace/_artifacts/logs
I0819 22:19:22.969] Exported logs to: /workspace/_artifacts/logs
W0819 22:19:23.069] + [[ true = true ]]
W0819 22:19:23.070] + kind delete cluster
I0819 22:19:23.171] Deleting cluster "kind" ...
... skipping 7 lines ...
W0819 22:19:24.154]     check(*cmd)
W0819 22:19:24.154]   File "/workspace/./test-infra/jenkins/../scenarios/execute.py", line 30, in check
W0819 22:19:24.156]     subprocess.check_call(cmd)
W0819 22:19:24.157]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0819 22:19:24.157]     raise CalledProcessError(retcode, cmd)
W0819 22:19:24.158] subprocess.CalledProcessError: Command '('bash', '-c', 'cd ./../../k8s.io/kubernetes && ./../../sigs.k8s.io/kind/hack/ci/e2e.sh')' returned non-zero exit status 1
E0819 22:19:24.161] Command failed
I0819 22:19:24.161] process 686 exited with code 1 after 4.3m
E0819 22:19:24.162] FAIL: pull-kind-conformance-parallel-1-12
I0819 22:19:24.162] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0819 22:19:25.331] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0819 22:19:25.392] process 10066 exited with code 0 after 0.0m
I0819 22:19:25.392] Call:  gcloud config get-value account
I0819 22:19:25.726] process 10078 exited with code 0 after 0.0m
I0819 22:19:25.726] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0819 22:19:25.727] Upload result and artifacts...
I0819 22:19:25.727] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/sigs.k8s.io_kind/692/pull-kind-conformance-parallel-1-12/1163574821339009027
I0819 22:19:25.727] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/sigs.k8s.io_kind/692/pull-kind-conformance-parallel-1-12/1163574821339009027/artifacts
W0819 22:19:27.435] CommandException: One or more URLs matched no objects.
E0819 22:19:27.563] Command failed
I0819 22:19:27.564] process 10090 exited with code 1 after 0.0m
W0819 22:19:27.564] Remote dir gs://kubernetes-jenkins/pr-logs/pull/sigs.k8s.io_kind/692/pull-kind-conformance-parallel-1-12/1163574821339009027/artifacts not exist yet
I0819 22:19:27.564] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/sigs.k8s.io_kind/692/pull-kind-conformance-parallel-1-12/1163574821339009027/artifacts
I0819 22:19:29.515] process 10232 exited with code 0 after 0.0m
W0819 22:19:29.515] metadata path /workspace/_artifacts/metadata.json does not exist
W0819 22:19:29.516] metadata not found or invalid, init with empty metadata
... skipping 23 lines ...