Result: FAILURE
Tests: 0 failed / 87 succeeded
Started: 2020-02-07 02:21
Elapsed: 40m55s
Revision: release-1.15
resultstore: https://source.cloud.google.com/results/invocations/20be5dad-fcb5-447f-9dca-27a7ccd88d49/targets/test

No Test Failures!


87 passed tests / 1154 skipped tests

Error lines from build-log.txt

... skipping 335 lines ...
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /kind/systemd/kubelet.service.
Created symlink /etc/systemd/system/kubelet.service → /kind/systemd/kubelet.service.
time="02:29:17" level=debug msg="Running: [docker exec kind-build-97537e2d-b646-48df-9fa8-1fbfaf4c3cac mkdir -p /etc/systemd/system/kubelet.service.d]"
time="02:29:18" level=info msg="Adding /etc/systemd/system/kubelet.service.d/10-kubeadm.conf to the image"
time="02:29:18" level=debug msg="Running: [docker exec kind-build-97537e2d-b646-48df-9fa8-1fbfaf4c3cac cp /alter/bits/systemd/10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf]"
time="02:29:19" level=debug msg="Running: [docker exec kind-build-97537e2d-b646-48df-9fa8-1fbfaf4c3cac chown -R root:root /etc/systemd/system/kubelet.service.d/10-kubeadm.conf]"
time="02:29:20" level=debug msg="Running: [docker exec kind-build-97537e2d-b646-48df-9fa8-1fbfaf4c3cac /bin/sh -c echo \"KUBELET_EXTRA_ARGS=--fail-swap-on=false\" >> /etc/default/kubelet]"
time="02:29:22" level=debug msg="Running: [docker exec kind-build-97537e2d-b646-48df-9fa8-1fbfaf4c3cac mkdir -p /kinder]"
time="02:29:23" level=debug msg="Running: [docker exec kind-build-97537e2d-b646-48df-9fa8-1fbfaf4c3cac rsync -r /alter/bits/upgrade /kinder]"
time="02:29:41" level=debug msg="Running: [docker exec kind-build-97537e2d-b646-48df-9fa8-1fbfaf4c3cac chown -R root:root /kinder/upgrade]"
time="02:29:42" level=debug msg="Running: [docker exec kind-build-97537e2d-b646-48df-9fa8-1fbfaf4c3cac /bin/sh -c which docker || true]"
time="02:29:43" level=info msg="Detected docker as container runtime"
time="02:29:43" level=info msg="Pre loading images ..."
... skipping 176 lines ...
kinder-upgrade-control-plane-1:$ Preparing /kind/kubeadm.conf
time="02:32:10" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-upgrade-control-plane-1]"
time="02:32:10" level=debug msg="Running: [docker exec kinder-upgrade-control-plane-1 kubeadm version -o=short]"
time="02:32:11" level=debug msg="Preparing kubeadm config v1beta1 (kubeadm version 1.14.11-beta.1.2+c8b135d0b49c44)"
time="02:32:11" level=debug msg="Preparing dockerPatch for kubeadm config v1beta1 (kubeadm version 1.14.11-beta.1.2+c8b135d0b49c44)"
time="02:32:11" level=debug msg="Preparing automaticCopyCertsPatches for kubeadm config v1beta1 (kubeadm version 1.14.11-beta.1.2+c8b135d0b49c44)"
time="02:32:11" level=debug msg="generated config:\napiServer:\n  certSANs:\n  - localhost\n  - 172.17.0.4\napiVersion: kubeadm.k8s.io/v1beta1\nclusterName: kinder-upgrade\ncontrolPlaneEndpoint: 172.17.0.7:6443\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\nkind: ClusterConfiguration\nkubernetesVersion: v1.14.11-beta.1.2+c8b135d0b49c44\nnetworking:\n  podSubnet: 192.168.0.0/16\n  serviceSubnet: \"\"\nscheduler:\n  extraArgs: null\n---\napiVersion: kubeadm.k8s.io/v1beta1\nbootstrapTokens:\n- token: abcdef.0123456789abcdef\nkind: InitConfiguration\nlocalAPIEndpoint:\n  advertiseAddress: 172.17.0.4\n  bindPort: 6443\nnodeRegistration:\n  criSocket: /var/run/dockershim.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.4\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nevictionHard:\n  imagefs.available: 0%\n  nodefs.available: 0%\n  nodefs.inodesFree: 0%\nimageGCHighThresholdPercent: 100\nkind: KubeletConfiguration\n---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
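For readability, the escaped `generated config` payload in the debug line above unescapes to the following YAML (content reproduced verbatim from the log line, no fields added):

```yaml
apiServer:
  certSANs:
  - localhost
  - 172.17.0.4
apiVersion: kubeadm.k8s.io/v1beta1
clusterName: kinder-upgrade
controlPlaneEndpoint: 172.17.0.7:6443
controllerManager:
  extraArgs:
    enable-hostpath-provisioner: "true"
kind: ClusterConfiguration
kubernetesVersion: v1.14.11-beta.1.2+c8b135d0b49c44
networking:
  podSubnet: 192.168.0.0/16
  serviceSubnet: ""
scheduler:
  extraArgs: null
---
apiVersion: kubeadm.k8s.io/v1beta1
bootstrapTokens:
- token: abcdef.0123456789abcdef
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.0.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.4
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
  nodefs.inodesFree: 0%
imageGCHighThresholdPercent: 100
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
```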
time="02:32:11" level=debug msg="Running: [docker cp /tmp/kinder-upgrade-control-plane-1-270957191 kinder-upgrade-control-plane-1:/kind/kubeadm.conf]"

kinder-upgrade-lb:$ Updating load balancer configuration with 1 control plane backends
time="02:32:12" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-upgrade-control-plane-1]"
time="02:32:12" level=debug msg="Writing loadbalancer config on kinder-upgrade-lb..."
time="02:32:12" level=debug msg="Running: [docker cp /tmp/kinder-upgrade-lb-351001146 kinder-upgrade-lb:/usr/local/etc/haproxy/haproxy.cfg]"
... skipping 32 lines ...
I0207 02:32:15.426196     555 checks.go:382] validating the presence of executable ebtables
I0207 02:32:15.426256     555 checks.go:382] validating the presence of executable ethtool
I0207 02:32:15.426314     555 checks.go:382] validating the presence of executable socat
I0207 02:32:15.426372     555 checks.go:382] validating the presence of executable tc
I0207 02:32:15.426420     555 checks.go:382] validating the presence of executable touch
I0207 02:32:15.426474     555 checks.go:524] running all checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
DOCKER_VERSION: 18.09.4
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0207 02:32:15.526059     555 checks.go:412] checking whether the given node name is reachable using net.LookupHost
I0207 02:32:15.526350     555 checks.go:622] validating kubelet version
I0207 02:32:15.695564     555 checks.go:131] validating if the service is enabled and active
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
... skipping 381 lines ...
kinder-upgrade-control-plane-2:$ Preparing /kind/kubeadm.conf
time="02:34:27" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-upgrade-control-plane-2]"
time="02:34:28" level=debug msg="Running: [docker exec kinder-upgrade-control-plane-2 kubeadm version -o=short]"
time="02:34:28" level=debug msg="Preparing kubeadm config v1beta1 (kubeadm version 1.14.11-beta.1.2+c8b135d0b49c44)"
time="02:34:28" level=debug msg="Preparing dockerPatch for kubeadm config v1beta1 (kubeadm version 1.14.11-beta.1.2+c8b135d0b49c44)"
time="02:34:28" level=debug msg="Preparing automaticCopyCertsPatches for kubeadm config v1beta1 (kubeadm version 1.14.11-beta.1.2+c8b135d0b49c44)"
time="02:34:28" level=debug msg="generated config:\napiVersion: kubeadm.k8s.io/v1beta1\ncontrolPlane:\n  localAPIEndpoint:\n    advertiseAddress: 172.17.0.3\n    bindPort: 6443\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.7:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /var/run/dockershim.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.3\n"
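Decoded, the escaped `JoinConfiguration` payload above is the following YAML (verbatim from the log line; the later control-plane joins differ only in `advertiseAddress` and `node-ip`):

```yaml
apiVersion: kubeadm.k8s.io/v1beta1
controlPlane:
  localAPIEndpoint:
    advertiseAddress: 172.17.0.3
    bindPort: 6443
discovery:
  bootstrapToken:
    apiServerEndpoint: 172.17.0.7:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.3
```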
time="02:34:28" level=debug msg="Running: [docker cp /tmp/kinder-upgrade-control-plane-2-098931702 kinder-upgrade-control-plane-2:/kind/kubeadm.conf]"
time="02:34:29" level=debug msg="Running: [docker exec kinder-upgrade-control-plane-2 kubeadm version -o=short]"

kinder-upgrade-control-plane-2:$ kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables --certificate-key=0123456789012345678901234567890123456789012345678901234567890123
time="02:34:30" level=debug msg="Running: [docker exec kinder-upgrade-control-plane-2 kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables --certificate-key=0123456789012345678901234567890123456789012345678901234567890123]"
I0207 02:34:31.192390     743 join.go:367] [preflight] found NodeName empty; using OS hostname as NodeName
... skipping 17 lines ...
I0207 02:34:31.784687     743 checks.go:382] validating the presence of executable ebtables
I0207 02:34:31.784758     743 checks.go:382] validating the presence of executable ethtool
I0207 02:34:31.784827     743 checks.go:382] validating the presence of executable socat
I0207 02:34:31.784870     743 checks.go:382] validating the presence of executable tc
I0207 02:34:31.785021     743 checks.go:382] validating the presence of executable touch
I0207 02:34:31.785425     743 checks.go:524] running all checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
DOCKER_VERSION: 18.09.4
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0207 02:34:31.868882     743 checks.go:412] checking whether the given node name is reachable using net.LookupHost
I0207 02:34:31.869253     743 checks.go:622] validating kubelet version
I0207 02:34:32.024356     743 checks.go:131] validating if the service is enabled and active
I0207 02:34:32.058404     743 checks.go:209] validating availability of port 10250
I0207 02:34:32.059254     743 checks.go:439] validating if the connectivity type is via proxy or direct
I0207 02:34:32.059314     743 join.go:427] [preflight] Discovering cluster-info
... skipping 188 lines ...
kinder-upgrade-control-plane-3:$ Preparing /kind/kubeadm.conf
time="02:35:24" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-upgrade-control-plane-3]"
time="02:35:24" level=debug msg="Running: [docker exec kinder-upgrade-control-plane-3 kubeadm version -o=short]"
time="02:35:25" level=debug msg="Preparing kubeadm config v1beta1 (kubeadm version 1.14.11-beta.1.2+c8b135d0b49c44)"
time="02:35:25" level=debug msg="Preparing dockerPatch for kubeadm config v1beta1 (kubeadm version 1.14.11-beta.1.2+c8b135d0b49c44)"
time="02:35:25" level=debug msg="Preparing automaticCopyCertsPatches for kubeadm config v1beta1 (kubeadm version 1.14.11-beta.1.2+c8b135d0b49c44)"
time="02:35:25" level=debug msg="generated config:\napiVersion: kubeadm.k8s.io/v1beta1\ncontrolPlane:\n  localAPIEndpoint:\n    advertiseAddress: 172.17.0.2\n    bindPort: 6443\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.7:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /var/run/dockershim.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.2\n"
time="02:35:25" level=debug msg="Running: [docker cp /tmp/kinder-upgrade-control-plane-3-020209304 kinder-upgrade-control-plane-3:/kind/kubeadm.conf]"
time="02:35:26" level=debug msg="Running: [docker exec kinder-upgrade-control-plane-3 kubeadm version -o=short]"

kinder-upgrade-control-plane-3:$ kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables --certificate-key=0123456789012345678901234567890123456789012345678901234567890123
time="02:35:27" level=debug msg="Running: [docker exec kinder-upgrade-control-plane-3 kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables --certificate-key=0123456789012345678901234567890123456789012345678901234567890123]"
I0207 02:35:28.380206     787 join.go:367] [preflight] found NodeName empty; using OS hostname as NodeName
... skipping 17 lines ...
I0207 02:35:29.061323     787 checks.go:382] validating the presence of executable ebtables
I0207 02:35:29.061359     787 checks.go:382] validating the presence of executable ethtool
I0207 02:35:29.061406     787 checks.go:382] validating the presence of executable socat
I0207 02:35:29.061434     787 checks.go:382] validating the presence of executable tc
I0207 02:35:29.061467     787 checks.go:382] validating the presence of executable touch
I0207 02:35:29.061526     787 checks.go:524] running all checks
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0207 02:35:29.139340     787 checks.go:412] checking whether the given node name is reachable using net.LookupHost
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
DOCKER_VERSION: 18.09.4
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
... skipping 202 lines ...

kinder-upgrade-worker-1:$ Preparing /kind/kubeadm.conf
time="02:36:30" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-upgrade-worker-1]"
time="02:36:30" level=debug msg="Running: [docker exec kinder-upgrade-worker-1 kubeadm version -o=short]"
time="02:36:31" level=debug msg="Preparing kubeadm config v1beta1 (kubeadm version 1.14.11-beta.1.2+c8b135d0b49c44)"
time="02:36:31" level=debug msg="Preparing dockerPatch for kubeadm config v1beta1 (kubeadm version 1.14.11-beta.1.2+c8b135d0b49c44)"
time="02:36:31" level=debug msg="generated config:\napiVersion: kubeadm.k8s.io/v1beta1\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.7:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /var/run/dockershim.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.6\n"
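The worker's escaped `JoinConfiguration` payload above decodes to the YAML below (verbatim from the log line). Note it carries no `controlPlane` stanza, unlike the control-plane joins earlier in the log:

```yaml
apiVersion: kubeadm.k8s.io/v1beta1
discovery:
  bootstrapToken:
    apiServerEndpoint: 172.17.0.7:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.6
```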
time="02:36:31" level=debug msg="Running: [docker cp /tmp/kinder-upgrade-worker-1-444226442 kinder-upgrade-worker-1:/kind/kubeadm.conf]"

kinder-upgrade-worker-1:$ kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables
time="02:36:32" level=debug msg="Running: [docker exec kinder-upgrade-worker-1 kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]"
I0207 02:36:33.557933     812 join.go:367] [preflight] found NodeName empty; using OS hostname as NodeName
I0207 02:36:33.557980     812 joinconfiguration.go:75] loading configuration from "/kind/kubeadm.conf"
... skipping 16 lines ...
I0207 02:36:34.320408     812 checks.go:382] validating the presence of executable ebtables
I0207 02:36:34.320441     812 checks.go:382] validating the presence of executable ethtool
I0207 02:36:34.320478     812 checks.go:382] validating the presence of executable socat
I0207 02:36:34.320506     812 checks.go:382] validating the presence of executable tc
I0207 02:36:34.320539     812 checks.go:382] validating the presence of executable touch
I0207 02:36:34.320583     812 checks.go:524] running all checks
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0207 02:36:34.412557     812 checks.go:412] checking whether the given node name is reachable using net.LookupHost
I0207 02:36:34.412858     812 checks.go:622] validating kubelet version
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
DOCKER_VERSION: 18.09.4
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
... skipping 91 lines ...

kinder-upgrade-worker-2:$ Preparing /kind/kubeadm.conf
time="02:37:14" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-upgrade-worker-2]"
time="02:37:14" level=debug msg="Running: [docker exec kinder-upgrade-worker-2 kubeadm version -o=short]"
time="02:37:15" level=debug msg="Preparing kubeadm config v1beta1 (kubeadm version 1.14.11-beta.1.2+c8b135d0b49c44)"
time="02:37:15" level=debug msg="Preparing dockerPatch for kubeadm config v1beta1 (kubeadm version 1.14.11-beta.1.2+c8b135d0b49c44)"
time="02:37:15" level=debug msg="generated config:\napiVersion: kubeadm.k8s.io/v1beta1\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.7:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /var/run/dockershim.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.5\n"
time="02:37:15" level=debug msg="Running: [docker cp /tmp/kinder-upgrade-worker-2-541661537 kinder-upgrade-worker-2:/kind/kubeadm.conf]"

kinder-upgrade-worker-2:$ kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables
time="02:37:16" level=debug msg="Running: [docker exec kinder-upgrade-worker-2 kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]"
I0207 02:37:17.264943     882 join.go:367] [preflight] found NodeName empty; using OS hostname as NodeName
I0207 02:37:17.264984     882 joinconfiguration.go:75] loading configuration from "/kind/kubeadm.conf"
... skipping 16 lines ...
I0207 02:37:18.036762     882 checks.go:382] validating the presence of executable ebtables
I0207 02:37:18.036802     882 checks.go:382] validating the presence of executable ethtool
I0207 02:37:18.037086     882 checks.go:382] validating the presence of executable socat
I0207 02:37:18.037766     882 checks.go:382] validating the presence of executable tc
I0207 02:37:18.037909     882 checks.go:382] validating the presence of executable touch
I0207 02:37:18.037990     882 checks.go:524] running all checks
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0207 02:37:18.132036     882 checks.go:412] checking whether the given node name is reachable using net.LookupHost
I0207 02:37:18.132300     882 checks.go:622] validating kubelet version
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
DOCKER_VERSION: 18.09.4
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
... skipping 1522 lines ...
  _output/local/go/src/k8s.io/kubernetes/test/e2e_kubeadm/bootstrap_signer.go:40
[AfterEach] [k8s.io] [sig-cluster-lifecycle] [area-kubeadm] bootstrap signer
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 02:46:30.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
•
Ran 29 of 34 Specs in 0.541 seconds
SUCCESS! -- 29 Passed | 0 Failed | 0 Pending | 5 Skipped
PASS

Ginkgo ran 1 suite in 694.711261ms
Test Suite Passed
[--skip=\[copy-certs\] /home/prow/go/src/k8s.io/kubernetes/_output/bin/e2e_kubeadm.test -- --report-prefix=e2e-kubeadm --kubeconfig=/root/.kube/kind-config-kinder-upgrade --report-dir=/logs/artifacts]
 completed!
... skipping 548 lines ...
[BeforeEach] [sig-network] Services
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-1331
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1331 to expose endpoints map[]
Feb  7 02:49:58.886: INFO: Get endpoints failed (18.077555ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Feb  7 02:49:59.893: INFO: successfully validated that service endpoint-test2 in namespace services-1331 exposes endpoints map[] (1.025715612s elapsed)
STEP: Creating pod pod1 in namespace services-1331
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1331 to expose endpoints map[pod1:[80]]
Feb  7 02:50:04.070: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.159806031s elapsed, will retry)
Feb  7 02:50:06.117: INFO: successfully validated that service endpoint-test2 in namespace services-1331 exposes endpoints map[pod1:[80]] (6.20695891s elapsed)
STEP: Creating pod pod2 in namespace services-1331
... skipping 849 lines ...
[BeforeEach] [sig-network] Services
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-696
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-696 to expose endpoints map[]
Feb  7 02:51:05.037: INFO: Get endpoints failed (40.718066ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Feb  7 02:51:06.046: INFO: successfully validated that service multi-endpoint-test in namespace services-696 exposes endpoints map[] (1.049124466s elapsed)
STEP: Creating pod pod1 in namespace services-696
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-696 to expose endpoints map[pod1:[100]]
Feb  7 02:51:10.195: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.123526864s elapsed, will retry)
Feb  7 02:51:11.210: INFO: successfully validated that service multi-endpoint-test in namespace services-696 exposes endpoints map[pod1:[100]] (5.138278996s elapsed)
STEP: Creating pod pod2 in namespace services-696
... skipping 185 lines ...
Feb  7 02:51:26.206: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8587.svc.cluster.local from pod dns-8587/dns-test-2bb8378c-c9f0-4de3-aa40-dd0ffe703a53: the server could not find the requested resource (get pods dns-test-2bb8378c-c9f0-4de3-aa40-dd0ffe703a53)
Feb  7 02:51:26.218: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8587.svc.cluster.local from pod dns-8587/dns-test-2bb8378c-c9f0-4de3-aa40-dd0ffe703a53: the server could not find the requested resource (get pods dns-test-2bb8378c-c9f0-4de3-aa40-dd0ffe703a53)
Feb  7 02:51:26.264: INFO: Unable to read jessie_udp@dns-test-service.dns-8587.svc.cluster.local from pod dns-8587/dns-test-2bb8378c-c9f0-4de3-aa40-dd0ffe703a53: the server could not find the requested resource (get pods dns-test-2bb8378c-c9f0-4de3-aa40-dd0ffe703a53)
Feb  7 02:51:26.271: INFO: Unable to read jessie_tcp@dns-test-service.dns-8587.svc.cluster.local from pod dns-8587/dns-test-2bb8378c-c9f0-4de3-aa40-dd0ffe703a53: the server could not find the requested resource (get pods dns-test-2bb8378c-c9f0-4de3-aa40-dd0ffe703a53)
Feb  7 02:51:26.288: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8587.svc.cluster.local from pod dns-8587/dns-test-2bb8378c-c9f0-4de3-aa40-dd0ffe703a53: the server could not find the requested resource (get pods dns-test-2bb8378c-c9f0-4de3-aa40-dd0ffe703a53)
Feb  7 02:51:26.296: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8587.svc.cluster.local from pod dns-8587/dns-test-2bb8378c-c9f0-4de3-aa40-dd0ffe703a53: the server could not find the requested resource (get pods dns-test-2bb8378c-c9f0-4de3-aa40-dd0ffe703a53)
Feb  7 02:51:26.343: INFO: Lookups using dns-8587/dns-test-2bb8378c-c9f0-4de3-aa40-dd0ffe703a53 failed for: [wheezy_udp@dns-test-service.dns-8587.svc.cluster.local wheezy_tcp@dns-test-service.dns-8587.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8587.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8587.svc.cluster.local jessie_udp@dns-test-service.dns-8587.svc.cluster.local jessie_tcp@dns-test-service.dns-8587.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8587.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8587.svc.cluster.local]

Feb  7 02:51:31.596: INFO: DNS probes using dns-8587/dns-test-2bb8378c-c9f0-4de3-aa40-dd0ffe703a53 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 1021 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-9866
STEP: Creating statefulset with conflicting port in namespace statefulset-9866
STEP: Waiting until pod test-pod will start running in namespace statefulset-9866
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-9866
Feb  7 02:53:45.500: INFO: Observed stateful pod in namespace: statefulset-9866, name: ss-0, uid: 3fdd2a4b-42bc-4583-a56e-87f8e0dffcf2, status phase: Pending. Waiting for statefulset controller to delete.
Feb  7 02:53:46.064: INFO: Observed stateful pod in namespace: statefulset-9866, name: ss-0, uid: 3fdd2a4b-42bc-4583-a56e-87f8e0dffcf2, status phase: Failed. Waiting for statefulset controller to delete.
Feb  7 02:53:46.073: INFO: Observed stateful pod in namespace: statefulset-9866, name: ss-0, uid: 3fdd2a4b-42bc-4583-a56e-87f8e0dffcf2, status phase: Failed. Waiting for statefulset controller to delete.
Feb  7 02:53:46.080: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-9866
STEP: Removing pod with conflicting port in namespace statefulset-9866
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-9866 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb  7 02:53:54.149: INFO: Deleting all statefulset in ns statefulset-9866
... skipping 23 lines ...
STEP: Creating a kubernetes client
Feb  7 02:54:03.005: INFO: >>> kubeConfig: /root/.kube/kind-config-kinder-upgrade
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb  7 02:54:03.069: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 02:54:13.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 2 lines ...
Feb  7 02:54:19.811: INFO: namespace init-container-5200 deletion completed in 6.218302949s


• [SLOW TEST:16.807 seconds]
[k8s.io] InitContainer [NodeConformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
... skipping 1810 lines ...
Feb  7 02:57:04.697: INFO: >>> kubeConfig: /root/.kube/kind-config-kinder-upgrade
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb  7 02:57:09.816: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
... skipping 1329 lines ...
[BeforeEach] [sig-node] ConfigMap
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 02:59:03.106: INFO: >>> kubeConfig: /root/.kube/kind-config-kinder-upgrade
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-427478ee-80c4-4108-a6f9-b00b7255281a
[AfterEach] [sig-node] ConfigMap
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 02:59:03.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6770" for this suite.
Feb  7 02:59:09.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 02:59:09.396: INFO: namespace configmap-6770 deletion completed in 6.202420638s


• [SLOW TEST:6.290 seconds]
[sig-node] ConfigMap
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
... skipping 645 lines ...
STEP: Creating a kubernetes client
Feb  7 02:58:29.341: INFO: >>> kubeConfig: /root/.kube/kind-config-kinder-upgrade
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb  7 02:58:29.405: INFO: PodSpec: initContainers in spec.initContainers
Feb  7 02:59:22.029: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-040f9154-5ae9-4ca1-bb17-e30b1c973d72", GenerateName:"", Namespace:"init-container-3364", SelfLink:"/api/v1/namespaces/init-container-3364/pods/pod-init-040f9154-5ae9-4ca1-bb17-e30b1c973d72", UID:"565c5ea1-689c-4ee9-ae61-40b2201856f4", ResourceVersion:"12706", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63716641109, loc:(*time.Location)(0x7eb1a20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"405676538"}, Annotations:map[string]string{"cni.projectcalico.org/podIP":"192.168.48.233/32"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-wdm4b", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc003246980), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-wdm4b", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-wdm4b", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-wdm4b", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0026990d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kinder-upgrade-worker-2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), 
SecurityContext:(*v1.PodSecurityContext)(0xc003052ae0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002699150)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002699170)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002699178), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00269917c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716641109, loc:(*time.Location)(0x7eb1a20)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716641109, loc:(*time.Location)(0x7eb1a20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716641109, loc:(*time.Location)(0x7eb1a20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716641109, loc:(*time.Location)(0x7eb1a20)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.5", PodIP:"192.168.48.233", StartTime:(*v1.Time)(0xc0025408a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0028cb2d0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0028cb340)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://850a78f958a8021f7db36440c6d68c47c0e0b1110e7cecdc857f564f33862053"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0025408e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0025408c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 02:59:22.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3364" for this suite.
Feb  7 02:59:44.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 02:59:44.238: INFO: namespace init-container-3364 deletion completed in 22.196955682s


• [SLOW TEST:74.898 seconds]
[k8s.io] InitContainer [NodeConformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
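The InitContainer test above relies on the kubelet's ordering rule: init containers run one at a time, in order, and each must exit successfully before the next starts; under restartPolicy Always a failing init container (here, init1 running /bin/false) is retried indefinitely, so the app containers never start and the pod stays Pending. A simplified sketch of that gating logic — the function and its return strings are illustrative, not the kubelet's actual implementation:

```go
package main

import "fmt"

// podPhase sketches how app containers are gated on init-container
// success: the first failing init container blocks everything after it.
// initResults[i] is whether init container i exited successfully.
func podPhase(initResults []bool) string {
	for i, ok := range initResults {
		if !ok {
			return fmt.Sprintf("Pending: init container %d failing, app containers not started", i)
		}
	}
	return "Running: all init containers succeeded"
}

func main() {
	// Mirrors the test pod: init1 (/bin/false) keeps failing, so init2
	// and the app container run1 never start.
	fmt.Println(podPhase([]bool{false, true}))
	fmt.Println(podPhase([]bool{true, true}))
}
```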
[BeforeEach] [sig-storage] Projected secret
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
... skipping 1246 lines ...
[BeforeEach] [sig-api-machinery] Secrets
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 03:01:51.697: INFO: >>> kubeConfig: /root/.kube/kind-config-kinder-upgrade
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-c6097754-418c-48e3-a0a7-f8adebe62815
[AfterEach] [sig-api-machinery] Secrets
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 03:01:51.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3812" for this suite.
Feb  7 03:01:59.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 03:02:00.164: INFO: namespace secrets-3812 deletion completed in 8.317432092s


• [SLOW TEST:8.467 seconds]
[sig-api-machinery] Secrets
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Subpath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
... skipping 81 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
Feb  7 03:02:09.919: INFO: Running AfterSuite actions on all nodes

{"component":"entrypoint","file":"prow/entrypoint/run.go:164","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 40m0s timeout","time":"2020-02-07T03:02:10Z"}

[BeforeEach] [k8s.io] Pods
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  7 03:01:54.290: INFO: >>> kubeConfig: /root/.kube/kind-config-kinder-upgrade
STEP: Building a namespace api object, basename pods
... skipping 7 lines ...
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb  7 03:02:03.014: INFO: Successfully updated pod "pod-update-activedeadlineseconds-e546c6ca-ffda-4e9d-b644-f37a8e041fb6"
Feb  7 03:02:03.014: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-e546c6ca-ffda-4e9d-b644-f37a8e041fb6" in namespace "pods-5403" to be "terminated due to deadline exceeded"
Feb  7 03:02:03.026: INFO: Pod "pod-update-activedeadlineseconds-e546c6ca-ffda-4e9d-b644-f37a8e041fb6": Phase="Running", Reason="", readiness=true. Elapsed: 12.389496ms
Feb  7 03:02:05.045: INFO: Pod "pod-update-activedeadlineseconds-e546c6ca-ffda-4e9d-b644-f37a8e041fb6": Phase="Running", Reason="", readiness=true. Elapsed: 2.030859005s
Feb  7 03:02:07.051: INFO: Pod "pod-update-activedeadlineseconds-e546c6ca-ffda-4e9d-b644-f37a8e041fb6": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.03710354s
Feb  7 03:02:07.051: INFO: Pod "pod-update-activedeadlineseconds-e546c6ca-ffda-4e9d-b644-f37a8e041fb6" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  7 03:02:07.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5403" for this suite.
Feb  7 03:02:13.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 5 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
Feb  7 03:02:13.292: INFO: Running AfterSuite actions on all nodes

{"component":"entrypoint","file":"prow/entrypoint/run.go:245","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15s grace period","time":"2020-02-07T03:02:25Z"}