Result: FAILURE
Tests: 0 failed / 202 succeeded
Started: 2020-03-19 13:02
Elapsed: 40m18s
Revision: release-1.16
Resultstore: https://source.cloud.google.com/results/invocations/c8982924-01b1-45a2-9d91-e3cb8dad1605/targets/test

No test failures.

Passed tests: 202
Skipped tests: 3148

Error lines from build-log.txt

... skipping 284 lines ...
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /kind/systemd/kubelet.service.
Created symlink /etc/systemd/system/kubelet.service → /kind/systemd/kubelet.service.
time="13:03:55" level=debug msg="Running: [docker exec kind-build-1c603742-b467-4ab8-a008-9a79154bf1a2 mkdir -p /etc/systemd/system/kubelet.service.d]"
time="13:03:56" level=info msg="Adding /etc/systemd/system/kubelet.service.d/10-kubeadm.conf to the image"
time="13:03:56" level=debug msg="Running: [docker exec kind-build-1c603742-b467-4ab8-a008-9a79154bf1a2 cp /alter/bits/systemd/10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf]"
time="13:03:56" level=debug msg="Running: [docker exec kind-build-1c603742-b467-4ab8-a008-9a79154bf1a2 chown -R root:root /etc/systemd/system/kubelet.service.d/10-kubeadm.conf]"
time="13:03:57" level=debug msg="Running: [docker exec kind-build-1c603742-b467-4ab8-a008-9a79154bf1a2 /bin/sh -c echo \"KUBELET_EXTRA_ARGS=--fail-swap-on=false\" >> /etc/default/kubelet]"
time="13:03:57" level=debug msg="Running: [docker exec kind-build-1c603742-b467-4ab8-a008-9a79154bf1a2 mkdir -p /kinder]"
time="13:03:58" level=debug msg="Running: [docker exec kind-build-1c603742-b467-4ab8-a008-9a79154bf1a2 rsync -r /alter/bits/upgrade /kinder]"
time="13:04:09" level=debug msg="Running: [docker exec kind-build-1c603742-b467-4ab8-a008-9a79154bf1a2 chown -R root:root /kinder/upgrade]"
time="13:04:09" level=debug msg="Running: [docker exec kind-build-1c603742-b467-4ab8-a008-9a79154bf1a2 /bin/sh -c which docker || true]"
time="13:04:10" level=info msg="Detected docker as container runtime"
time="13:04:10" level=info msg="Pre loading images ..."
... skipping 174 lines ...
kinder-upgrade-control-plane-1:$ Preparing /kind/kubeadm.conf
time="13:05:48" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-upgrade-control-plane-1]"
time="13:05:48" level=debug msg="Running: [docker exec kinder-upgrade-control-plane-1 kubeadm version -o=short]"
time="13:05:49" level=debug msg="Preparing kubeadm config v1beta2 (kubeadm version 1.15.12-beta.0.7+7f18f85e0e8bcc)"
time="13:05:49" level=debug msg="Preparing dockerPatch for kubeadm config v1beta2 (kubeadm version 1.15.12-beta.0.7+7f18f85e0e8bcc)"
time="13:05:49" level=debug msg="Preparing automaticCopyCertsPatches for kubeadm config v1beta2 (kubeadm version 1.15.12-beta.0.7+7f18f85e0e8bcc)"
time="13:05:49" level=debug msg="generated config:\napiServer:\n  certSANs:\n  - localhost\n  - 172.17.0.4\napiVersion: kubeadm.k8s.io/v1beta2\nclusterName: kinder-upgrade\ncontrolPlaneEndpoint: 172.17.0.7:6443\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\nkind: ClusterConfiguration\nkubernetesVersion: v1.15.12-beta.0.7+7f18f85e0e8bcc\nnetworking:\n  podSubnet: 192.168.0.0/16\n  serviceSubnet: \"\"\nscheduler:\n  extraArgs: null\n---\napiVersion: kubeadm.k8s.io/v1beta2\nbootstrapTokens:\n- token: abcdef.0123456789abcdef\ncertificateKey: \"0123456789012345678901234567890123456789012345678901234567890123\"\nkind: InitConfiguration\nlocalAPIEndpoint:\n  advertiseAddress: 172.17.0.4\n  bindPort: 6443\nnodeRegistration:\n  criSocket: /var/run/dockershim.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.4\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nevictionHard:\n  imagefs.available: 0%\n  nodefs.available: 0%\n  nodefs.inodesFree: 0%\nimageGCHighThresholdPercent: 100\nkind: KubeletConfiguration\n---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
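The escaped config in the debug line above, unescaped into YAML for readability (content is verbatim from the log, nothing added):

```yaml
apiServer:
  certSANs:
  - localhost
  - 172.17.0.4
apiVersion: kubeadm.k8s.io/v1beta2
clusterName: kinder-upgrade
controlPlaneEndpoint: 172.17.0.7:6443
controllerManager:
  extraArgs:
    enable-hostpath-provisioner: "true"
kind: ClusterConfiguration
kubernetesVersion: v1.15.12-beta.0.7+7f18f85e0e8bcc
networking:
  podSubnet: 192.168.0.0/16
  serviceSubnet: ""
scheduler:
  extraArgs: null
---
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- token: abcdef.0123456789abcdef
certificateKey: "0123456789012345678901234567890123456789012345678901234567890123"
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.0.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.4
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
  nodefs.inodesFree: 0%
imageGCHighThresholdPercent: 100
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
```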
time="13:05:49" level=debug msg="Running: [docker cp /tmp/kinder-upgrade-control-plane-1-722941820 kinder-upgrade-control-plane-1:/kind/kubeadm.conf]"

kinder-upgrade-lb:$ Updating load balancer configuration with 1 control plane backends
time="13:05:50" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-upgrade-control-plane-1]"
time="13:05:50" level=debug msg="Writing loadbalancer config on kinder-upgrade-lb..."
time="13:05:50" level=debug msg="Running: [docker cp /tmp/kinder-upgrade-lb-399781035 kinder-upgrade-lb:/usr/local/etc/haproxy/haproxy.cfg]"
... skipping 32 lines ...
I0319 13:05:53.522432     554 checks.go:382] validating the presence of executable ebtables
I0319 13:05:53.522898     554 checks.go:382] validating the presence of executable ethtool
I0319 13:05:53.523114     554 checks.go:382] validating the presence of executable socat
I0319 13:05:53.523284     554 checks.go:382] validating the presence of executable tc
I0319 13:05:53.523378     554 checks.go:382] validating the presence of executable touch
I0319 13:05:53.523693     554 checks.go:524] running all checks
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0319 13:05:53.585904     554 checks.go:412] checking whether the given node name is reachable using net.LookupHost
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
DOCKER_VERSION: 18.09.4
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
... skipping 407 lines ...
kinder-upgrade-control-plane-2:$ Preparing /kind/kubeadm.conf
time="13:07:58" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-upgrade-control-plane-2]"
time="13:07:58" level=debug msg="Running: [docker exec kinder-upgrade-control-plane-2 kubeadm version -o=short]"
time="13:07:58" level=debug msg="Preparing kubeadm config v1beta2 (kubeadm version 1.15.12-beta.0.7+7f18f85e0e8bcc)"
time="13:07:58" level=debug msg="Preparing dockerPatch for kubeadm config v1beta2 (kubeadm version 1.15.12-beta.0.7+7f18f85e0e8bcc)"
time="13:07:58" level=debug msg="Preparing automaticCopyCertsPatches for kubeadm config v1beta2 (kubeadm version 1.15.12-beta.0.7+7f18f85e0e8bcc)"
time="13:07:58" level=debug msg="generated config:\napiVersion: kubeadm.k8s.io/v1beta2\ncontrolPlane:\n  certificateKey: \"0123456789012345678901234567890123456789012345678901234567890123\"\n  localAPIEndpoint:\n    advertiseAddress: 172.17.0.2\n    bindPort: 6443\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.7:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /var/run/dockershim.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.2\n"
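The escaped JoinConfiguration in the debug line above, unescaped into YAML for readability (content is verbatim from the log); note the `controlPlane` stanza that distinguishes a control-plane join from a worker join:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
controlPlane:
  certificateKey: "0123456789012345678901234567890123456789012345678901234567890123"
  localAPIEndpoint:
    advertiseAddress: 172.17.0.2
    bindPort: 6443
discovery:
  bootstrapToken:
    apiServerEndpoint: 172.17.0.7:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.2
```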
time="13:07:58" level=debug msg="Running: [docker cp /tmp/kinder-upgrade-control-plane-2-083427420 kinder-upgrade-control-plane-2:/kind/kubeadm.conf]"
time="13:07:59" level=debug msg="Running: [docker exec kinder-upgrade-control-plane-2 kubeadm version -o=short]"

kinder-upgrade-control-plane-2:$ kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables
time="13:08:00" level=debug msg="Running: [docker exec kinder-upgrade-control-plane-2 kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]"
I0319 13:08:00.890370     686 join.go:364] [preflight] found NodeName empty; using OS hostname as NodeName
... skipping 17 lines ...
I0319 13:08:01.350167     686 checks.go:382] validating the presence of executable ebtables
I0319 13:08:01.350218     686 checks.go:382] validating the presence of executable ethtool
I0319 13:08:01.350315     686 checks.go:382] validating the presence of executable socat
I0319 13:08:01.350395     686 checks.go:382] validating the presence of executable tc
I0319 13:08:01.350451     686 checks.go:382] validating the presence of executable touch
I0319 13:08:01.350514     686 checks.go:524] running all checks
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0319 13:08:01.411322     686 checks.go:412] checking whether the given node name is reachable using net.LookupHost
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
DOCKER_VERSION: 18.09.4
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
... skipping 278 lines ...
kinder-upgrade-control-plane-3:$ Preparing /kind/kubeadm.conf
time="13:09:57" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-upgrade-control-plane-3]"
time="13:09:57" level=debug msg="Running: [docker exec kinder-upgrade-control-plane-3 kubeadm version -o=short]"
time="13:09:58" level=debug msg="Preparing kubeadm config v1beta2 (kubeadm version 1.15.12-beta.0.7+7f18f85e0e8bcc)"
time="13:09:58" level=debug msg="Preparing dockerPatch for kubeadm config v1beta2 (kubeadm version 1.15.12-beta.0.7+7f18f85e0e8bcc)"
time="13:09:58" level=debug msg="Preparing automaticCopyCertsPatches for kubeadm config v1beta2 (kubeadm version 1.15.12-beta.0.7+7f18f85e0e8bcc)"
time="13:09:58" level=debug msg="generated config:\napiVersion: kubeadm.k8s.io/v1beta2\ncontrolPlane:\n  certificateKey: \"0123456789012345678901234567890123456789012345678901234567890123\"\n  localAPIEndpoint:\n    advertiseAddress: 172.17.0.6\n    bindPort: 6443\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.7:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /var/run/dockershim.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.6\n"
time="13:09:58" level=debug msg="Running: [docker cp /tmp/kinder-upgrade-control-plane-3-377764846 kinder-upgrade-control-plane-3:/kind/kubeadm.conf]"
time="13:09:59" level=debug msg="Running: [docker exec kinder-upgrade-control-plane-3 kubeadm version -o=short]"

kinder-upgrade-control-plane-3:$ kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables
time="13:09:59" level=debug msg="Running: [docker exec kinder-upgrade-control-plane-3 kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]"
I0319 13:10:00.295181     825 join.go:364] [preflight] found NodeName empty; using OS hostname as NodeName
... skipping 17 lines ...
I0319 13:10:00.818083     825 checks.go:382] validating the presence of executable ebtables
I0319 13:10:00.818131     825 checks.go:382] validating the presence of executable ethtool
I0319 13:10:00.818168     825 checks.go:382] validating the presence of executable socat
I0319 13:10:00.818205     825 checks.go:382] validating the presence of executable tc
I0319 13:10:00.818254     825 checks.go:382] validating the presence of executable touch
I0319 13:10:00.818305     825 checks.go:524] running all checks
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0319 13:10:00.912281     825 checks.go:412] checking whether the given node name is reachable using net.LookupHost
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
DOCKER_VERSION: 18.09.4
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
... skipping 203 lines ...

kinder-upgrade-worker-1:$ Preparing /kind/kubeadm.conf
time="13:10:55" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-upgrade-worker-1]"
time="13:10:55" level=debug msg="Running: [docker exec kinder-upgrade-worker-1 kubeadm version -o=short]"
time="13:10:56" level=debug msg="Preparing kubeadm config v1beta2 (kubeadm version 1.15.12-beta.0.7+7f18f85e0e8bcc)"
time="13:10:56" level=debug msg="Preparing dockerPatch for kubeadm config v1beta2 (kubeadm version 1.15.12-beta.0.7+7f18f85e0e8bcc)"
time="13:10:56" level=debug msg="generated config:\napiVersion: kubeadm.k8s.io/v1beta2\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.7:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /var/run/dockershim.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.5\n"
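The worker's JoinConfiguration from the debug line above, unescaped into YAML for readability (content is verbatim from the log); unlike the control-plane joins earlier, it carries no `controlPlane` stanza:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: 172.17.0.7:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.5
```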
time="13:10:56" level=debug msg="Running: [docker cp /tmp/kinder-upgrade-worker-1-076439888 kinder-upgrade-worker-1:/kind/kubeadm.conf]"

kinder-upgrade-worker-1:$ kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables
time="13:10:57" level=debug msg="Running: [docker exec kinder-upgrade-worker-1 kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]"
I0319 13:10:57.711353     866 join.go:364] [preflight] found NodeName empty; using OS hostname as NodeName
I0319 13:10:57.711415     866 joinconfiguration.go:75] loading configuration from "/kind/kubeadm.conf"
... skipping 16 lines ...
I0319 13:10:58.279065     866 checks.go:382] validating the presence of executable ebtables
I0319 13:10:58.279101     866 checks.go:382] validating the presence of executable ethtool
I0319 13:10:58.279142     866 checks.go:382] validating the presence of executable socat
I0319 13:10:58.279173     866 checks.go:382] validating the presence of executable tc
I0319 13:10:58.279208     866 checks.go:382] validating the presence of executable touch
I0319 13:10:58.279248     866 checks.go:524] running all checks
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0319 13:10:58.344806     866 checks.go:412] checking whether the given node name is reachable using net.LookupHost
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
DOCKER_VERSION: 18.09.4
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
... skipping 90 lines ...

kinder-upgrade-worker-2:$ Preparing /kind/kubeadm.conf
time="13:11:24" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-upgrade-worker-2]"
time="13:11:25" level=debug msg="Running: [docker exec kinder-upgrade-worker-2 kubeadm version -o=short]"
time="13:11:25" level=debug msg="Preparing kubeadm config v1beta2 (kubeadm version 1.15.12-beta.0.7+7f18f85e0e8bcc)"
time="13:11:25" level=debug msg="Preparing dockerPatch for kubeadm config v1beta2 (kubeadm version 1.15.12-beta.0.7+7f18f85e0e8bcc)"
time="13:11:25" level=debug msg="generated config:\napiVersion: kubeadm.k8s.io/v1beta2\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.7:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /var/run/dockershim.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.3\n"
time="13:11:25" level=debug msg="Running: [docker cp /tmp/kinder-upgrade-worker-2-381334639 kinder-upgrade-worker-2:/kind/kubeadm.conf]"

kinder-upgrade-worker-2:$ kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables
time="13:11:26" level=debug msg="Running: [docker exec kinder-upgrade-worker-2 kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]"
I0319 13:11:27.139606     896 join.go:364] [preflight] found NodeName empty; using OS hostname as NodeName
I0319 13:11:27.139649     896 joinconfiguration.go:75] loading configuration from "/kind/kubeadm.conf"
... skipping 16 lines ...
I0319 13:11:27.696971     896 checks.go:382] validating the presence of executable ebtables
I0319 13:11:27.697017     896 checks.go:382] validating the presence of executable ethtool
I0319 13:11:27.697099     896 checks.go:382] validating the presence of executable socat
I0319 13:11:27.697142     896 checks.go:382] validating the presence of executable tc
I0319 13:11:27.697182     896 checks.go:382] validating the presence of executable touch
I0319 13:11:27.697259     896 checks.go:524] running all checks
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0319 13:11:27.771089     896 checks.go:412] checking whether the given node name is reachable using net.LookupHost
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
DOCKER_VERSION: 18.09.4
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
... skipping 513 lines ...
I0319 13:13:24.373584   13020 local.go:69] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests290734046/etcd.yaml"
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-03-19-13-13-22/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
I0319 13:13:25.417737   13020 round_trippers.go:443] GET https://172.17.0.7:6443/api/v1/namespaces/kube-system/pods/etcd-kinder-upgrade-control-plane-1 500 Internal Server Error in 71 milliseconds
I0319 13:13:58.842386   13020 round_trippers.go:443] GET https://172.17.0.7:6443/api/v1/namespaces/kube-system/pods/etcd-kinder-upgrade-control-plane-1 200 OK in 32921 milliseconds
Static pod: etcd-kinder-upgrade-control-plane-1 hash: 42a6e3d15d3c45d8a5f9adaf7637a9ce
I0319 13:13:58.924937   13020 round_trippers.go:443] GET https://172.17.0.7:6443/api/v1/namespaces/kube-system/pods/etcd-kinder-upgrade-control-plane-1 404 Not Found in 4 milliseconds
I0319 13:13:59.425062   13020 round_trippers.go:443] GET https://172.17.0.7:6443/api/v1/namespaces/kube-system/pods/etcd-kinder-upgrade-control-plane-1 404 Not Found in 4 milliseconds
I0319 13:13:59.925462   13020 round_trippers.go:443] GET https://172.17.0.7:6443/api/v1/namespaces/kube-system/pods/etcd-kinder-upgrade-control-plane-1 404 Not Found in 4 milliseconds
I0319 13:14:00.424721   13020 round_trippers.go:443] GET https://172.17.0.7:6443/api/v1/namespaces/kube-system/pods/etcd-kinder-upgrade-control-plane-1 404 Not Found in 3 milliseconds
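The GET sequence above (500, then 200, then repeated 404s at ~500ms intervals) is kubeadm polling the mirror pod until the kubelet restarts etcd from the new manifest, bounded by the 5m0s timeout noted earlier. A minimal sketch of that poll-until-timeout pattern (hypothetical helper, not kubeadm's actual code):

```python
import time


def poll_until(check, timeout_s=300.0, interval_s=0.5,
               clock=time.monotonic, sleep=time.sleep):
    """Call check() every interval_s seconds until it returns True.

    Returns True on success, False once timeout_s elapses. The clock and
    sleep parameters are injectable so the loop can be tested without
    real waiting.
    """
    deadline = clock() + timeout_s
    while clock() < deadline:
        if check():
            return True
        sleep(interval_s)
    return False
```

Under this sketch, kubeadm's wait would be roughly `poll_until(lambda: get_pod_status() == 200, timeout_s=300)`, which matches the log: transient 404s while the old pod is gone are tolerated, and the loop only fails the upgrade if the pod never comes back within the window.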
... skipping 1167 lines ...
  _output/local/go/src/k8s.io/kubernetes/test/e2e_kubeadm/kubeadm_config_test.go:82
[AfterEach] [k8s.io] [sig-cluster-lifecycle] [area-kubeadm] kubeadm-config ConfigMap
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 19 13:20:10.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
•
Ran 31 of 37 Specs in 0.494 seconds
SUCCESS! -- 31 Passed | 0 Failed | 0 Pending | 6 Skipped
PASS

Ginkgo ran 1 suite in 591.746917ms
Test Suite Passed
[--skip=\[copy-certs\] /home/prow/go/src/k8s.io/kubernetes/_output/bin/e2e_kubeadm.test -- --report-dir=/logs/artifacts --report-prefix=e2e-kubeadm --kubeconfig=/root/.kube/kind-config-kinder-upgrade]
 completed!
... skipping 1053 lines ...
Mar 19 13:24:42.212: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3990.svc.cluster.local from pod dns-3990/dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2: the server could not find the requested resource (get pods dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2)
Mar 19 13:24:42.217: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3990.svc.cluster.local from pod dns-3990/dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2: the server could not find the requested resource (get pods dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2)
Mar 19 13:24:42.234: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3990.svc.cluster.local from pod dns-3990/dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2: the server could not find the requested resource (get pods dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2)
Mar 19 13:24:42.241: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3990.svc.cluster.local from pod dns-3990/dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2: the server could not find the requested resource (get pods dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2)
Mar 19 13:24:42.246: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3990.svc.cluster.local from pod dns-3990/dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2: the server could not find the requested resource (get pods dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2)
Mar 19 13:24:42.261: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3990.svc.cluster.local from pod dns-3990/dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2: the server could not find the requested resource (get pods dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2)
Mar 19 13:24:42.272: INFO: Lookups using dns-3990/dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3990.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3990.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3990.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3990.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3990.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3990.svc.cluster.local jessie_udp@dns-test-service-2.dns-3990.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3990.svc.cluster.local]

Mar 19 13:24:47.292: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3990.svc.cluster.local from pod dns-3990/dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2: the server could not find the requested resource (get pods dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2)
Mar 19 13:24:47.318: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3990.svc.cluster.local from pod dns-3990/dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2: the server could not find the requested resource (get pods dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2)
Mar 19 13:24:47.324: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3990.svc.cluster.local from pod dns-3990/dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2: the server could not find the requested resource (get pods dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2)
Mar 19 13:24:47.330: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3990.svc.cluster.local from pod dns-3990/dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2: the server could not find the requested resource (get pods dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2)
Mar 19 13:24:47.355: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3990.svc.cluster.local from pod dns-3990/dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2: the server could not find the requested resource (get pods dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2)
Mar 19 13:24:47.361: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3990.svc.cluster.local from pod dns-3990/dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2: the server could not find the requested resource (get pods dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2)
Mar 19 13:24:47.366: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3990.svc.cluster.local from pod dns-3990/dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2: the server could not find the requested resource (get pods dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2)
Mar 19 13:24:47.371: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3990.svc.cluster.local from pod dns-3990/dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2: the server could not find the requested resource (get pods dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2)
Mar 19 13:24:47.390: INFO: Lookups using dns-3990/dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3990.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3990.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3990.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3990.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3990.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3990.svc.cluster.local jessie_udp@dns-test-service-2.dns-3990.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3990.svc.cluster.local]

Mar 19 13:24:52.290: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3990.svc.cluster.local from pod dns-3990/dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2: the server could not find the requested resource (get pods dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2)
Mar 19 13:24:52.309: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3990.svc.cluster.local from pod dns-3990/dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2: the server could not find the requested resource (get pods dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2)
Mar 19 13:24:52.317: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3990.svc.cluster.local from pod dns-3990/dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2: the server could not find the requested resource (get pods dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2)
Mar 19 13:24:52.323: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3990.svc.cluster.local from pod dns-3990/dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2: the server could not find the requested resource (get pods dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2)
Mar 19 13:24:52.343: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3990.svc.cluster.local from pod dns-3990/dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2: the server could not find the requested resource (get pods dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2)
Mar 19 13:24:52.347: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3990.svc.cluster.local from pod dns-3990/dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2: the server could not find the requested resource (get pods dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2)
Mar 19 13:24:52.352: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3990.svc.cluster.local from pod dns-3990/dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2: the server could not find the requested resource (get pods dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2)
Mar 19 13:24:52.357: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3990.svc.cluster.local from pod dns-3990/dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2: the server could not find the requested resource (get pods dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2)
Mar 19 13:24:52.376: INFO: Lookups using dns-3990/dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3990.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3990.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3990.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3990.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3990.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3990.svc.cluster.local jessie_udp@dns-test-service-2.dns-3990.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3990.svc.cluster.local]

Mar 19 13:24:57.279: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3990.svc.cluster.local from pod dns-3990/dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2: the server could not find the requested resource (get pods dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2)
Mar 19 13:24:57.285: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3990.svc.cluster.local from pod dns-3990/dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2: the server could not find the requested resource (get pods dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2)
Mar 19 13:24:57.290: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3990.svc.cluster.local from pod dns-3990/dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2: the server could not find the requested resource (get pods dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2)
Mar 19 13:24:57.296: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3990.svc.cluster.local from pod dns-3990/dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2: the server could not find the requested resource (get pods dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2)
Mar 19 13:24:57.342: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3990.svc.cluster.local from pod dns-3990/dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2: the server could not find the requested resource (get pods dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2)
Mar 19 13:24:57.347: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3990.svc.cluster.local from pod dns-3990/dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2: the server could not find the requested resource (get pods dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2)
Mar 19 13:24:57.355: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3990.svc.cluster.local from pod dns-3990/dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2: the server could not find the requested resource (get pods dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2)
Mar 19 13:24:57.361: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3990.svc.cluster.local from pod dns-3990/dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2: the server could not find the requested resource (get pods dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2)
Mar 19 13:24:57.373: INFO: Lookups using dns-3990/dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3990.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3990.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3990.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3990.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3990.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3990.svc.cluster.local jessie_udp@dns-test-service-2.dns-3990.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3990.svc.cluster.local]

Mar 19 13:25:02.370: INFO: DNS probes using dns-3990/dns-test-14da83b2-4d92-4c3a-90a6-d80519a830c2 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
... skipping 223 lines ...
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Mar 19 13:25:28.257: INFO: Successfully updated pod "pod-update-activedeadlineseconds-290a6d48-7b13-4d55-92f6-d8244ffbbbbf"
Mar 19 13:25:28.257: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-290a6d48-7b13-4d55-92f6-d8244ffbbbbf" in namespace "pods-7037" to be "terminated due to deadline exceeded"
Mar 19 13:25:28.263: INFO: Pod "pod-update-activedeadlineseconds-290a6d48-7b13-4d55-92f6-d8244ffbbbbf": Phase="Running", Reason="", readiness=true. Elapsed: 5.566039ms
Mar 19 13:25:30.270: INFO: Pod "pod-update-activedeadlineseconds-290a6d48-7b13-4d55-92f6-d8244ffbbbbf": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.012357789s
Mar 19 13:25:30.270: INFO: Pod "pod-update-activedeadlineseconds-290a6d48-7b13-4d55-92f6-d8244ffbbbbf" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 19 13:25:30.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7037" for this suite.
Mar 19 13:25:36.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 974 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-6201
STEP: Creating statefulset with conflicting port in namespace statefulset-6201
STEP: Waiting until pod test-pod will start running in namespace statefulset-6201
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-6201
Mar 19 13:26:56.697: INFO: Observed stateful pod in namespace: statefulset-6201, name: ss-0, uid: 3eee1ee0-d68b-418e-8ab1-f9cad7d253d9, status phase: Pending. Waiting for statefulset controller to delete.
Mar 19 13:26:57.048: INFO: Observed stateful pod in namespace: statefulset-6201, name: ss-0, uid: 3eee1ee0-d68b-418e-8ab1-f9cad7d253d9, status phase: Failed. Waiting for statefulset controller to delete.
Mar 19 13:26:57.061: INFO: Observed stateful pod in namespace: statefulset-6201, name: ss-0, uid: 3eee1ee0-d68b-418e-8ab1-f9cad7d253d9, status phase: Failed. Waiting for statefulset controller to delete.
Mar 19 13:26:57.070: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-6201
STEP: Removing pod with conflicting port in namespace statefulset-6201
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-6201 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
Mar 19 13:27:05.186: INFO: Deleting all statefulset in ns statefulset-6201
... skipping 614 lines ...
Mar 19 13:27:58.883: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720221272, loc:(*time.Location)(0x787a8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720221272, loc:(*time.Location)(0x787a8e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720221273, loc:(*time.Location)(0x787a8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720221272, loc:(*time.Location)(0x787a8e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 19 13:28:00.887: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720221272, loc:(*time.Location)(0x787a8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720221272, loc:(*time.Location)(0x787a8e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720221273, loc:(*time.Location)(0x787a8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720221272, loc:(*time.Location)(0x787a8e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 19 13:28:02.884: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720221272, loc:(*time.Location)(0x787a8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720221272, loc:(*time.Location)(0x787a8e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720221273, loc:(*time.Location)(0x787a8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720221272, loc:(*time.Location)(0x787a8e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 19 13:28:05.936: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 19 13:28:06.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8787" for this suite.
... skipping 6 lines ...
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103


• [SLOW TEST:27.506 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
S
------------------------------
[BeforeEach] version v1
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
... skipping 787 lines ...
Mar 19 13:29:04.794: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9381.svc.cluster.local from pod dns-9381/dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37: the server could not find the requested resource (get pods dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37)
Mar 19 13:29:04.806: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9381.svc.cluster.local from pod dns-9381/dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37: the server could not find the requested resource (get pods dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37)
Mar 19 13:29:04.854: INFO: Unable to read jessie_udp@dns-test-service.dns-9381.svc.cluster.local from pod dns-9381/dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37: the server could not find the requested resource (get pods dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37)
Mar 19 13:29:04.859: INFO: Unable to read jessie_tcp@dns-test-service.dns-9381.svc.cluster.local from pod dns-9381/dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37: the server could not find the requested resource (get pods dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37)
Mar 19 13:29:04.867: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9381.svc.cluster.local from pod dns-9381/dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37: the server could not find the requested resource (get pods dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37)
Mar 19 13:29:04.872: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9381.svc.cluster.local from pod dns-9381/dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37: the server could not find the requested resource (get pods dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37)
Mar 19 13:29:04.914: INFO: Lookups using dns-9381/dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37 failed for: [wheezy_udp@dns-test-service.dns-9381.svc.cluster.local wheezy_tcp@dns-test-service.dns-9381.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9381.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9381.svc.cluster.local jessie_udp@dns-test-service.dns-9381.svc.cluster.local jessie_tcp@dns-test-service.dns-9381.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9381.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9381.svc.cluster.local]

Mar 19 13:29:09.920: INFO: Unable to read wheezy_udp@dns-test-service.dns-9381.svc.cluster.local from pod dns-9381/dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37: the server could not find the requested resource (get pods dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37)
Mar 19 13:29:09.926: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9381.svc.cluster.local from pod dns-9381/dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37: the server could not find the requested resource (get pods dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37)
Mar 19 13:29:09.931: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9381.svc.cluster.local from pod dns-9381/dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37: the server could not find the requested resource (get pods dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37)
Mar 19 13:29:09.936: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9381.svc.cluster.local from pod dns-9381/dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37: the server could not find the requested resource (get pods dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37)
Mar 19 13:29:09.977: INFO: Unable to read jessie_udp@dns-test-service.dns-9381.svc.cluster.local from pod dns-9381/dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37: the server could not find the requested resource (get pods dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37)
Mar 19 13:29:09.982: INFO: Unable to read jessie_tcp@dns-test-service.dns-9381.svc.cluster.local from pod dns-9381/dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37: the server could not find the requested resource (get pods dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37)
Mar 19 13:29:09.987: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9381.svc.cluster.local from pod dns-9381/dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37: the server could not find the requested resource (get pods dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37)
Mar 19 13:29:09.999: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9381.svc.cluster.local from pod dns-9381/dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37: the server could not find the requested resource (get pods dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37)
Mar 19 13:29:10.035: INFO: Lookups using dns-9381/dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37 failed for: [wheezy_udp@dns-test-service.dns-9381.svc.cluster.local wheezy_tcp@dns-test-service.dns-9381.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9381.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9381.svc.cluster.local jessie_udp@dns-test-service.dns-9381.svc.cluster.local jessie_tcp@dns-test-service.dns-9381.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9381.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9381.svc.cluster.local]

Mar 19 13:29:14.939: INFO: Unable to read wheezy_udp@dns-test-service.dns-9381.svc.cluster.local from pod dns-9381/dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37: the server could not find the requested resource (get pods dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37)
Mar 19 13:29:14.953: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9381.svc.cluster.local from pod dns-9381/dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37: the server could not find the requested resource (get pods dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37)
Mar 19 13:29:14.959: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9381.svc.cluster.local from pod dns-9381/dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37: the server could not find the requested resource (get pods dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37)
Mar 19 13:29:14.970: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9381.svc.cluster.local from pod dns-9381/dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37: the server could not find the requested resource (get pods dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37)
Mar 19 13:29:15.019: INFO: Unable to read jessie_udp@dns-test-service.dns-9381.svc.cluster.local from pod dns-9381/dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37: the server could not find the requested resource (get pods dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37)
Mar 19 13:29:15.025: INFO: Unable to read jessie_tcp@dns-test-service.dns-9381.svc.cluster.local from pod dns-9381/dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37: the server could not find the requested resource (get pods dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37)
Mar 19 13:29:15.033: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9381.svc.cluster.local from pod dns-9381/dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37: the server could not find the requested resource (get pods dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37)
Mar 19 13:29:15.038: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9381.svc.cluster.local from pod dns-9381/dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37: the server could not find the requested resource (get pods dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37)
Mar 19 13:29:15.094: INFO: Lookups using dns-9381/dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37 failed for: [wheezy_udp@dns-test-service.dns-9381.svc.cluster.local wheezy_tcp@dns-test-service.dns-9381.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9381.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9381.svc.cluster.local jessie_udp@dns-test-service.dns-9381.svc.cluster.local jessie_tcp@dns-test-service.dns-9381.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9381.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9381.svc.cluster.local]

Mar 19 13:29:20.268: INFO: DNS probes using dns-9381/dns-test-b64253ac-8f33-438c-bfd1-fe0903cb6f37 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 665 lines ...
[BeforeEach] [sig-network] Services
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91
[It] should serve multiport endpoints from pods  [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating service multi-endpoint-test in namespace services-4631
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4631 to expose endpoints map[]
Mar 19 13:29:35.886: INFO: Get endpoints failed (13.681091ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Mar 19 13:29:36.903: INFO: successfully validated that service multi-endpoint-test in namespace services-4631 exposes endpoints map[] (1.030447856s elapsed)
STEP: Creating pod pod1 in namespace services-4631
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4631 to expose endpoints map[pod1:[100]]
Mar 19 13:29:41.073: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.144894874s elapsed, will retry)
Mar 19 13:29:43.128: INFO: successfully validated that service multi-endpoint-test in namespace services-4631 exposes endpoints map[pod1:[100]] (6.199853535s elapsed)
STEP: Creating pod pod2 in namespace services-4631
... skipping 913 lines ...
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 19 13:31:04.640: INFO: File wheezy_udp@dns-test-service-3.dns-6777.svc.cluster.local from pod  dns-6777/dns-test-7fc56a9b-673c-47ac-b574-df73bbf2e816 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 19 13:31:04.654: INFO: File jessie_udp@dns-test-service-3.dns-6777.svc.cluster.local from pod  dns-6777/dns-test-7fc56a9b-673c-47ac-b574-df73bbf2e816 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 19 13:31:04.654: INFO: Lookups using dns-6777/dns-test-7fc56a9b-673c-47ac-b574-df73bbf2e816 failed for: [wheezy_udp@dns-test-service-3.dns-6777.svc.cluster.local jessie_udp@dns-test-service-3.dns-6777.svc.cluster.local]

Mar 19 13:31:09.667: INFO: File wheezy_udp@dns-test-service-3.dns-6777.svc.cluster.local from pod  dns-6777/dns-test-7fc56a9b-673c-47ac-b574-df73bbf2e816 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 19 13:31:09.714: INFO: File jessie_udp@dns-test-service-3.dns-6777.svc.cluster.local from pod  dns-6777/dns-test-7fc56a9b-673c-47ac-b574-df73bbf2e816 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 19 13:31:09.714: INFO: Lookups using dns-6777/dns-test-7fc56a9b-673c-47ac-b574-df73bbf2e816 failed for: [wheezy_udp@dns-test-service-3.dns-6777.svc.cluster.local jessie_udp@dns-test-service-3.dns-6777.svc.cluster.local]

Mar 19 13:31:14.665: INFO: File wheezy_udp@dns-test-service-3.dns-6777.svc.cluster.local from pod  dns-6777/dns-test-7fc56a9b-673c-47ac-b574-df73bbf2e816 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 19 13:31:14.669: INFO: File jessie_udp@dns-test-service-3.dns-6777.svc.cluster.local from pod  dns-6777/dns-test-7fc56a9b-673c-47ac-b574-df73bbf2e816 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 19 13:31:14.669: INFO: Lookups using dns-6777/dns-test-7fc56a9b-673c-47ac-b574-df73bbf2e816 failed for: [wheezy_udp@dns-test-service-3.dns-6777.svc.cluster.local jessie_udp@dns-test-service-3.dns-6777.svc.cluster.local]

Mar 19 13:31:19.671: INFO: File wheezy_udp@dns-test-service-3.dns-6777.svc.cluster.local from pod  dns-6777/dns-test-7fc56a9b-673c-47ac-b574-df73bbf2e816 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 19 13:31:19.677: INFO: File jessie_udp@dns-test-service-3.dns-6777.svc.cluster.local from pod  dns-6777/dns-test-7fc56a9b-673c-47ac-b574-df73bbf2e816 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 19 13:31:19.677: INFO: Lookups using dns-6777/dns-test-7fc56a9b-673c-47ac-b574-df73bbf2e816 failed for: [wheezy_udp@dns-test-service-3.dns-6777.svc.cluster.local jessie_udp@dns-test-service-3.dns-6777.svc.cluster.local]

Mar 19 13:31:24.666: INFO: File jessie_udp@dns-test-service-3.dns-6777.svc.cluster.local from pod  dns-6777/dns-test-7fc56a9b-673c-47ac-b574-df73bbf2e816 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 19 13:31:24.666: INFO: Lookups using dns-6777/dns-test-7fc56a9b-673c-47ac-b574-df73bbf2e816 failed for: [jessie_udp@dns-test-service-3.dns-6777.svc.cluster.local]

Mar 19 13:31:29.669: INFO: DNS probes using dns-test-7fc56a9b-673c-47ac-b574-df73bbf2e816 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6777.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-6777.svc.cluster.local; sleep 1; done
... skipping 132 lines ...
[BeforeEach] [sig-apps] Job
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 19 13:31:35.551: INFO: >>> kubeConfig: /root/.kube/kind-config-kinder-upgrade
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 19 13:31:51.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 2 lines ...
Mar 19 13:31:58.153: INFO: namespace job-3867 deletion completed in 6.487098332s


• [SLOW TEST:22.602 seconds]
[sig-apps] Job
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
... skipping 979 lines ...
Mar 19 13:33:18.815: INFO: >>> kubeConfig: /root/.kube/kind-config-kinder-upgrade
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Mar 19 13:33:23.928: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
... skipping 396 lines ...
[BeforeEach] [sig-network] Services
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91
[It] should serve a basic endpoint from pods  [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating service endpoint-test2 in namespace services-2083
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2083 to expose endpoints map[]
Mar 19 13:33:31.680: INFO: Get endpoints failed (10.057248ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Mar 19 13:33:32.691: INFO: successfully validated that service endpoint-test2 in namespace services-2083 exposes endpoints map[] (1.020371943s elapsed)
STEP: Creating pod pod1 in namespace services-2083
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2083 to expose endpoints map[pod1:[80]]
Mar 19 13:33:36.775: INFO: successfully validated that service endpoint-test2 in namespace services-2083 exposes endpoints map[pod1:[80]] (4.068355921s elapsed)
STEP: Creating pod pod2 in namespace services-2083
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2083 to expose endpoints map[pod1:[80] pod2:[80]]
... skipping 1302 lines ...
[BeforeEach] [sig-node] ConfigMap
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 19 13:35:28.228: INFO: >>> kubeConfig: /root/.kube/kind-config-kinder-upgrade
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap that has name configmap-test-emptyKey-0cee57eb-ae9a-4ca5-b750-d9d7c4e39867
[AfterEach] [sig-node] ConfigMap
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 19 13:35:28.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9554" for this suite.
Mar 19 13:35:34.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 19 13:35:34.544: INFO: namespace configmap-9554 deletion completed in 6.22898873s


• [SLOW TEST:6.319 seconds]
[sig-node] ConfigMap
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:32
  should fail to create ConfigMap with empty key [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Secrets
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 19 13:35:31.153: INFO: >>> kubeConfig: /root/.kube/kind-config-kinder-upgrade
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating projection with secret that has name secret-emptykey-test-5a10e34c-3142-4a02-8061-6c1e7d042a44
[AfterEach] [sig-api-machinery] Secrets
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 19 13:35:31.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-951" for this suite.
Mar 19 13:35:37.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 19 13:35:37.503: INFO: namespace secrets-951 deletion completed in 6.240973097s


• [SLOW TEST:6.350 seconds]
[sig-api-machinery] Secrets
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should fail to create secret due to empty secret key [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
... skipping 216 lines ...
Mar 19 13:35:55.771: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 19 13:36:08.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6896" for this suite.
Mar 19 13:36:14.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 491 lines ...
STEP: Creating a kubernetes client
Mar 19 13:35:55.935: INFO: >>> kubeConfig: /root/.kube/kind-config-kinder-upgrade
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating the pod
Mar 19 13:35:56.015: INFO: PodSpec: initContainers in spec.initContainers
Mar 19 13:36:55.676: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-f21177eb-69ad-4387-9d06-1b7fb3353986", GenerateName:"", Namespace:"init-container-9346", SelfLink:"/api/v1/namespaces/init-container-9346/pods/pod-init-f21177eb-69ad-4387-9d06-1b7fb3353986", UID:"3a3b489d-a9d5-4679-b0a9-8efe160623df", ResourceVersion:"17152", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63720221756, loc:(*time.Location)(0x787a8e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"15261667"}, Annotations:map[string]string{"cni.projectcalico.org/podIP":"192.168.48.201/32"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-nbdg6", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0024055c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nbdg6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nbdg6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", 
ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nbdg6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0032d70c8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), 
NodeName:"kinder-upgrade-worker-2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002e3d440), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0032d7140)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0032d7160)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0032d7168), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0032d716c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720221756, loc:(*time.Location)(0x787a8e0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720221756, loc:(*time.Location)(0x787a8e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720221756, loc:(*time.Location)(0x787a8e0)}}, Reason:"ContainersNotReady", 
Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720221756, loc:(*time.Location)(0x787a8e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.3", PodIP:"192.168.48.201", PodIPs:[]v1.PodIP{v1.PodIP{IP:"192.168.48.201"}}, StartTime:(*v1.Time)(0xc002b48da0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000ee3ab0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://c0d3543997bff9773af17771a16369f992bff779dcba0ae3598f60f105debb5a", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002b48de0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002b48dc0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0032d71ef)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 19 13:36:55.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9346" for this suite.
Mar 19 13:37:25.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 19 13:37:26.532: INFO: namespace init-container-9346 deletion completed in 30.849507205s


• [SLOW TEST:90.598 seconds]
[k8s.io] InitContainer [NodeConformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SS
------------------------------
[BeforeEach] [sig-network] Networking
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
... skipping 117 lines ...
Mar 19 13:37:29.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/kind-config-kinder-upgrade explain e2e-test-crd-publish-openapi-1573-crds.spec'
Mar 19 13:37:29.852: INFO: stderr: ""
Mar 19 13:37:29.852: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1573-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Mar 19 13:37:29.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/kind-config-kinder-upgrade explain e2e-test-crd-publish-openapi-1573-crds.spec.bars'
Mar 19 13:37:30.487: INFO: stderr: ""
Mar 19 13:37:30.487: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1573-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Mar 19 13:37:30.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/kind-config-kinder-upgrade explain e2e-test-crd-publish-openapi-1573-crds.spec.bars2'
Mar 19 13:37:31.095: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 19 13:37:35.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-823" for this suite.
... skipping 54 lines ...
STEP: Creating a kubernetes client
Mar 19 13:37:26.540: INFO: >>> kubeConfig: /root/.kube/kind-config-kinder-upgrade
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating the pod
Mar 19 13:37:26.767: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 19 13:37:38.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 2 lines ...
Mar 19 13:37:44.281: INFO: namespace init-container-1119 deletion completed in 6.196753162s


• [SLOW TEST:17.742 seconds]
[k8s.io] InitContainer [NodeConformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
... skipping 975 lines ...
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
      should run with the expected status [NodeConformance] [Conformance]
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
Mar 19 13:39:28.857: INFO: Running AfterSuite actions on all nodes

{"component":"entrypoint","file":"prow/entrypoint/run.go:164","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 40m0s timeout","time":"2020-03-19T13:42:11Z"}
{"component":"entrypoint","file":"prow/entrypoint/run.go:245","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15s grace period","time":"2020-03-19T13:42:26Z"}