Result: FAILURE
Tests: 0 failed / 269 succeeded
Started: 2020-03-15 00:50
Elapsed: 42m37s
Revision: release-1.16
resultstore: https://source.cloud.google.com/results/invocations/7e5def59-7a16-4cee-adba-38bb2d7aca92/targets/test
uploader: crier

No Test Failures!


Passed tests: 269

Skipped tests: 4499

Error lines from build-log.txt

... skipping 286 lines ...
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /kind/systemd/kubelet.service.
Created symlink /etc/systemd/system/kubelet.service → /kind/systemd/kubelet.service.
time="00:54:04" level=debug msg="Running: [docker exec kind-build-2e7f4465-306e-48c2-ad99-f8b665330549 mkdir -p /etc/systemd/system/kubelet.service.d]"
time="00:54:04" level=info msg="Adding /etc/systemd/system/kubelet.service.d/10-kubeadm.conf to the image"
time="00:54:04" level=debug msg="Running: [docker exec kind-build-2e7f4465-306e-48c2-ad99-f8b665330549 cp /alter/bits/systemd/10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf]"
time="00:54:05" level=debug msg="Running: [docker exec kind-build-2e7f4465-306e-48c2-ad99-f8b665330549 chown -R root:root /etc/systemd/system/kubelet.service.d/10-kubeadm.conf]"
time="00:54:05" level=debug msg="Running: [docker exec kind-build-2e7f4465-306e-48c2-ad99-f8b665330549 /bin/sh -c echo \"KUBELET_EXTRA_ARGS=--fail-swap-on=false\" >> /etc/default/kubelet]"
time="00:54:06" level=debug msg="Running: [docker exec kind-build-2e7f4465-306e-48c2-ad99-f8b665330549 mkdir -p /kinder]"
time="00:54:06" level=debug msg="Running: [docker exec kind-build-2e7f4465-306e-48c2-ad99-f8b665330549 rsync -r /alter/bits/upgrade /kinder]"
time="00:54:14" level=debug msg="Running: [docker exec kind-build-2e7f4465-306e-48c2-ad99-f8b665330549 chown -R root:root /kinder/upgrade]"
time="00:54:15" level=debug msg="Running: [docker exec kind-build-2e7f4465-306e-48c2-ad99-f8b665330549 /bin/sh -c which docker || true]"
time="00:54:15" level=info msg="Detected docker as container runtime"
time="00:54:15" level=info msg="Pre loading images ..."
... skipping 175 lines ...
kinder-upgrade-control-plane-1:$ Preparing /kind/kubeadm.conf
time="00:55:46" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-upgrade-control-plane-1]"
time="00:55:46" level=debug msg="Running: [docker exec kinder-upgrade-control-plane-1 kubeadm version -o=short]"
time="00:55:47" level=debug msg="Preparing kubeadm config v1beta2 (kubeadm version 1.15.12-beta.0.1+1f335d4ddb0831)"
time="00:55:47" level=debug msg="Preparing dockerPatch for kubeadm config v1beta2 (kubeadm version 1.15.12-beta.0.1+1f335d4ddb0831)"
time="00:55:47" level=debug msg="Preparing automaticCopyCertsPatches for kubeadm config v1beta2 (kubeadm version 1.15.12-beta.0.1+1f335d4ddb0831)"
time="00:55:47" level=debug msg="generated config:\napiServer:\n  certSANs:\n  - localhost\n  - 172.17.0.5\napiVersion: kubeadm.k8s.io/v1beta2\nclusterName: kinder-upgrade\ncontrolPlaneEndpoint: 172.17.0.7:6443\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\nkind: ClusterConfiguration\nkubernetesVersion: v1.15.12-beta.0.1+1f335d4ddb0831\nnetworking:\n  podSubnet: 192.168.0.0/16\n  serviceSubnet: \"\"\nscheduler:\n  extraArgs: null\n---\napiVersion: kubeadm.k8s.io/v1beta2\nbootstrapTokens:\n- token: abcdef.0123456789abcdef\ncertificateKey: \"0123456789012345678901234567890123456789012345678901234567890123\"\nkind: InitConfiguration\nlocalAPIEndpoint:\n  advertiseAddress: 172.17.0.5\n  bindPort: 6443\nnodeRegistration:\n  criSocket: /var/run/dockershim.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.5\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nevictionHard:\n  imagefs.available: 0%\n  nodefs.available: 0%\n  nodefs.inodesFree: 0%\nimageGCHighThresholdPercent: 100\nkind: KubeletConfiguration\n---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
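
For readability, the escaped "generated config" in the debug line above decodes to the following YAML. It is reproduced verbatim from that line, with the \n sequences expanded and nothing else changed:

apiServer:
  certSANs:
  - localhost
  - 172.17.0.5
apiVersion: kubeadm.k8s.io/v1beta2
clusterName: kinder-upgrade
controlPlaneEndpoint: 172.17.0.7:6443
controllerManager:
  extraArgs:
    enable-hostpath-provisioner: "true"
kind: ClusterConfiguration
kubernetesVersion: v1.15.12-beta.0.1+1f335d4ddb0831
networking:
  podSubnet: 192.168.0.0/16
  serviceSubnet: ""
scheduler:
  extraArgs: null
---
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- token: abcdef.0123456789abcdef
certificateKey: "0123456789012345678901234567890123456789012345678901234567890123"
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.0.5
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.5
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
  nodefs.inodesFree: 0%
imageGCHighThresholdPercent: 100
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration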
time="00:55:47" level=debug msg="Running: [docker cp /tmp/kinder-upgrade-control-plane-1-676375531 kinder-upgrade-control-plane-1:/kind/kubeadm.conf]"

kinder-upgrade-lb:$ Updating load balancer configuration with 1 control plane backends
time="00:55:48" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-upgrade-control-plane-1]"
time="00:55:49" level=debug msg="Writing loadbalancer config on kinder-upgrade-lb..."
time="00:55:49" level=debug msg="Running: [docker cp /tmp/kinder-upgrade-lb-296408142 kinder-upgrade-lb:/usr/local/etc/haproxy/haproxy.cfg]"
... skipping 32 lines ...
I0315 00:55:51.089699     550 checks.go:382] validating the presence of executable ebtables
I0315 00:55:51.089770     550 checks.go:382] validating the presence of executable ethtool
I0315 00:55:51.089874     550 checks.go:382] validating the presence of executable socat
I0315 00:55:51.090230     550 checks.go:382] validating the presence of executable tc
I0315 00:55:51.090447     550 checks.go:382] validating the presence of executable touch
I0315 00:55:51.090604     550 checks.go:524] running all checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
DOCKER_VERSION: 18.09.4
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0315 00:55:51.147722     550 checks.go:412] checking whether the given node name is reachable using net.LookupHost
I0315 00:55:51.148510     550 checks.go:622] validating kubelet version
I0315 00:55:51.261738     550 checks.go:131] validating if the service is enabled and active
I0315 00:55:51.285205     550 checks.go:209] validating availability of port 10250
I0315 00:55:51.285330     550 checks.go:209] validating availability of port 2379
I0315 00:55:51.285375     550 checks.go:209] validating availability of port 2380
... skipping 413 lines ...
kinder-upgrade-control-plane-2:$ Preparing /kind/kubeadm.conf
time="00:57:58" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-upgrade-control-plane-2]"
time="00:57:58" level=debug msg="Running: [docker exec kinder-upgrade-control-plane-2 kubeadm version -o=short]"
time="00:57:59" level=debug msg="Preparing kubeadm config v1beta2 (kubeadm version 1.15.12-beta.0.1+1f335d4ddb0831)"
time="00:57:59" level=debug msg="Preparing dockerPatch for kubeadm config v1beta2 (kubeadm version 1.15.12-beta.0.1+1f335d4ddb0831)"
time="00:57:59" level=debug msg="Preparing automaticCopyCertsPatches for kubeadm config v1beta2 (kubeadm version 1.15.12-beta.0.1+1f335d4ddb0831)"
time="00:57:59" level=debug msg="generated config:\napiVersion: kubeadm.k8s.io/v1beta2\ncontrolPlane:\n  certificateKey: \"0123456789012345678901234567890123456789012345678901234567890123\"\n  localAPIEndpoint:\n    advertiseAddress: 172.17.0.6\n    bindPort: 6443\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.7:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /var/run/dockershim.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.6\n"
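
For readability, the escaped JoinConfiguration in the debug line above decodes to the following YAML (verbatim from that line; only the \n sequences are expanded). The config generated later for kinder-upgrade-control-plane-3 is identical except that advertiseAddress and node-ip are 172.17.0.2:

apiVersion: kubeadm.k8s.io/v1beta2
controlPlane:
  certificateKey: "0123456789012345678901234567890123456789012345678901234567890123"
  localAPIEndpoint:
    advertiseAddress: 172.17.0.6
    bindPort: 6443
discovery:
  bootstrapToken:
    apiServerEndpoint: 172.17.0.7:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.6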
time="00:57:59" level=debug msg="Running: [docker cp /tmp/kinder-upgrade-control-plane-2-207070308 kinder-upgrade-control-plane-2:/kind/kubeadm.conf]"
time="00:58:00" level=debug msg="Running: [docker exec kinder-upgrade-control-plane-2 kubeadm version -o=short]"

kinder-upgrade-control-plane-2:$ kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables
time="00:58:00" level=debug msg="Running: [docker exec kinder-upgrade-control-plane-2 kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]"
I0315 00:58:01.031371     685 join.go:364] [preflight] found NodeName empty; using OS hostname as NodeName
... skipping 17 lines ...
I0315 00:58:01.425442     685 checks.go:382] validating the presence of executable ebtables
I0315 00:58:01.425477     685 checks.go:382] validating the presence of executable ethtool
I0315 00:58:01.425510     685 checks.go:382] validating the presence of executable socat
I0315 00:58:01.425539     685 checks.go:382] validating the presence of executable tc
I0315 00:58:01.425595     685 checks.go:382] validating the presence of executable touch
I0315 00:58:01.426043     685 checks.go:524] running all checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
DOCKER_VERSION: 18.09.4
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0315 00:58:01.488852     685 checks.go:412] checking whether the given node name is reachable using net.LookupHost
I0315 00:58:01.489106     685 checks.go:622] validating kubelet version
I0315 00:58:01.601396     685 checks.go:131] validating if the service is enabled and active
I0315 00:58:01.623321     685 checks.go:209] validating availability of port 10250
I0315 00:58:01.623619     685 checks.go:439] validating if the connectivity type is via proxy or direct
I0315 00:58:01.623682     685 join.go:433] [preflight] Discovering cluster-info
... skipping 282 lines ...
kinder-upgrade-control-plane-3:$ Preparing /kind/kubeadm.conf
time="01:00:01" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-upgrade-control-plane-3]"
time="01:00:01" level=debug msg="Running: [docker exec kinder-upgrade-control-plane-3 kubeadm version -o=short]"
time="01:00:02" level=debug msg="Preparing kubeadm config v1beta2 (kubeadm version 1.15.12-beta.0.1+1f335d4ddb0831)"
time="01:00:02" level=debug msg="Preparing dockerPatch for kubeadm config v1beta2 (kubeadm version 1.15.12-beta.0.1+1f335d4ddb0831)"
time="01:00:02" level=debug msg="Preparing automaticCopyCertsPatches for kubeadm config v1beta2 (kubeadm version 1.15.12-beta.0.1+1f335d4ddb0831)"
time="01:00:02" level=debug msg="generated config:\napiVersion: kubeadm.k8s.io/v1beta2\ncontrolPlane:\n  certificateKey: \"0123456789012345678901234567890123456789012345678901234567890123\"\n  localAPIEndpoint:\n    advertiseAddress: 172.17.0.2\n    bindPort: 6443\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.7:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /var/run/dockershim.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.2\n"
time="01:00:02" level=debug msg="Running: [docker cp /tmp/kinder-upgrade-control-plane-3-726614326 kinder-upgrade-control-plane-3:/kind/kubeadm.conf]"
time="01:00:02" level=debug msg="Running: [docker exec kinder-upgrade-control-plane-3 kubeadm version -o=short]"

kinder-upgrade-control-plane-3:$ kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables
time="01:00:03" level=debug msg="Running: [docker exec kinder-upgrade-control-plane-3 kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]"
I0315 01:00:03.895693     829 join.go:364] [preflight] found NodeName empty; using OS hostname as NodeName
... skipping 17 lines ...
I0315 01:00:04.271619     829 checks.go:382] validating the presence of executable ebtables
I0315 01:00:04.271674     829 checks.go:382] validating the presence of executable ethtool
I0315 01:00:04.271715     829 checks.go:382] validating the presence of executable socat
I0315 01:00:04.271750     829 checks.go:382] validating the presence of executable tc
I0315 01:00:04.271884     829 checks.go:382] validating the presence of executable touch
I0315 01:00:04.272616     829 checks.go:524] running all checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
DOCKER_VERSION: 18.09.4
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0315 01:00:04.324885     829 checks.go:412] checking whether the given node name is reachable using net.LookupHost
I0315 01:00:04.325197     829 checks.go:622] validating kubelet version
I0315 01:00:04.436517     829 checks.go:131] validating if the service is enabled and active
I0315 01:00:04.457513     829 checks.go:209] validating availability of port 10250
I0315 01:00:04.457741     829 checks.go:439] validating if the connectivity type is via proxy or direct
I0315 01:00:04.457810     829 join.go:433] [preflight] Discovering cluster-info
... skipping 189 lines ...

kinder-upgrade-worker-1:$ Preparing /kind/kubeadm.conf
time="01:00:44" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-upgrade-worker-1]"
time="01:00:45" level=debug msg="Running: [docker exec kinder-upgrade-worker-1 kubeadm version -o=short]"
time="01:00:45" level=debug msg="Preparing kubeadm config v1beta2 (kubeadm version 1.15.12-beta.0.1+1f335d4ddb0831)"
time="01:00:45" level=debug msg="Preparing dockerPatch for kubeadm config v1beta2 (kubeadm version 1.15.12-beta.0.1+1f335d4ddb0831)"
time="01:00:45" level=debug msg="generated config:\napiVersion: kubeadm.k8s.io/v1beta2\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.7:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /var/run/dockershim.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.3\n"
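
For readability, the escaped worker JoinConfiguration in the debug line above decodes to the following YAML (verbatim; \n expanded). Unlike the control-plane joins, there is no controlPlane section and no automaticCopyCertsPatches step. The config generated later for kinder-upgrade-worker-2 is identical except that node-ip is 172.17.0.4:

apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: 172.17.0.7:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.3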
time="01:00:45" level=debug msg="Running: [docker cp /tmp/kinder-upgrade-worker-1-114066392 kinder-upgrade-worker-1:/kind/kubeadm.conf]"

kinder-upgrade-worker-1:$ kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables
time="01:00:46" level=debug msg="Running: [docker exec kinder-upgrade-worker-1 kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]"
I0315 01:00:47.069647     851 join.go:364] [preflight] found NodeName empty; using OS hostname as NodeName
I0315 01:00:47.071077     851 joinconfiguration.go:75] loading configuration from "/kind/kubeadm.conf"
... skipping 16 lines ...
I0315 01:00:47.614488     851 checks.go:382] validating the presence of executable ebtables
I0315 01:00:47.614567     851 checks.go:382] validating the presence of executable ethtool
I0315 01:00:47.614611     851 checks.go:382] validating the presence of executable socat
I0315 01:00:47.614651     851 checks.go:382] validating the presence of executable tc
I0315 01:00:47.614709     851 checks.go:382] validating the presence of executable touch
I0315 01:00:47.614766     851 checks.go:524] running all checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
DOCKER_VERSION: 18.09.4
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0315 01:00:47.674391     851 checks.go:412] checking whether the given node name is reachable using net.LookupHost
I0315 01:00:47.674914     851 checks.go:622] validating kubelet version
I0315 01:00:47.802508     851 checks.go:131] validating if the service is enabled and active
I0315 01:00:47.827108     851 checks.go:209] validating availability of port 10250
I0315 01:00:47.827332     851 checks.go:292] validating the existence of file /etc/kubernetes/pki/ca.crt
I0315 01:00:47.827352     851 checks.go:439] validating if the connectivity type is via proxy or direct
... skipping 82 lines ...

kinder-upgrade-worker-2:$ Preparing /kind/kubeadm.conf
time="01:01:13" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kinder-upgrade-worker-2]"
time="01:01:13" level=debug msg="Running: [docker exec kinder-upgrade-worker-2 kubeadm version -o=short]"
time="01:01:14" level=debug msg="Preparing kubeadm config v1beta2 (kubeadm version 1.15.12-beta.0.1+1f335d4ddb0831)"
time="01:01:14" level=debug msg="Preparing dockerPatch for kubeadm config v1beta2 (kubeadm version 1.15.12-beta.0.1+1f335d4ddb0831)"
time="01:01:14" level=debug msg="generated config:\napiVersion: kubeadm.k8s.io/v1beta2\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 172.17.0.7:6443\n    token: abcdef.0123456789abcdef\n    unsafeSkipCAVerification: true\nkind: JoinConfiguration\nnodeRegistration:\n  criSocket: /var/run/dockershim.sock\n  kubeletExtraArgs:\n    fail-swap-on: \"false\"\n    node-ip: 172.17.0.4\n"
time="01:01:14" level=debug msg="Running: [docker cp /tmp/kinder-upgrade-worker-2-798828887 kinder-upgrade-worker-2:/kind/kubeadm.conf]"

kinder-upgrade-worker-2:$ kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables
time="01:01:15" level=debug msg="Running: [docker exec kinder-upgrade-worker-2 kubeadm join --config=/kind/kubeadm.conf --v=6 --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]"
I0315 01:01:15.430231     886 join.go:364] [preflight] found NodeName empty; using OS hostname as NodeName
I0315 01:01:15.430270     886 joinconfiguration.go:75] loading configuration from "/kind/kubeadm.conf"
... skipping 16 lines ...
I0315 01:01:15.844967     886 checks.go:382] validating the presence of executable ebtables
I0315 01:01:15.845017     886 checks.go:382] validating the presence of executable ethtool
I0315 01:01:15.845221     886 checks.go:382] validating the presence of executable socat
I0315 01:01:15.845339     886 checks.go:382] validating the presence of executable tc
I0315 01:01:15.845388     886 checks.go:382] validating the presence of executable touch
I0315 01:01:15.845526     886 checks.go:524] running all checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
DOCKER_VERSION: 18.09.4
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0315 01:01:15.901993     886 checks.go:412] checking whether the given node name is reachable using net.LookupHost
I0315 01:01:15.902259     886 checks.go:622] validating kubelet version
I0315 01:01:16.037498     886 checks.go:131] validating if the service is enabled and active
I0315 01:01:16.067459     886 checks.go:209] validating availability of port 10250
I0315 01:01:16.067710     886 checks.go:292] validating the existence of file /etc/kubernetes/pki/ca.crt
I0315 01:01:16.067747     886 checks.go:439] validating if the connectivity type is via proxy or direct
... skipping 1679 lines ...
  _output/local/go/src/k8s.io/kubernetes/test/e2e_kubeadm/dns_addon_test.go:102
[AfterEach] [k8s.io] [sig-cluster-lifecycle] [area-kubeadm] DNS addon
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 15 01:08:33.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
•S
Ran 31 of 37 Specs in 0.388 seconds
SUCCESS! -- 31 Passed | 0 Failed | 0 Pending | 6 Skipped
PASS

Ginkgo ran 1 suite in 469.21104ms
Test Suite Passed
[--skip=\[copy-certs\] /home/prow/go/src/k8s.io/kubernetes/_output/bin/e2e_kubeadm.test -- --report-dir=/logs/artifacts --report-prefix=e2e-kubeadm --kubeconfig=/root/.kube/kind-config-kinder-upgrade]
 completed!
... skipping 1601 lines ...
STEP: Creating a kubernetes client
Mar 15 01:13:49.607: INFO: >>> kubeConfig: /root/.kube/kind-config-kinder-upgrade
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating the pod
Mar 15 01:13:49.663: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 15 01:13:58.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 2 lines ...
Mar 15 01:14:04.969: INFO: namespace init-container-4551 deletion completed in 6.191547442s


• [SLOW TEST:15.363 seconds]
[k8s.io] InitContainer [NodeConformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
... skipping 24 lines ...
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 15 01:13:31.274: INFO: File wheezy_udp@dns-test-service-3.dns-1841.svc.cluster.local from pod  dns-1841/dns-test-1552eefd-0c2d-46c7-915b-ad4b8e6baed1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 15 01:13:31.279: INFO: File jessie_udp@dns-test-service-3.dns-1841.svc.cluster.local from pod  dns-1841/dns-test-1552eefd-0c2d-46c7-915b-ad4b8e6baed1 contains '' instead of 'bar.example.com.'
Mar 15 01:13:31.279: INFO: Lookups using dns-1841/dns-test-1552eefd-0c2d-46c7-915b-ad4b8e6baed1 failed for: [wheezy_udp@dns-test-service-3.dns-1841.svc.cluster.local jessie_udp@dns-test-service-3.dns-1841.svc.cluster.local]

Mar 15 01:13:36.291: INFO: File wheezy_udp@dns-test-service-3.dns-1841.svc.cluster.local from pod  dns-1841/dns-test-1552eefd-0c2d-46c7-915b-ad4b8e6baed1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 15 01:13:36.297: INFO: File jessie_udp@dns-test-service-3.dns-1841.svc.cluster.local from pod  dns-1841/dns-test-1552eefd-0c2d-46c7-915b-ad4b8e6baed1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 15 01:13:36.297: INFO: Lookups using dns-1841/dns-test-1552eefd-0c2d-46c7-915b-ad4b8e6baed1 failed for: [wheezy_udp@dns-test-service-3.dns-1841.svc.cluster.local jessie_udp@dns-test-service-3.dns-1841.svc.cluster.local]

Mar 15 01:13:41.286: INFO: File wheezy_udp@dns-test-service-3.dns-1841.svc.cluster.local from pod  dns-1841/dns-test-1552eefd-0c2d-46c7-915b-ad4b8e6baed1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 15 01:13:41.291: INFO: File jessie_udp@dns-test-service-3.dns-1841.svc.cluster.local from pod  dns-1841/dns-test-1552eefd-0c2d-46c7-915b-ad4b8e6baed1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 15 01:13:41.291: INFO: Lookups using dns-1841/dns-test-1552eefd-0c2d-46c7-915b-ad4b8e6baed1 failed for: [wheezy_udp@dns-test-service-3.dns-1841.svc.cluster.local jessie_udp@dns-test-service-3.dns-1841.svc.cluster.local]

Mar 15 01:13:46.285: INFO: File wheezy_udp@dns-test-service-3.dns-1841.svc.cluster.local from pod  dns-1841/dns-test-1552eefd-0c2d-46c7-915b-ad4b8e6baed1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 15 01:13:46.305: INFO: File jessie_udp@dns-test-service-3.dns-1841.svc.cluster.local from pod  dns-1841/dns-test-1552eefd-0c2d-46c7-915b-ad4b8e6baed1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 15 01:13:46.305: INFO: Lookups using dns-1841/dns-test-1552eefd-0c2d-46c7-915b-ad4b8e6baed1 failed for: [wheezy_udp@dns-test-service-3.dns-1841.svc.cluster.local jessie_udp@dns-test-service-3.dns-1841.svc.cluster.local]

Mar 15 01:13:51.289: INFO: File wheezy_udp@dns-test-service-3.dns-1841.svc.cluster.local from pod  dns-1841/dns-test-1552eefd-0c2d-46c7-915b-ad4b8e6baed1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 15 01:13:51.294: INFO: File jessie_udp@dns-test-service-3.dns-1841.svc.cluster.local from pod  dns-1841/dns-test-1552eefd-0c2d-46c7-915b-ad4b8e6baed1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 15 01:13:51.294: INFO: Lookups using dns-1841/dns-test-1552eefd-0c2d-46c7-915b-ad4b8e6baed1 failed for: [wheezy_udp@dns-test-service-3.dns-1841.svc.cluster.local jessie_udp@dns-test-service-3.dns-1841.svc.cluster.local]

Mar 15 01:13:56.289: INFO: DNS probes using dns-test-1552eefd-0c2d-46c7-915b-ad4b8e6baed1 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1841.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-1841.svc.cluster.local; sleep 1; done
... skipping 656 lines ...
[BeforeEach] [sig-node] ConfigMap
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 15 01:15:00.539: INFO: >>> kubeConfig: /root/.kube/kind-config-kinder-upgrade
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap that has name configmap-test-emptyKey-b7e2590f-26e2-4717-803e-1246b54441bb
[AfterEach] [sig-node] ConfigMap
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 15 01:15:00.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5398" for this suite.
Mar 15 01:15:06.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 01:15:07.006: INFO: namespace configmap-5398 deletion completed in 6.385974742s


• [SLOW TEST:6.467 seconds]
[sig-node] ConfigMap
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:32
  should fail to create ConfigMap with empty key [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
... skipping 1075 lines ...
[BeforeEach] [sig-network] Services
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91
[It] should serve multiport endpoints from pods  [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating service multi-endpoint-test in namespace services-4002
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4002 to expose endpoints map[]
Mar 15 01:15:51.890: INFO: Get endpoints failed (23.389157ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Mar 15 01:15:52.897: INFO: successfully validated that service multi-endpoint-test in namespace services-4002 exposes endpoints map[] (1.029935055s elapsed)
STEP: Creating pod pod1 in namespace services-4002
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4002 to expose endpoints map[pod1:[100]]
Mar 15 01:15:56.990: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.079186883s elapsed, will retry)
Mar 15 01:15:59.009: INFO: successfully validated that service multi-endpoint-test in namespace services-4002 exposes endpoints map[pod1:[100]] (6.098515558s elapsed)
STEP: Creating pod pod2 in namespace services-4002
... skipping 452 lines ...
STEP: Wait for the deployment to be ready
Mar 15 01:17:06.653: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 15 01:17:08.667: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719831826, loc:(*time.Location)(0x787a8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719831826, loc:(*time.Location)(0x787a8e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719831826, loc:(*time.Location)(0x787a8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719831826, loc:(*time.Location)(0x787a8e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 15 01:17:11.722: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 15 01:17:11.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9594" for this suite.
... skipping 6 lines ...
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103


• [SLOW TEST:18.553 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
... skipping 162 lines ...
[BeforeEach] [sig-api-machinery] Secrets
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 15 01:17:38.394: INFO: >>> kubeConfig: /root/.kube/kind-config-kinder-upgrade
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating projection with secret that has name secret-emptykey-test-5f4ba11a-a7e6-43a4-922d-ac21ef07a9fb
[AfterEach] [sig-api-machinery] Secrets
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 15 01:17:38.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1669" for this suite.
Mar 15 01:17:44.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 01:17:44.572: INFO: namespace secrets-1669 deletion completed in 6.125994332s


• [SLOW TEST:6.179 seconds]
[sig-api-machinery] Secrets
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should fail to create secret due to empty secret key [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Probing container
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
... skipping 751 lines ...
Mar 15 01:18:49.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/kind-config-kinder-upgrade explain e2e-test-crd-publish-openapi-8921-crds.spec'
Mar 15 01:18:49.794: INFO: stderr: ""
Mar 15 01:18:49.794: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8921-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Mar 15 01:18:49.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/kind-config-kinder-upgrade explain e2e-test-crd-publish-openapi-8921-crds.spec.bars'
Mar 15 01:18:50.308: INFO: stderr: ""
Mar 15 01:18:50.308: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8921-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Mar 15 01:18:50.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/kind-config-kinder-upgrade explain e2e-test-crd-publish-openapi-8921-crds.spec.bars2'
Mar 15 01:18:50.821: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 15 01:18:56.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5042" for this suite.
... skipping 144 lines ...
STEP: Creating a kubernetes client
Mar 15 01:17:54.912: INFO: >>> kubeConfig: /root/.kube/kind-config-kinder-upgrade
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating the pod
Mar 15 01:17:54.963: INFO: PodSpec: initContainers in spec.initContainers
Mar 15 01:18:54.155: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-bf6a4a39-c453-46fd-bc7c-ce5d65d2e0ad", GenerateName:"", Namespace:"init-container-7374", SelfLink:"