Result: FAILURE
Tests: 59 failed / 148 succeeded
Started: 2020-02-03 14:54
Elapsed: 15h14m
Builder: gke-prow-default-pool-cf4891d4-xlcs
resultstore: https://source.cloud.google.com/results/invocations/4543151e-7d4c-434a-abfc-d7ad2c1bdbf9/targets/test
pod: e5d3563e-4694-11ea-af91-fec11da50718
infra-commit: 25ff2bb80
job-version: v1.15.10-beta.0.15+e91de4083dbd87
master_os_image: cos-73-11647-163-0
node_os_image: cos-73-11647-163-0
revision: v1.15.10-beta.0.15+e91de4083dbd87

Test Failures


Kubernetes e2e suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] 10m3s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sRuntime\sblackbox\stest\son\sterminated\scontainer\sshould\sreport\stermination\smessage\s\[LinuxOnly\]\sas\sempty\swhen\spod\ssucceeds\sand\sTerminationMessagePolicy\sFallbackToLogsOnError\sis\sset\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 01:28:35.378: Couldn't delete ns: "container-runtime-7323": namespace container-runtime-7323 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace container-runtime-7323 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
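Most of the failures in this run are this same teardown timeout: the test namespace is drained but the namespace object itself is never removed. A minimal debugging sketch for such a namespace, assuming kubectl access to the cluster under test (the namespace name below is just the one from this failure message; substitute the one reported in each failure):

NS=container-runtime-7323   # namespace name taken from the failure message above

# Current phase: a namespace stuck in teardown usually shows Terminating.
kubectl get namespace "$NS" -o jsonpath='{.status.phase}{"\n"}'

# Finalizers that have not been cleared block the final delete
# (namespaces carry them in spec.finalizers as well as metadata.finalizers).
kubectl get namespace "$NS" -o jsonpath='{.spec.finalizers} {.metadata.finalizers}{"\n"}'

# Conditions and recent events can point at the controller that is lagging.
kubectl describe namespace "$NS"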


Kubernetes e2e suite [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] 10m7s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sDocker\sContainers\sshould\sbe\sable\sto\soverride\sthe\simage\'s\sdefault\scommand\s\(docker\sentrypoint\)\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 19:12:36.624: Couldn't delete ns: "containers-8588": namespace containers-8588 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace containers-8588 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				


Kubernetes e2e suite [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] 14m7s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\sshould\s\*not\*\sbe\srestarted\swith\sa\sexec\s\"cat\s\/tmp\/health\"\sliveness\sprobe\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 01:06:25.548: Couldn't delete ns: "container-probe-1822": namespace container-probe-1822 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace container-probe-1822 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				


Kubernetes e2e suite [k8s.io] Sysctls [NodeFeature:Sysctls] should reject invalid sysctls 10m2s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSysctls\s\[NodeFeature\:Sysctls\]\sshould\sreject\sinvalid\ssysctls$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 22:05:32.206: Couldn't delete ns: "sysctl-6745": namespace sysctl-6745 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace sysctl-6745 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				


Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook Should mutate configmap 10m19s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAdmissionWebhook\sShould\smutate\sconfigmap$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 04:55:32.531: Couldn't delete ns: "webhook-1090": namespace webhook-1090 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace webhook-1090 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				


Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI works for CRD preserving unknown fields in an embedded object 10m19s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sCustomResourcePublishOpenAPI\sworks\sfor\sCRD\spreserving\sunknown\sfields\sin\san\sembedded\sobject$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 19:02:28.765: Couldn't delete ns: "crd-publish-openapi-1052": namespace crd-publish-openapi-1052 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace crd-publish-openapi-1052 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				


Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 10m15s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sResourceQuota\sshould\screate\sa\sResourceQuota\sand\scapture\sthe\slife\sof\sa\spod\.$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 18:19:38.015: Couldn't delete ns: "resourcequota-783": namespace resourcequota-783 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace resourcequota-783 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				


Kubernetes e2e suite [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 10m18s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sResourceQuota\sshould\sverify\sResourceQuota\swith\sbest\seffort\sscope\.$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 02:12:54.640: Couldn't delete ns: "resourcequota-2941": namespace resourcequota-2941 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace resourcequota-2941 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				


Kubernetes e2e suite [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] 10m2s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\s\[k8s\.io\]\sKubectl\scluster\-info\sshould\scheck\sif\sKubernetes\smaster\sservices\sis\sincluded\sin\scluster\-info\s\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 18:40:26.649: Couldn't delete ns: "kubectl-8677": namespace kubectl-8677 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace kubectl-8677 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				


Kubernetes e2e suite [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] 10m6s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\s\[k8s\.io\]\sKubectl\srun\sdeployment\sshould\screate\sa\sdeployment\sfrom\san\simage\s\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 20:53:23.925: Couldn't delete ns: "kubectl-1437": namespace kubectl-1437 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace kubectl-1437 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				


Kubernetes e2e suite [sig-cli] Kubectl client [k8s.io] Simple pod should support port-forward 10m51s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\s\[k8s\.io\]\sSimple\spod\sshould\ssupport\sport\-forward$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 20:43:17.717: Couldn't delete ns: "kubectl-6328": namespace kubectl-6328 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace kubectl-6328 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				


Kubernetes e2e suite [sig-cluster-lifecycle] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover 17m12s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cluster\-lifecycle\]\sRestart\s\[Disruptive\]\sshould\srestart\sall\snodes\sand\sensure\sall\snodes\sand\spods\srecover$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/restart.go:86
Feb  3 17:49:18.573: At least one pod wasn't running and ready after the restart.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/restart.go:115
				
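The restart failure above only says that at least one pod never came back running and ready. A quick way to list the offending pods (a sketch, assuming kubectl access to the cluster under test; any thresholds or defaults here are illustrative):

# Pods not in the Running phase anywhere in the cluster.
kubectl get pods --all-namespaces --field-selector=status.phase!=Running

# Pods whose Ready condition is not True (covers Running-but-unready pods too).
kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}' | awk '$3!="True"'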


Kubernetes e2e suite [sig-cluster-lifecycle] Upgrade [Feature:Upgrade] master upgrade should maintain a functioning cluster [Feature:MasterUpgrade] 22m10s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cluster\-lifecycle\]\sUpgrade\s\[Feature\:Upgrade\]\smaster\supgrade\sshould\smaintain\sa\sfunctioning\scluster\s\[Feature\:MasterUpgrade\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:91
Feb  3 15:14:09.715: Unexpected error:
    <*errors.errorString | 0xc00295eb00>: {
        s: "error running /workspace/kubernetes_skew/cluster/gce/upgrade.sh [-M v1.16.7-beta.0.19+4667bb628fa6b3]; got error exit status 1, stdout \"Fetching the previously installed CoreDNS version\\n\\n***WARNING***\\nUpgrading Kubernetes with this script might result in an upgrade to a new etcd version.\\nSome etcd version upgrades, such as 3.0.x to 3.1.x, DO NOT offer a downgrade path.\\nTo pin the etcd version to your current one (e.g. v3.0.17), set the following variables\\nbefore running this script:\\n\\n# example: pin to etcd v3.0.17\\nexport ETCD_IMAGE=3.0.17\\nexport ETCD_VERSION=3.0.17\\n\\nAlternatively, if you choose to allow an etcd upgrade that doesn't support downgrade,\\nyou might still be able to downgrade Kubernetes by pinning to the newer etcd version.\\nIn all cases, it is strongly recommended to have an etcd backup before upgrading.\\n\\n== Pre-Upgrade Node OS and Kubelet Versions ==\\nname: \\\"bootstrap-e2e-master\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.15.10-beta.0.15+e91de4083dbd87\\\"\\nname: \\\"bootstrap-e2e-minion-group-4kf8\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.15.10-beta.0.15+e91de4083dbd87\\\"\\nname: \\\"bootstrap-e2e-minion-group-gntk\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.15.10-beta.0.15+e91de4083dbd87\\\"\\nname: \\\"bootstrap-e2e-minion-group-xdzt\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.15.10-beta.0.15+e91de4083dbd87\\\"\\nFound subnet for region us-west1 in network bootstrap-e2e: bootstrap-e2e\\n== Upgrading master to 'https://storage.googleapis.com/kubernetes-release-dev/ci/v1.16.7-beta.0.19+4667bb628fa6b3/kubernetes-server-linux-amd64.tar.gz'. Do not interrupt, deleting master instance. ==\\n== Upgrading master environment variables. 
==\\n== Waiting for new master to respond to API requests ==\\n....................== Done ==\\nWaiting for CoreDNS to update\\nFetching the latest installed CoreDNS version\\n== Downloading the CoreDNS migration tool ==\\n== Upgrading the CoreDNS ConfigMap ==\\nconfigmap/coredns configured\\n== The CoreDNS Config has been updated ==\\n== Validating cluster post-upgrade ==\\nValidating gce cluster, MULTIZONE=\\nFound 4 node(s).\\nNAME                              STATUS                     ROLES    AGE   VERSION\\nbootstrap-e2e-master              Ready,SchedulingDisabled   <none>   11m   v1.16.7-beta.0.19+4667bb628fa6b3\\nbootstrap-e2e-minion-group-4kf8   Ready                      <none>   11m   v1.15.10-beta.0.15+e91de4083dbd87\\nbootstrap-e2e-minion-group-gntk   Ready                      <none>   11m   v1.15.10-beta.0.15+e91de4083dbd87\\nbootstrap-e2e-minion-group-xdzt   Ready                      <none>   11m   v1.15.10-beta.0.15+e91de4083dbd87\\nValidate output:\\nNAME                 AGE\\ncontroller-manager   <unknown>\\netcd-1               <unknown>\\nscheduler            <unknown>\\netcd-0               <unknown>\\n\\x1b[0;32mCluster validation succeeded\\x1b[0m\\n== Post-Upgrade Node OS and Kubelet Versions ==\\nname: \\\"bootstrap-e2e-master\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.16.7-beta.0.19+4667bb628fa6b3\\\"\\nname: \\\"bootstrap-e2e-minion-group-4kf8\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.15.10-beta.0.15+e91de4083dbd87\\\"\\nname: \\\"bootstrap-e2e-minion-group-gntk\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.15.10-beta.0.15+e91de4083dbd87\\\"\\nname: \\\"bootstrap-e2e-minion-group-xdzt\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.15.10-beta.0.15+e91de4083dbd87\\\"\\n\", stderr \"Project: gce-gci-upg-1-5-1-4-ctl-skew\\nNetwork Project: gce-gci-upg-1-5-1-4-ctl-skew\\nZone: us-west1-b\\nINSTANCE_GROUPS=bootstrap-e2e-minion-group\\nNODE_NAMES=bootstrap-e2e-minion-group-4kf8 bootstrap-e2e-minion-group-gntk bootstrap-e2e-minion-group-xdzt\\nTrying to find master named 'bootstrap-e2e-master'\\nLooking for address 'bootstrap-e2e-master-ip'\\nUsing master: bootstrap-e2e-master (external IP: 34.83.112.248; internal IP: (not set))\\nDeleted [https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-5-1-4-ctl-skew/zones/us-west1-b/instances/bootstrap-e2e-master].\\nWARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.\\nCreated [https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-5-1-4-ctl-skew/zones/us-west1-b/instances/bootstrap-e2e-master].\\nWARNING: Some requests generated warnings:\\n - Disk size: '20 GB' is larger than image size: '10 GB'. You might need to resize the root repartition manually if the operating system does not support automatic resizing. See https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd for details.\\n - The resource 'projects/cos-cloud/global/images/cos-73-11647-163-0' is deprecated. 
A suggested replacement is 'projects/cos-cloud/global/images/cos-73-11647-182-0'.\\n\\nNAME                  ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP    STATUS\\nbootstrap-e2e-master  us-west1-b  n1-standard-1               10.138.0.6   34.83.112.248  RUNNING\\nWarning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\\nProject: gce-gci-upg-1-5-1-4-ctl-skew\\nNetwork Project: gce-gci-upg-1-5-1-4-ctl-skew\\nZone: us-west1-b\\n/workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 452: download_dir: unbound variable\\n\"",
    }
    error running /workspace/kubernetes_skew/cluster/gce/upgrade.sh [-M v1.16.7-beta.0.19+4667bb628fa6b3]; got error exit status 1, stdout "Fetching the previously installed CoreDNS version\n\n***WARNING***\nUpgrading Kubernetes with this script might result in an upgrade to a new etcd version.\nSome etcd version upgrades, such as 3.0.x to 3.1.x, DO NOT offer a downgrade path.\nTo pin the etcd version to your current one (e.g. v3.0.17), set the following variables\nbefore running this script:\n\n# example: pin to etcd v3.0.17\nexport ETCD_IMAGE=3.0.17\nexport ETCD_VERSION=3.0.17\n\nAlternatively, if you choose to allow an etcd upgrade that doesn't support downgrade,\nyou might still be able to downgrade Kubernetes by pinning to the newer etcd version.\nIn all cases, it is strongly recommended to have an etcd backup before upgrading.\n\n== Pre-Upgrade Node OS and Kubelet Versions ==\nname: \"bootstrap-e2e-master\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.15.10-beta.0.15+e91de4083dbd87\"\nname: \"bootstrap-e2e-minion-group-4kf8\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.15.10-beta.0.15+e91de4083dbd87\"\nname: \"bootstrap-e2e-minion-group-gntk\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.15.10-beta.0.15+e91de4083dbd87\"\nname: \"bootstrap-e2e-minion-group-xdzt\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.15.10-beta.0.15+e91de4083dbd87\"\nFound subnet for region us-west1 in network bootstrap-e2e: bootstrap-e2e\n== Upgrading master to 'https://storage.googleapis.com/kubernetes-release-dev/ci/v1.16.7-beta.0.19+4667bb628fa6b3/kubernetes-server-linux-amd64.tar.gz'. Do not interrupt, deleting master instance. ==\n== Upgrading master environment variables. 
==\n== Waiting for new master to respond to API requests ==\n....................== Done ==\nWaiting for CoreDNS to update\nFetching the latest installed CoreDNS version\n== Downloading the CoreDNS migration tool ==\n== Upgrading the CoreDNS ConfigMap ==\nconfigmap/coredns configured\n== The CoreDNS Config has been updated ==\n== Validating cluster post-upgrade ==\nValidating gce cluster, MULTIZONE=\nFound 4 node(s).\nNAME                              STATUS                     ROLES    AGE   VERSION\nbootstrap-e2e-master              Ready,SchedulingDisabled   <none>   11m   v1.16.7-beta.0.19+4667bb628fa6b3\nbootstrap-e2e-minion-group-4kf8   Ready                      <none>   11m   v1.15.10-beta.0.15+e91de4083dbd87\nbootstrap-e2e-minion-group-gntk   Ready                      <none>   11m   v1.15.10-beta.0.15+e91de4083dbd87\nbootstrap-e2e-minion-group-xdzt   Ready                      <none>   11m   v1.15.10-beta.0.15+e91de4083dbd87\nValidate output:\nNAME                 AGE\ncontroller-manager   <unknown>\netcd-1               <unknown>\nscheduler            <unknown>\netcd-0               <unknown>\n\x1b[0;32mCluster validation succeeded\x1b[0m\n== Post-Upgrade Node OS and Kubelet Versions ==\nname: \"bootstrap-e2e-master\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.16.7-beta.0.19+4667bb628fa6b3\"\nname: \"bootstrap-e2e-minion-group-4kf8\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.15.10-beta.0.15+e91de4083dbd87\"\nname: \"bootstrap-e2e-minion-group-gntk\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.15.10-beta.0.15+e91de4083dbd87\"\nname: \"bootstrap-e2e-minion-group-xdzt\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.15.10-beta.0.15+e91de4083dbd87\"\n", stderr "Project: gce-gci-upg-1-5-1-4-ctl-skew\nNetwork Project: gce-gci-upg-1-5-1-4-ctl-skew\nZone: us-west1-b\nINSTANCE_GROUPS=bootstrap-e2e-minion-group\nNODE_NAMES=bootstrap-e2e-minion-group-4kf8 bootstrap-e2e-minion-group-gntk bootstrap-e2e-minion-group-xdzt\nTrying to find master named 'bootstrap-e2e-master'\nLooking for address 'bootstrap-e2e-master-ip'\nUsing master: bootstrap-e2e-master (external IP: 34.83.112.248; internal IP: (not set))\nDeleted [https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-5-1-4-ctl-skew/zones/us-west1-b/instances/bootstrap-e2e-master].\nWARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.\nCreated [https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-5-1-4-ctl-skew/zones/us-west1-b/instances/bootstrap-e2e-master].\nWARNING: Some requests generated warnings:\n - Disk size: '20 GB' is larger than image size: '10 GB'. You might need to resize the root repartition manually if the operating system does not support automatic resizing. See https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd for details.\n - The resource 'projects/cos-cloud/global/images/cos-73-11647-163-0' is deprecated. 
A suggested replacement is 'projects/cos-cloud/global/images/cos-73-11647-182-0'.\n\nNAME                  ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP    STATUS\nbootstrap-e2e-master  us-west1-b  n1-standard-1               10.138.0.6   34.83.112.248  RUNNING\nWarning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\nProject: gce-gci-upg-1-5-1-4-ctl-skew\nNetwork Project: gce-gci-upg-1-5-1-4-ctl-skew\nZone: us-west1-b\n/workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 452: download_dir: unbound variable\n"
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:106
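The stderr tail above shows the actual abort: upgrade.sh stops at line 452 with "download_dir: unbound variable". That message is what bash prints under set -u (nounset) when a script expands a variable that was never assigned. A minimal bash sketch of the failure mode and the usual guards; this is illustrative only, not the actual upgrade.sh code, and the default path used below is made up:

#!/usr/bin/env bash
set -u    # nounset: referencing an unset variable aborts the script

# This is the failure mode seen above -- download_dir was never assigned,
# so an expansion like the following would terminate the script with
# "download_dir: unbound variable":
#   echo "Staging release to ${download_dir}"

# The usual guards are a default expansion or an explicit default assignment:
echo "Staging release to ${download_dir:-/tmp/kube-upgrade}"
: "${download_dir:=/tmp/kube-upgrade}"
echo "Staging release to ${download_dir}"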