Result: FAILURE
Tests: 46 failed / 308 succeeded
Started: 2020-01-30 20:05
Elapsed: 15h20m
Revision:
Builder: gke-prow-default-pool-cf4891d4-s69b
pod: bcf026a8-439b-11ea-bb35-aec66a26cefa
resultstore: https://source.cloud.google.com/results/invocations/028f6058-cb39-491a-b87e-1c7f3edaae43/targets/test
infra-commit: a933a9650
job-version: v1.15.10-beta.0.1+43baf8affdbbf7
master_os_image: cos-73-11647-163-0
node_os_image: cos-73-11647-163-0
revision: v1.15.10-beta.0.1+43baf8affdbbf7

Test Failures


Kubernetes e2e suite [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][NodeFeature:VolumeSubpathEnvExpansion][Slow] 12m4s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sVariable\sExpansion\sshould\sfail\ssubstituting\svalues\sin\sa\svolume\ssubpath\swith\sabsolute\spath\s\[sig\-storage\]\[NodeFeature\:VolumeSubpathEnvExpansion\]\[Slow\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 08:57:53.499: Couldn't delete ns: "var-expansion-4547": namespace var-expansion-4547 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace var-expansion-4547 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_01.xml

Filter through log files | View test history on testgrid
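
Most of the failures in this run share the same teardown symptom: the test's namespace is reported "empty but is not yet removed" until the framework's deletion timeout expires. As a rough diagnostic sketch (not taken from this run's logs, and assuming the namespace still existed and kubectl access to the cluster), one might inspect a namespace stuck in Terminating like this:

kubectl get namespace var-expansion-4547 -o yaml      # check status.phase and spec.finalizers
kubectl get events -n var-expansion-4547 --sort-by=.lastTimestamp
# list any namespaced objects left behind that could block deletion
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n1 kubectl get -n var-expansion-4547 --ignore-not-found --show-kind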


Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook Should be able to deny custom resource creation and deletion 10m17s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAdmissionWebhook\sShould\sbe\sable\sto\sdeny\scustom\sresource\screation\sand\sdeletion$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 07:53:00.328: Couldn't delete ns: "webhook-4817": namespace webhook-4817 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace webhook-4817 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_01.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] 10m2s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sGarbage\scollector\sshould\sdelete\sRS\screated\sby\sdeployment\swhen\snot\sorphaning\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 09:18:43.515: Couldn't delete ns: "gc-1457": namespace gc-1457 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace gc-1457 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_01.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. [sig-storage] 10m14s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sResourceQuota\sshould\screate\sa\sResourceQuota\sand\scapture\sthe\slife\sof\sa\spersistent\svolume\sclaim\swith\sa\sstorage\sclass\.\s\[sig\-storage\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 04:56:16.226: Couldn't delete ns: "resourcequota-5204": namespace resourcequota-5204 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace resourcequota-5204 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_01.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] 10m1s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sSecrets\sshould\sfail\sto\screate\ssecret\sdue\sto\sempty\ssecret\skey\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 08:13:19.703: Couldn't delete ns: "secrets-4883": namespace secrets-4883 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace secrets-4883 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_01.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] ReplicationController light Should scale from 2 pods to 1 pod 26m1s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\s\[HPA\]\sHorizontal\spod\sautoscaling\s\(scale\sresource\:\sCPU\)\s\[sig\-autoscaling\]\sReplicationController\slight\sShould\sscale\sfrom\s2\spods\sto\s1\spod$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:82
timeout waiting 15m0s for 1 replicas
Unexpected error:
    <*errors.errorString | 0xc0002798c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:124
				
Click to see stdout/stderr from junit_01.xml

Filter through log files | View test history on testgrid
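
This HPA failure and the next one both time out after the 15-minute wait for the target replica count rather than failing on an explicit error. A hedged sketch of how one could inspect the autoscaler and the metrics pipeline it depends on (the HPA and namespace names here are placeholders, not taken from this log):

kubectl get hpa --all-namespaces
kubectl describe hpa <hpa-name> -n <namespace>   # compare current vs. target CPU and read the events
kubectl top pods -n <namespace>                  # only works if the resource-metrics API (metrics-server) is serving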


Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 25m51s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\s\[HPA\]\sHorizontal\spod\sautoscaling\s\(scale\sresource\:\sCPU\)\s\[sig\-autoscaling\]\s\[Serial\]\s\[Slow\]\sDeployment\sShould\sscale\sfrom\s5\spods\sto\s3\spods\sand\sfrom\s3\sto\s1$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:43
timeout waiting 15m0s for 3 replicas
Unexpected error:
    <*errors.errorString | 0xc0002798c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:124
				
Click to see stdout/stderr from junit_01.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-cli] Kubectl Port forwarding [k8s.io] With a server listening on 0.0.0.0 [k8s.io] that expects a client request should support a client that connects, sends NO DATA, and disconnects 10m43s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sPort\sforwarding\s\[k8s\.io\]\sWith\sa\sserver\slistening\son\s0\.0\.0\.0\s\[k8s\.io\]\sthat\sexpects\sa\sclient\srequest\sshould\ssupport\sa\sclient\sthat\sconnects\,\ssends\sNO\sDATA\,\sand\sdisconnects$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 09:40:31.712: Couldn't delete ns: "port-forwarding-4708": namespace port-forwarding-4708 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace port-forwarding-4708 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_01.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-cli] Kubectl client [k8s.io] Simple pod should handle in-cluster config 11m51s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\s\[k8s\.io\]\sSimple\spod\sshould\shandle\sin\-cluster\sconfig$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:621
Expected
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{../../../../kubernetes_skew/cluster/kubectl.sh [../../../../kubernetes_skew/cluster/kubectl.sh --server=https://34.82.17.239 --kubeconfig=/workspace/.kube/config exec --namespace=kubectl-8714 nginx -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1] []  <nil> I0131 08:34:24.407471     115 merged_client_builder.go:164] Using in-cluster namespace\nI0131 08:34:39.411046     115 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 15003 milliseconds\nI0131 08:34:39.411784     115 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: read udp 10.64.0.170:43911->10.0.0.10:53: read: connection refused\nI0131 08:34:54.416235     115 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 15003 milliseconds\nI0131 08:34:54.416423     115 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: read udp 10.64.0.170:48830->10.0.0.10:53: read: connection refused\nI0131 08:34:54.416603     115 shortcut.go:89] Error loading discovery information: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: read udp 10.64.0.170:48830->10.0.0.10:53: read: connection refused\nI0131 08:35:14.420910     115 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 20003 milliseconds\nI0131 08:35:14.420978     115 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: read udp 10.64.0.170:41210->10.0.0.10:53: read: connection refused\nI0131 08:35:29.424364     115 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 15003 milliseconds\nI0131 08:35:29.424445     115 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: read udp 10.64.0.170:43324->10.0.0.10:53: read: connection refused\nI0131 08:35:44.427248     115 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 15002 milliseconds\nI0131 08:35:44.427314     115 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: read udp 10.64.0.170:42335->10.0.0.10:53: read: connection refused\nI0131 08:35:44.427354     115 helpers.go:217] Connection error: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: read udp 10.64.0.170:42335->10.0.0.10:53: read: connection refused\nF0131 08:35:44.427386     115 helpers.go:114] The connection to the server invalid was refused - did you specify the right host or port?\n + /tmp/kubectl get pods '--server=invalid' '--v=6'\ncommand terminated with exit code 255\n [] <nil> 0xc0066db7a0 exit status 255 <nil> <nil> true [0xc002f1a118 0xc002f1a130 0xc002f1a148] [0xc002f1a118 0xc002f1a130 0xc002f1a148] [0xc002f1a128 0xc002f1a140] [0xba6c10 0xba6c10] 0xc005a99560 <nil>}:\nCommand stdout:\nI0131 08:34:24.407471     115 merged_client_builder.go:164] Using in-cluster namespace\nI0131 08:34:39.411046     115 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 15003 milliseconds\nI0131 08:34:39.411784     115 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: read udp 10.64.0.170:43911->10.0.0.10:53: read: connection refused\nI0131 08:34:54.416235     115 round_trippers.go:443] GET 
http://invalid/api?timeout=32s  in 15003 milliseconds\nI0131 08:34:54.416423     115 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: read udp 10.64.0.170:48830->10.0.0.10:53: read: connection refused\nI0131 08:34:54.416603     115 shortcut.go:89] Error loading discovery information: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: read udp 10.64.0.170:48830->10.0.0.10:53: read: connection refused\nI0131 08:35:14.420910     115 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 20003 milliseconds\nI0131 08:35:14.420978     115 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: read udp 10.64.0.170:41210->10.0.0.10:53: read: connection refused\nI0131 08:35:29.424364     115 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 15003 milliseconds\nI0131 08:35:29.424445     115 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: read udp 10.64.0.170:43324->10.0.0.10:53: read: connection refused\nI0131 08:35:44.427248     115 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 15002 milliseconds\nI0131 08:35:44.427314     115 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: read udp 10.64.0.170:42335->10.0.0.10:53: read: connection refused\nI0131 08:35:44.427354     115 helpers.go:217] Connection error: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: read udp 10.64.0.170:42335->10.0.0.10:53: read: connection refused\nF0131 08:35:44.427386     115 helpers.go:114] The connection to the server invalid was refused - did you specify the right host or port?\n\nstderr:\n+ /tmp/kubectl get pods '--server=invalid' '--v=6'\ncommand terminated with exit code 255\n\nerror:\nexit status 255",
        },
        Code: 255,
    }
to contain substring
    <string>: Unable to connect to the server
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:727
				
Click to see stdout/stderr from junit_01.xml

Filter through log files | View test history on testgrid
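
The in-cluster config failure is an assertion mismatch rather than a connectivity problem in the test pod itself: the test expects the skewed kubectl's output to contain "Unable to connect to the server", but the binary printed "The connection to the server invalid was refused" instead. A minimal sketch of the check being made, paraphrasing the command shown in the log above (the namespace is long gone, so this is illustrative only):

kubectl exec -n kubectl-8714 nginx -- /bin/sh -c '/tmp/kubectl get pods --server=invalid --v=6 2>&1' \
  | grep -F 'Unable to connect to the server'   # no match against the 1.16 error wording, which is what the test reports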


Kubernetes e2e suite [sig-cli] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy 10m17s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\s\[k8s\.io\]\sSimple\spod\sshould\ssupport\sexec\sthrough\san\sHTTP\sproxy$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 05:29:50.665: Couldn't delete ns: "kubectl-8624": namespace kubectl-8624 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace kubectl-8624 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_01.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-cli] Kubectl client [k8s.io] Simple pod should support exec using resource/name 10m14s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\s\[k8s\.io\]\sSimple\spod\sshould\ssupport\sexec\susing\sresource\/name$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 04:09:38.071: Couldn't delete ns: "kubectl-613": namespace kubectl-613 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace kubectl-613 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_01.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-cluster-lifecycle] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover 17m41s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cluster\-lifecycle\]\sRestart\s\[Disruptive\]\sshould\srestart\sall\snodes\sand\sensure\sall\snodes\sand\spods\srecover$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/restart.go:86
Jan 31 02:44:46.009: At least one pod wasn't running and ready after the restart.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/restart.go:115
				
Click to see stdout/stderr from junit_01.xml

Find "wasn't" mentions in log files | View test history on testgrid
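
The restart test reports only that at least one pod wasn't running and ready after the node restart. A hedged sketch of the usual follow-up (generic commands, not from this run); note that a pod can be Running without being Ready, so the READY column matters as much as the phase:

kubectl get nodes -o wide
kubectl get pods --all-namespaces --field-selector=status.phase!=Running
kubectl get pods --all-namespaces -o wide | grep -vE 'Running|Completed'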


Kubernetes e2e suite [sig-cluster-lifecycle] Upgrade [Feature:Upgrade] master upgrade should maintain a functioning cluster [Feature:MasterUpgrade] 21m23s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cluster\-lifecycle\]\sUpgrade\s\[Feature\:Upgrade\]\smaster\supgrade\sshould\smaintain\sa\sfunctioning\scluster\s\[Feature\:MasterUpgrade\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:91
Jan 30 20:28:42.403: Unexpected error:
    <*errors.errorString | 0xc0032c31b0>: {
        s: "error running /workspace/kubernetes_skew/cluster/gce/upgrade.sh [-M v1.16.7-beta.0.1+8fc866b3067974]; got error exit status 1, stdout \"Fetching the previously installed CoreDNS version\\n\\n***WARNING***\\nUpgrading Kubernetes with this script might result in an upgrade to a new etcd version.\\nSome etcd version upgrades, such as 3.0.x to 3.1.x, DO NOT offer a downgrade path.\\nTo pin the etcd version to your current one (e.g. v3.0.17), set the following variables\\nbefore running this script:\\n\\n# example: pin to etcd v3.0.17\\nexport ETCD_IMAGE=3.0.17\\nexport ETCD_VERSION=3.0.17\\n\\nAlternatively, if you choose to allow an etcd upgrade that doesn't support downgrade,\\nyou might still be able to downgrade Kubernetes by pinning to the newer etcd version.\\nIn all cases, it is strongly recommended to have an etcd backup before upgrading.\\n\\n== Pre-Upgrade Node OS and Kubelet Versions ==\\nname: \\\"bootstrap-e2e-master\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.15.10-beta.0.1+43baf8affdbbf7\\\"\\nname: \\\"bootstrap-e2e-minion-group-0xp3\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.15.10-beta.0.1+43baf8affdbbf7\\\"\\nname: \\\"bootstrap-e2e-minion-group-9scp\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.15.10-beta.0.1+43baf8affdbbf7\\\"\\nname: \\\"bootstrap-e2e-minion-group-g2xc\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.15.10-beta.0.1+43baf8affdbbf7\\\"\\nFound subnet for region us-west1 in network bootstrap-e2e: bootstrap-e2e\\n== Upgrading master to 'https://storage.googleapis.com/kubernetes-release-dev/ci/v1.16.7-beta.0.1+8fc866b3067974/kubernetes-server-linux-amd64.tar.gz'. Do not interrupt, deleting master instance. ==\\n== Upgrading master environment variables. 
==\\n== Waiting for new master to respond to API requests ==\\n........................== Done ==\\nWaiting for CoreDNS to update\\nFetching the latest installed CoreDNS version\\n== Downloading the CoreDNS migration tool ==\\n== Upgrading the CoreDNS ConfigMap ==\\nconfigmap/coredns configured\\n== The CoreDNS Config has been updated ==\\n== Validating cluster post-upgrade ==\\nValidating gce cluster, MULTIZONE=\\nFound 4 node(s).\\nNAME                              STATUS                     ROLES    AGE   VERSION\\nbootstrap-e2e-master              Ready,SchedulingDisabled   <none>   14m   v1.16.7-beta.0.1+8fc866b3067974\\nbootstrap-e2e-minion-group-0xp3   Ready                      <none>   14m   v1.15.10-beta.0.1+43baf8affdbbf7\\nbootstrap-e2e-minion-group-9scp   Ready                      <none>   14m   v1.15.10-beta.0.1+43baf8affdbbf7\\nbootstrap-e2e-minion-group-g2xc   Ready                      <none>   14m   v1.15.10-beta.0.1+43baf8affdbbf7\\nValidate output:\\nNAME                 AGE\\netcd-1               <unknown>\\nscheduler            <unknown>\\ncontroller-manager   <unknown>\\netcd-0               <unknown>\\n\\x1b[0;32mCluster validation succeeded\\x1b[0m\\n== Post-Upgrade Node OS and Kubelet Versions ==\\nname: \\\"bootstrap-e2e-master\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.16.7-beta.0.1+8fc866b3067974\\\"\\nname: \\\"bootstrap-e2e-minion-group-0xp3\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.15.10-beta.0.1+43baf8affdbbf7\\\"\\nname: \\\"bootstrap-e2e-minion-group-9scp\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.15.10-beta.0.1+43baf8affdbbf7\\\"\\nname: \\\"bootstrap-e2e-minion-group-g2xc\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.15.10-beta.0.1+43baf8affdbbf7\\\"\\n\", stderr \"Project: k8s-gce-gci-1-5-1-6-ctl-skew\\nNetwork Project: k8s-gce-gci-1-5-1-6-ctl-skew\\nZone: us-west1-b\\nINSTANCE_GROUPS=bootstrap-e2e-minion-group\\nNODE_NAMES=bootstrap-e2e-minion-group-0xp3 bootstrap-e2e-minion-group-9scp bootstrap-e2e-minion-group-g2xc\\nTrying to find master named 'bootstrap-e2e-master'\\nLooking for address 'bootstrap-e2e-master-ip'\\nUsing master: bootstrap-e2e-master (external IP: 34.82.17.239; internal IP: (not set))\\nDeleted [https://www.googleapis.com/compute/v1/projects/k8s-gce-gci-1-5-1-6-ctl-skew/zones/us-west1-b/instances/bootstrap-e2e-master].\\nWARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.\\nCreated [https://www.googleapis.com/compute/v1/projects/k8s-gce-gci-1-5-1-6-ctl-skew/zones/us-west1-b/instances/bootstrap-e2e-master].\\nWARNING: Some requests generated warnings:\\n - Disk size: '20 GB' is larger than image size: '10 GB'. You might need to resize the root repartition manually if the operating system does not support automatic resizing. See https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd for details.\\n - The resource 'projects/cos-cloud/global/images/cos-73-11647-163-0' is deprecated. 
A suggested replacement is 'projects/cos-cloud/global/images/cos-73-11647-182-0'.\\n\\nNAME                  ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP   STATUS\\nbootstrap-e2e-master  us-west1-b  n1-standard-1               10.138.0.6   34.82.17.239  RUNNING\\nWarning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\\nProject: k8s-gce-gci-1-5-1-6-ctl-skew\\nNetwork Project: k8s-gce-gci-1-5-1-6-ctl-skew\\nZone: us-west1-b\\n/workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 452: download_dir: unbound variable\\n\"",
    }
    error running /workspace/kubernetes_skew/cluster/gce/upgrade.sh [-M v1.16.7-beta.0.1+8fc866b3067974]; got error exit status 1, stdout "Fetching the previously installed CoreDNS version\n\n***WARNING***\nUpgrading Kubernetes with this script might result in an upgrade to a new etcd version.\nSome etcd version upgrades, such as 3.0.x to 3.1.x, DO NOT offer a downgrade path.\nTo pin the etcd version to your current one (e.g. v3.0.17), set the following variables\nbefore running this script:\n\n# example: pin to etcd v3.0.17\nexport ETCD_IMAGE=3.0.17\nexport ETCD_VERSION=3.0.17\n\nAlternatively, if you choose to allow an etcd upgrade that doesn't support downgrade,\nyou might still be able to downgrade Kubernetes by pinning to the newer etcd version.\nIn all cases, it is strongly recommended to have an etcd backup before upgrading.\n\n== Pre-Upgrade Node OS and Kubelet Versions ==\nname: \"bootstrap-e2e-master\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.15.10-beta.0.1+43baf8affdbbf7\"\nname: \"bootstrap-e2e-minion-group-0xp3\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.15.10-beta.0.1+43baf8affdbbf7\"\nname: \"bootstrap-e2e-minion-group-9scp\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.15.10-beta.0.1+43baf8affdbbf7\"\nname: \"bootstrap-e2e-minion-group-g2xc\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.15.10-beta.0.1+43baf8affdbbf7\"\nFound subnet for region us-west1 in network bootstrap-e2e: bootstrap-e2e\n== Upgrading master to 'https://storage.googleapis.com/kubernetes-release-dev/ci/v1.16.7-beta.0.1+8fc866b3067974/kubernetes-server-linux-amd64.tar.gz'. Do not interrupt, deleting master instance. ==\n== Upgrading master environment variables. 
==\n== Waiting for new master to respond to API requests ==\n........................== Done ==\nWaiting for CoreDNS to update\nFetching the latest installed CoreDNS version\n== Downloading the CoreDNS migration tool ==\n== Upgrading the CoreDNS ConfigMap ==\nconfigmap/coredns configured\n== The CoreDNS Config has been updated ==\n== Validating cluster post-upgrade ==\nValidating gce cluster, MULTIZONE=\nFound 4 node(s).\nNAME                              STATUS                     ROLES    AGE   VERSION\nbootstrap-e2e-master              Ready,SchedulingDisabled   <none>   14m   v1.16.7-beta.0.1+8fc866b3067974\nbootstrap-e2e-minion-group-0xp3   Ready                      <none>   14m   v1.15.10-beta.0.1+43baf8affdbbf7\nbootstrap-e2e-minion-group-9scp   Ready                      <none>   14m   v1.15.10-beta.0.1+43baf8affdbbf7\nbootstrap-e2e-minion-group-g2xc   Ready                      <none>   14m   v1.15.10-beta.0.1+43baf8affdbbf7\nValidate output:\nNAME                 AGE\netcd-1               <unknown>\nscheduler            <unknown>\ncontroller-manager   <unknown>\netcd-0               <unknown>\n\x1b[0;32mCluster validation succeeded\x1b[0m\n== Post-Upgrade Node OS and Kubelet Versions ==\nname: \"bootstrap-e2e-master\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.16.7-beta.0.1+8fc866b3067974\"\nname: \"bootstrap-e2e-minion-group-0xp3\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.15.10-beta.0.1+43baf8affdbbf7\"\nname: \"bootstrap-e2e-minion-group-9scp\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.15.10-beta.0.1+43baf8affdbbf7\"\nname: \"bootstrap-e2e-minion-group-g2xc\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.15.10-beta.0.1+43baf8affdbbf7\"\n", stderr "Project: k8s-gce-gci-1-5-1-6-ctl-skew\nNetwork Project: k8s-gce-gci-1-5-1-6-ctl-skew\nZone: us-west1-b\nINSTANCE_GROUPS=bootstrap-e2e-minion-group\nNODE_NAMES=bootstrap-e2e-minion-group-0xp3 bootstrap-e2e-minion-group-9scp bootstrap-e2e-minion-group-g2xc\nTrying to find master named 'bootstrap-e2e-master'\nLooking for address 'bootstrap-e2e-master-ip'\nUsing master: bootstrap-e2e-master (external IP: 34.82.17.239; internal IP: (not set))\nDeleted [https://www.googleapis.com/compute/v1/projects/k8s-gce-gci-1-5-1-6-ctl-skew/zones/us-west1-b/instances/bootstrap-e2e-master].\nWARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.\nCreated [https://www.googleapis.com/compute/v1/projects/k8s-gce-gci-1-5-1-6-ctl-skew/zones/us-west1-b/instances/bootstrap-e2e-master].\nWARNING: Some requests generated warnings:\n - Disk size: '20 GB' is larger than image size: '10 GB'. You might need to resize the root repartition manually if the operating system does not support automatic resizing. See https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd for details.\n - The resource 'projects/cos-cloud/global/images/cos-73-11647-163-0' is deprecated. 
A suggested replacement is 'projects/cos-cloud/global/images/cos-73-11647-182-0'.\n\nNAME                  ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP   STATUS\nbootstrap-e2e-master  us-west1-b  n1-standard-1               10.138.0.6   34.82.17.239  RUNNING\nWarning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\nProject: k8s-gce-gci-1-5-1-6-ctl-skew\nNetwork Project: k8s-gce-gci-1-5-1-6-ctl-skew\nZone: us-west1-b\n/workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 452: download_dir: unbound variable\n"
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:106
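
The master upgrade itself appears to have completed (post-upgrade validation shows the master on v1.16.7-beta.0.1+8fc866b3067974), but the script then aborted with "/workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 452: download_dir: unbound variable". That message is bash's nounset error: a variable was referenced without ever being set while the script runs with set -u / set -o nounset. A minimal bash sketch of the failure mode, reusing the variable name from the log:

#!/usr/bin/env bash
set -o nounset                  # abort on any reference to an unset variable, as upgrade.sh evidently does
echo "${download_dir}"          # fails here: "download_dir: unbound variable"
echo "${download_dir:-}"        # common guard: default-to-empty expansion does not trip nounset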