Result: FAILURE
Tests: 31 failed / 456 succeeded
Started: 2020-01-29 16:47
Elapsed: 15h15m
Revision:
Builder: gke-prow-default-pool-cf4891d4-4f4g
pod: cfb4371a-42b6-11ea-a178-86cbaab4a521
resultstore: https://source.cloud.google.com/results/invocations/18892a78-5077-4f5c-9faa-20aeb111488a/targets/test
infra-commit: 061a468ba
job-version: v1.15.10-beta.0.1+43baf8affdbbf7
master_os_image: cos-73-11647-163-0
node_os_image: cos-73-11647-163-0
revision: v1.15.10-beta.0.1+43baf8affdbbf7

Test Failures


Kubernetes e2e suite [k8s.io] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly] 10m5s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sPrivilegedPod\s\[NodeConformance\]\sshould\senable\sprivileged\scommands\s\[LinuxOnly\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 05:02:35.672: Couldn't delete ns: "e2e-privileged-pod-5921": namespace e2e-privileged-pod-5921 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace e2e-privileged-pod-5921 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_01.xml

Filter through log files | View test history on testgrid
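
Every namespace-deletion failure in this run reports the same condition: the test namespace is empty but never leaves Terminating before the framework's timeout. A minimal triage sketch, assuming live access to the cluster and reusing the namespace name from the failure above (any of the other stuck namespaces can be substituted):

# Show the namespace's status conditions and finalizers, which are what usually
# hold an otherwise-empty namespace in Terminating.
kubectl get namespace e2e-privileged-pod-5921 -o yaml

# Confirm nothing namespaced is actually left behind.
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get -n e2e-privileged-pod-5921 --ignore-not-found --show-kind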


Kubernetes e2e suite [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] 10m15s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sPreStop\sshould\scall\sprestop\swhen\skilling\sa\spod\s\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 04:31:02.790: Couldn't delete ns: "prestop-3486": namespace prestop-3486 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace prestop-3486 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_01.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI works for CRD preserving unknown fields in an embedded object 2m10s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sCustomResourcePublishOpenAPI\sworks\sfor\sCRD\spreserving\sunknown\sfields\sin\san\sembedded\sobject$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_publish_openapi.go:196
Jan 29 17:44:44.926: failed to explain e2e-test-crd-publish-openapi-4708-crds: error running &{../../../../kubernetes_skew/cluster/kubectl.sh [../../../../kubernetes_skew/cluster/kubectl.sh --server=https://34.82.225.7 --kubeconfig=/workspace/.kube/config explain e2e-test-crd-publish-openapi-4708-crds] []  <nil>  Unable to connect to the server: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
 [] <nil> 0xc0064a3b60 exit status 1 <nil> <nil> true [0xc005ae2070 0xc005ae2088 0xc005ae20a0] [0xc005ae2070 0xc005ae2088 0xc005ae20a0] [0xc005ae2080 0xc005ae2098] [0xba6c10 0xba6c10] 0xc001943680 <nil>}:
Command stdout:

stderr:
Unable to connect to the server: context deadline exceeded (Client.Timeout exceeded while awaiting headers)

error:
exit status 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_publish_openapi.go:222
				
Click to see stdout/stderr from junit_01.xml

Filter through log files | View test history on testgrid
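
This failure is a client-side timeout ("context deadline exceeded") rather than an OpenAPI publishing problem, which may be related to the master being replaced during this run's upgrade phase. A hedged way to retry the same call by hand, assuming the cluster is still reachable and using the server address, kubeconfig path, and CRD name from the log above:

# Re-issue the failing explain with an explicit client timeout.
kubectl --server=https://34.82.225.7 --kubeconfig=/workspace/.kube/config \
  --request-timeout=30s \
  explain e2e-test-crd-publish-openapi-4708-crds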


Kubernetes e2e suite [sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob 10m38s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sGarbage\scollector\sshould\sdelete\sjobs\sand\spods\screated\sby\scronjob$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 06:46:12.108: Couldn't delete ns: "gc-1722": namespace gc-1722 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace gc-1722 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_01.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-apps] Deployment deployment should delete old replica sets [Conformance] 10m7s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sDeployment\sdeployment\sshould\sdelete\sold\sreplica\ssets\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 07:29:05.786: Couldn't delete ns: "deployment-67": namespace deployment-67 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace deployment-67 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_01.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-cli] Kubectl Port forwarding [k8s.io] With a server listening on 0.0.0.0 [k8s.io] that expects a client request should support a client that connects, sends NO DATA, and disconnects 10m43s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sPort\sforwarding\s\[k8s\.io\]\sWith\sa\sserver\slistening\son\s0\.0\.0\.0\s\[k8s\.io\]\sthat\sexpects\sa\sclient\srequest\sshould\ssupport\sa\sclient\sthat\sconnects\,\ssends\sNO\sDATA\,\sand\sdisconnects$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 02:27:56.655: Couldn't delete ns: "port-forwarding-7094": namespace port-forwarding-7094 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace port-forwarding-7094 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_01.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] 20m39s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\s\[k8s\.io\]\sGuestbook\sapplication\sshould\screate\sand\sstop\sa\sworking\sapplication\s\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 30 03:18:39.325: Frontend service did not start serving content in 600 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2154
				
Click to see stdout/stderr from junit_01.xml

Filter through log files | View test history on testgrid
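
Here the guestbook resources were created, but the frontend never served content within the test's 600 second window. A rough spot check, assuming cluster access; the namespace is a placeholder (the framework generates one per run) and the tier=frontend label follows the standard guestbook example rather than anything in this log:

# All names are placeholders; the e2e framework creates a fresh namespace per run.
kubectl get svc,endpoints,pods -n <test-namespace> -o wide

# If the endpoints list is empty, describe the frontend pods to see why they never became ready.
kubectl describe pods -n <test-namespace> -l tier=frontend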


Kubernetes e2e suite [sig-cluster-lifecycle] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover 17m41s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cluster\-lifecycle\]\sRestart\s\[Disruptive\]\sshould\srestart\sall\snodes\sand\sensure\sall\snodes\sand\spods\srecover$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/restart.go:86
Jan 30 01:55:35.250: At least one pod wasn't running and ready after the restart.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/restart.go:115
				
Click to see stdout/stderr from junit_01.xml

Find "wasnt" mentions in log files | View test history on testgrid
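
The restart assertion only records that at least one pod failed to come back Running and Ready; the log files above carry the specifics. A quick cluster-wide sketch, assuming access after the restart:

# List anything that is not Running or Completed after the node restart.
kubectl get pods --all-namespaces -o wide | grep -vE 'Running|Completed'

# Confirm all nodes rejoined and report Ready.
kubectl get nodes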


Kubernetes e2e suite [sig-cluster-lifecycle] Upgrade [Feature:Upgrade] master upgrade should maintain a functioning cluster [Feature:MasterUpgrade] 24m39s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cluster\-lifecycle\]\sUpgrade\s\[Feature\:Upgrade\]\smaster\supgrade\sshould\smaintain\sa\sfunctioning\scluster\s\[Feature\:MasterUpgrade\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:91
Jan 29 17:14:48.957: Unexpected error:
    <*errors.errorString | 0xc0018ab510>: {
        s: "error running /workspace/kubernetes_skew/cluster/gce/upgrade.sh [-M v1.16.7-beta.0.1+8fc866b3067974]; got error exit status 1, stdout \"Fetching the previously installed CoreDNS version\\n\\n***WARNING***\\nUpgrading Kubernetes with this script might result in an upgrade to a new etcd version.\\nSome etcd version upgrades, such as 3.0.x to 3.1.x, DO NOT offer a downgrade path.\\nTo pin the etcd version to your current one (e.g. v3.0.17), set the following variables\\nbefore running this script:\\n\\n# example: pin to etcd v3.0.17\\nexport ETCD_IMAGE=3.0.17\\nexport ETCD_VERSION=3.0.17\\n\\nAlternatively, if you choose to allow an etcd upgrade that doesn't support downgrade,\\nyou might still be able to downgrade Kubernetes by pinning to the newer etcd version.\\nIn all cases, it is strongly recommended to have an etcd backup before upgrading.\\n\\n== Pre-Upgrade Node OS and Kubelet Versions ==\\nname: \\\"bootstrap-e2e-master\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.15.10-beta.0.1+43baf8affdbbf7\\\"\\nname: \\\"bootstrap-e2e-minion-group-0dq8\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.15.10-beta.0.1+43baf8affdbbf7\\\"\\nname: \\\"bootstrap-e2e-minion-group-9pcb\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.15.10-beta.0.1+43baf8affdbbf7\\\"\\nname: \\\"bootstrap-e2e-minion-group-r9c5\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.15.10-beta.0.1+43baf8affdbbf7\\\"\\nFound subnet for region us-west1 in network bootstrap-e2e: bootstrap-e2e\\n== Upgrading master to 'https://storage.googleapis.com/kubernetes-release-dev/ci/v1.16.7-beta.0.1+8fc866b3067974/kubernetes-server-linux-amd64.tar.gz'. Do not interrupt, deleting master instance. ==\\n== Upgrading master environment variables. 
==\\n== Waiting for new master to respond to API requests ==\\n....................== Done ==\\nWaiting for CoreDNS to update\\nFetching the latest installed CoreDNS version\\n== Downloading the CoreDNS migration tool ==\\n== Upgrading the CoreDNS ConfigMap ==\\nconfigmap/coredns configured\\n== The CoreDNS Config has been updated ==\\n== Validating cluster post-upgrade ==\\nValidating gce cluster, MULTIZONE=\\nFound 4 node(s).\\nNAME                              STATUS                     ROLES    AGE   VERSION\\nbootstrap-e2e-master              Ready,SchedulingDisabled   <none>   14m   v1.16.7-beta.0.1+8fc866b3067974\\nbootstrap-e2e-minion-group-0dq8   Ready                      <none>   14m   v1.15.10-beta.0.1+43baf8affdbbf7\\nbootstrap-e2e-minion-group-9pcb   Ready                      <none>   14m   v1.15.10-beta.0.1+43baf8affdbbf7\\nbootstrap-e2e-minion-group-r9c5   Ready                      <none>   14m   v1.15.10-beta.0.1+43baf8affdbbf7\\nValidate output:\\nNAME                 AGE\\netcd-1               <unknown>\\nscheduler            <unknown>\\ncontroller-manager   <unknown>\\netcd-0               <unknown>\\n\\x1b[0;32mCluster validation succeeded\\x1b[0m\\n== Post-Upgrade Node OS and Kubelet Versions ==\\nname: \\\"bootstrap-e2e-master\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.16.7-beta.0.1+8fc866b3067974\\\"\\nname: \\\"bootstrap-e2e-minion-group-0dq8\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.15.10-beta.0.1+43baf8affdbbf7\\\"\\nname: \\\"bootstrap-e2e-minion-group-9pcb\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.15.10-beta.0.1+43baf8affdbbf7\\\"\\nname: \\\"bootstrap-e2e-minion-group-r9c5\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.15.10-beta.0.1+43baf8affdbbf7\\\"\\n\", stderr \"Project: k8s-boskos-gce-project-20\\nNetwork Project: k8s-boskos-gce-project-20\\nZone: us-west1-b\\nINSTANCE_GROUPS=bootstrap-e2e-minion-group\\nNODE_NAMES=bootstrap-e2e-minion-group-0dq8 bootstrap-e2e-minion-group-9pcb bootstrap-e2e-minion-group-r9c5\\nTrying to find master named 'bootstrap-e2e-master'\\nLooking for address 'bootstrap-e2e-master-ip'\\nUsing master: bootstrap-e2e-master (external IP: 34.82.225.7; internal IP: (not set))\\nDeleted [https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-20/zones/us-west1-b/instances/bootstrap-e2e-master].\\nWARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.\\nCreated [https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-20/zones/us-west1-b/instances/bootstrap-e2e-master].\\nWARNING: Some requests generated warnings:\\n - Disk size: '20 GB' is larger than image size: '10 GB'. You might need to resize the root repartition manually if the operating system does not support automatic resizing. See https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd for details.\\n - The resource 'projects/cos-cloud/global/images/cos-73-11647-163-0' is deprecated. 
A suggested replacement is 'projects/cos-cloud/global/images/cos-73-11647-182-0'.\\n\\nNAME                  ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP  STATUS\\nbootstrap-e2e-master  us-west1-b  n1-standard-1               10.138.0.6   34.82.225.7  RUNNING\\nWarning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\\nProject: k8s-boskos-gce-project-20\\nNetwork Project: k8s-boskos-gce-project-20\\nZone: us-west1-b\\n/workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 452: download_dir: unbound variable\\n\"",
    }
    error running /workspace/kubernetes_skew/cluster/gce/upgrade.sh [-M v1.16.7-beta.0.1+8fc866b3067974]; got error exit status 1, stdout "Fetching the previously installed CoreDNS version\n\n***WARNING***\nUpgrading Kubernetes with this script might result in an upgrade to a new etcd version.\nSome etcd version upgrades, such as 3.0.x to 3.1.x, DO NOT offer a downgrade path.\nTo pin the etcd version to your current one (e.g. v3.0.17), set the following variables\nbefore running this script:\n\n# example: pin to etcd v3.0.17\nexport ETCD_IMAGE=3.0.17\nexport ETCD_VERSION=3.0.17\n\nAlternatively, if you choose to allow an etcd upgrade that doesn't support downgrade,\nyou might still be able to downgrade Kubernetes by pinning to the newer etcd version.\nIn all cases, it is strongly recommended to have an etcd backup before upgrading.\n\n== Pre-Upgrade Node OS and Kubelet Versions ==\nname: \"bootstrap-e2e-master\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.15.10-beta.0.1+43baf8affdbbf7\"\nname: \"bootstrap-e2e-minion-group-0dq8\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.15.10-beta.0.1+43baf8affdbbf7\"\nname: \"bootstrap-e2e-minion-group-9pcb\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.15.10-beta.0.1+43baf8affdbbf7\"\nname: \"bootstrap-e2e-minion-group-r9c5\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.15.10-beta.0.1+43baf8affdbbf7\"\nFound subnet for region us-west1 in network bootstrap-e2e: bootstrap-e2e\n== Upgrading master to 'https://storage.googleapis.com/kubernetes-release-dev/ci/v1.16.7-beta.0.1+8fc866b3067974/kubernetes-server-linux-amd64.tar.gz'. Do not interrupt, deleting master instance. ==\n== Upgrading master environment variables. 
==\n== Waiting for new master to respond to API requests ==\n....................== Done ==\nWaiting for CoreDNS to update\nFetching the latest installed CoreDNS version\n== Downloading the CoreDNS migration tool ==\n== Upgrading the CoreDNS ConfigMap ==\nconfigmap/coredns configured\n== The CoreDNS Config has been updated ==\n== Validating cluster post-upgrade ==\nValidating gce cluster, MULTIZONE=\nFound 4 node(s).\nNAME                              STATUS                     ROLES    AGE   VERSION\nbootstrap-e2e-master              Ready,SchedulingDisabled   <none>   14m   v1.16.7-beta.0.1+8fc866b3067974\nbootstrap-e2e-minion-group-0dq8   Ready                      <none>   14m   v1.15.10-beta.0.1+43baf8affdbbf7\nbootstrap-e2e-minion-group-9pcb   Ready                      <none>   14m   v1.15.10-beta.0.1+43baf8affdbbf7\nbootstrap-e2e-minion-group-r9c5   Ready                      <none>   14m   v1.15.10-beta.0.1+43baf8affdbbf7\nValidate output:\nNAME                 AGE\netcd-1               <unknown>\nscheduler            <unknown>\ncontroller-manager   <unknown>\netcd-0               <unknown>\n\x1b[0;32mCluster validation succeeded\x1b[0m\n== Post-Upgrade Node OS and Kubelet Versions ==\nname: \"bootstrap-e2e-master\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.16.7-beta.0.1+8fc866b3067974\"\nname: \"bootstrap-e2e-minion-group-0dq8\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.15.10-beta.0.1+43baf8affdbbf7\"\nname: \"bootstrap-e2e-minion-group-9pcb\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.15.10-beta.0.1+43baf8affdbbf7\"\nname: \"bootstrap-e2e-minion-group-r9c5\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.15.10-beta.0.1+43baf8affdbbf7\"\n", stderr "Project: k8s-boskos-gce-project-20\nNetwork Project: k8s-boskos-gce-project-20\nZone: us-west1-b\nINSTANCE_GROUPS=bootstrap-e2e-minion-group\nNODE_NAMES=bootstrap-e2e-minion-group-0dq8 bootstrap-e2e-minion-group-9pcb bootstrap-e2e-minion-group-r9c5\nTrying to find master named 'bootstrap-e2e-master'\nLooking for address 'bootstrap-e2e-master-ip'\nUsing master: bootstrap-e2e-master (external IP: 34.82.225.7; internal IP: (not set))\nDeleted [https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-20/zones/us-west1-b/instances/bootstrap-e2e-master].\nWARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.\nCreated [https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-20/zones/us-west1-b/instances/bootstrap-e2e-master].\nWARNING: Some requests generated warnings:\n - Disk size: '20 GB' is larger than image size: '10 GB'. You might need to resize the root repartition manually if the operating system does not support automatic resizing. See https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd for details.\n - The resource 'projects/cos-cloud/global/images/cos-73-11647-163-0' is deprecated. 
A suggested replacement is 'projects/cos-cloud/global/images/cos-73-11647-182-0'.\n\nNAME                  ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP  STATUS\nbootstrap-e2e-master  us-west1-b  n1-standard-1               10.138.0.6   34.82.225.7  RUNNING\nWarning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\nProject: k8s-boskos-gce-project-20\nNetwork Project: k8s-boskos-gce-project-20\nZone: us-west1-b\n/workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 452: download_dir: unbound variable\n"
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:106
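
The stderr captured above ends with the immediate cause of the non-zero exit: /workspace/kubernetes_skew/cluster/gce/upgrade.sh aborts at line 452 with "download_dir: unbound variable", the standard bash error when a script running under set -u (nounset) references a variable that was never assigned. A minimal, self-contained illustration of that failure mode and the usual guard; this is not the actual upgrade.sh code:

#!/usr/bin/env bash
set -u  # nounset: referencing an unset variable is a fatal error, as in upgrade.sh

# Referencing "$download_dir" here before it is assigned would abort with an
# "unbound variable" error, matching the message in the stderr above.

# The usual guard is a parameter default, so the reference is always defined.
download_dir="${download_dir:-/tmp/kubernetes-upgrade}"
echo "Downloading into ${download_dir}"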