Result: FAILURE
Tests: 29 failed / 571 succeeded
Started: 2020-02-15 19:47
Elapsed: 15h13m
Builder: gke-prow-default-pool-cf4891d4-l6s3
pod: e78b3208-502b-11ea-9bea-16a0f55e352c
resultstore: https://source.cloud.google.com/results/invocations/9bc7bb97-017a-4b83-8947-0df221b2baf8/targets/test
infra-commit: f5dd3ee0e
job-version: v1.15.11-beta.0.1+3b43c8064a328d
master_os_image: cos-73-11647-163-0
node_os_image: cos-73-11647-163-0
revision: v1.15.11-beta.0.1+3b43c8064a328d

Test Failures


Kubernetes e2e suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] 10m4s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sRuntime\sblackbox\stest\son\sterminated\scontainer\sshould\sreport\stermination\smessage\s\[LinuxOnly\]\sif\sTerminationMessagePath\sis\sset\sas\snon\-root\suser\sand\sat\sa\snon\-default\spath\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 09:17:58.809: Couldn't delete ns: "container-runtime-8190": namespace container-runtime-8190 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace container-runtime-8190 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_01.xml

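Most of the failures in this run share the same teardown symptom: the test namespace is reported empty but never finishes deleting before the framework's timeout. If the cluster is still reachable, a minimal sketch of how one might dig into a namespace stuck in Terminating (the namespace name below is taken from the failure above; the same check applies to the other "was not deleted with limit" failures in this run):

# Show why the namespace is still Terminating: finalizers and status conditions.
kubectl get namespace container-runtime-8190 -o yaml

# List any namespaced objects that still exist in it (standard leftover-resource sweep):
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get --show-kind --ignore-not-found -n container-runtime-8190

The spec.finalizers and status fields from the first command are usually enough to tell whether finalization itself is stuck or the namespace controller simply never got to the namespace before the test gave up.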


Kubernetes e2e suite [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] 10m26s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sRuntime\sblackbox\stest\swhen\sstarting\sa\scontainer\sthat\sexits\sshould\srun\swith\sthe\sexpected\sstatus\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 08:05:00.264: Couldn't delete ns: "container-runtime-683": namespace container-runtime-683 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace container-runtime-683 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_01.xml



Kubernetes e2e suite [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] 10m6s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sDocker\sContainers\sshould\sbe\sable\sto\soverride\sthe\simage\'s\sdefault\scommand\sand\sarguments\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 09:07:54.755: Couldn't delete ns: "containers-219": namespace containers-219 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace containers-219 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_01.xml



Kubernetes e2e suite [k8s.io] [sig-node] NodeProblemDetector [DisabledForLargeClusters] should run without error 13m43s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sNodeProblemDetector\s\[DisabledForLargeClusters\]\sshould\srun\swithout\serror$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 09:31:42.651: Couldn't delete ns: "node-problem-detector-274": namespace node-problem-detector-274 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace node-problem-detector-274 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_01.xml



Kubernetes e2e suite [sig-apps] DaemonRestart [Disruptive] Kubelet should not restart containers across restart 11m1s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sDaemonRestart\s\[Disruptive\]\sKubelet\sshould\snot\srestart\scontainers\sacross\srestart$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 04:56:00.894: Couldn't delete ns: "daemonrestart-6081": namespace daemonrestart-6081 was not deleted with limit: timed out waiting for the condition, pods remaining: 2 (&errors.errorString{s:"namespace daemonrestart-6081 was not deleted with limit: timed out waiting for the condition, pods remaining: 2"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_01.xml

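Unlike the namespace timeouts above, this one timed out with pods still present ("pods remaining: 2"). A hedged sketch for seeing what kept them around (namespace name from the failure; assumes the cluster is still up):

# Which pods survived, and are they stuck deleting or never deleted at all?
kubectl get pods -n daemonrestart-6081 -o wide

# A deletionTimestamp plus any finalizers on a lingering pod usually explains the hang:
kubectl get pods -n daemonrestart-6081 \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.deletionTimestamp}{"\t"}{.metadata.finalizers}{"\n"}{end}'

Since this test restarts the kubelet, the usual cause is a pod on a node whose kubelet had not come back yet: graceful pod deletion waits for the kubelet to confirm termination.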


Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability 12m14s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\s\[HPA\]\sHorizontal\spod\sautoscaling\s\(scale\sresource\:\sCPU\)\s\[sig\-autoscaling\]\s\[Serial\]\s\[Slow\]\sReplicationController\sShould\sscale\sfrom\s5\spods\sto\s3\spods\sand\sfrom\s3\sto\s1\sand\sverify\sdecision\sstability$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:64
Unexpected error:
    <*errors.errorString | 0xc003012260>: {
        s: "Only 4 pods started out of 5",
    }
    Only 4 pods started out of 5
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:475
				
Click to see stdout/stderr from junit_01.xml

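This failure happens during setup rather than teardown: the ReplicationController driving the HPA scenario only ever got 4 of its 5 pods running. A minimal sketch for chasing the missing pod (namespace and pod names below are placeholders; the e2e framework generates them per test):

# Desired vs. ready replicas for the test's RC and the HPA that targets it:
kubectl get rc,hpa -n <test-namespace>

# The Pending (or crash-looping) pod's events name the real problem:
kubectl get pods -n <test-namespace> -o wide
kubectl describe pod <missing-pod> -n <test-namespace>

Typical culprits in a small four-node cluster are an unschedulable or NotReady node and slow image pulls; the pod's events distinguish the two.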


Kubernetes e2e suite [sig-cli] Kubectl client [k8s.io] Simple pod should support exec 10m18s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\s\[k8s\.io\]\sSimple\spod\sshould\ssupport\sexec$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 08:15:18.786: Couldn't delete ns: "kubectl-3943": namespace kubectl-3943 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace kubectl-3943 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_01.xml



Kubernetes e2e suite [sig-cluster-lifecycle] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover 17m15s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cluster\-lifecycle\]\sRestart\s\[Disruptive\]\sshould\srestart\sall\snodes\sand\sensure\sall\snodes\sand\spods\srecover$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/restart.go:86
Feb 16 07:33:09.521: At least one pod wasn't running and ready after the restart.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/restart.go:115
				
Click to see stdout/stderr from junit_01.xml

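For the node-restart test, the interesting question is which pod never became Running and Ready again. A quick, hedged way to survey the cluster after the restarts:

# Nodes first: everything should be back to Ready.
kubectl get nodes

# Pods that are not Running or Succeeded anywhere in the cluster:
kubectl get pods --all-namespaces --field-selector=status.phase!=Running,status.phase!=Succeeded

# A pod can be Running but not Ready; describe shows the failing readiness probe or restart loop:
kubectl describe pod <not-ready-pod> -n <its-namespace>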


Kubernetes e2e suite [sig-cluster-lifecycle] Upgrade [Feature:Upgrade] master upgrade should maintain a functioning cluster [Feature:MasterUpgrade] 21m56s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cluster\-lifecycle\]\sUpgrade\s\[Feature\:Upgrade\]\smaster\supgrade\sshould\smaintain\sa\sfunctioning\scluster\s\[Feature\:MasterUpgrade\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:91
Feb 15 20:05:41.532: Unexpected error:
    <*errors.errorString | 0xc0027b5050>: {
        s: "error running /workspace/kubernetes_skew/cluster/gce/upgrade.sh [-M v1.16.8-beta.0.1+abdce0eac9e732]; got error exit status 1, stdout \"Fetching the previously installed CoreDNS version\\n\\n***WARNING***\\nUpgrading Kubernetes with this script might result in an upgrade to a new etcd version.\\nSome etcd version upgrades, such as 3.0.x to 3.1.x, DO NOT offer a downgrade path.\\nTo pin the etcd version to your current one (e.g. v3.0.17), set the following variables\\nbefore running this script:\\n\\n# example: pin to etcd v3.0.17\\nexport ETCD_IMAGE=3.0.17\\nexport ETCD_VERSION=3.0.17\\n\\nAlternatively, if you choose to allow an etcd upgrade that doesn't support downgrade,\\nyou might still be able to downgrade Kubernetes by pinning to the newer etcd version.\\nIn all cases, it is strongly recommended to have an etcd backup before upgrading.\\n\\n== Pre-Upgrade Node OS and Kubelet Versions ==\\nname: \\\"bootstrap-e2e-master\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.15.11-beta.0.1+3b43c8064a328d\\\"\\nname: \\\"bootstrap-e2e-minion-group-fn04\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.15.11-beta.0.1+3b43c8064a328d\\\"\\nname: \\\"bootstrap-e2e-minion-group-grfv\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.15.11-beta.0.1+3b43c8064a328d\\\"\\nname: \\\"bootstrap-e2e-minion-group-j8d1\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.15.11-beta.0.1+3b43c8064a328d\\\"\\nFound subnet for region us-west1 in network bootstrap-e2e: bootstrap-e2e\\n== Upgrading master to 'https://storage.googleapis.com/kubernetes-release-dev/ci/v1.16.8-beta.0.1+abdce0eac9e732/kubernetes-server-linux-amd64.tar.gz'. Do not interrupt, deleting master instance. ==\\n== Upgrading master environment variables. 
==\\n== Waiting for new master to respond to API requests ==\\n......................== Done ==\\nWaiting for CoreDNS to update\\nFetching the latest installed CoreDNS version\\n== Downloading the CoreDNS migration tool ==\\n== Upgrading the CoreDNS ConfigMap ==\\nconfigmap/coredns configured\\n== The CoreDNS Config has been updated ==\\n== Validating cluster post-upgrade ==\\nValidating gce cluster, MULTIZONE=\\nFound 4 node(s).\\nNAME                              STATUS                     ROLES    AGE   VERSION\\nbootstrap-e2e-master              Ready,SchedulingDisabled   <none>   10m   v1.16.8-beta.0.1+abdce0eac9e732\\nbootstrap-e2e-minion-group-fn04   Ready                      <none>   10m   v1.15.11-beta.0.1+3b43c8064a328d\\nbootstrap-e2e-minion-group-grfv   Ready                      <none>   10m   v1.15.11-beta.0.1+3b43c8064a328d\\nbootstrap-e2e-minion-group-j8d1   Ready                      <none>   10m   v1.15.11-beta.0.1+3b43c8064a328d\\nValidate output:\\nNAME                 AGE\\nscheduler            <unknown>\\netcd-1               <unknown>\\ncontroller-manager   <unknown>\\netcd-0               <unknown>\\n\\x1b[0;32mCluster validation succeeded\\x1b[0m\\n== Post-Upgrade Node OS and Kubelet Versions ==\\nname: \\\"bootstrap-e2e-master\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.16.8-beta.0.1+abdce0eac9e732\\\"\\nname: \\\"bootstrap-e2e-minion-group-fn04\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.15.11-beta.0.1+3b43c8064a328d\\\"\\nname: \\\"bootstrap-e2e-minion-group-grfv\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.15.11-beta.0.1+3b43c8064a328d\\\"\\nname: \\\"bootstrap-e2e-minion-group-j8d1\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.15.11-beta.0.1+3b43c8064a328d\\\"\\n\", stderr \"Project: gce-up-c1-4-glat-up-clu\\nNetwork Project: gce-up-c1-4-glat-up-clu\\nZone: us-west1-b\\nINSTANCE_GROUPS=bootstrap-e2e-minion-group\\nNODE_NAMES=bootstrap-e2e-minion-group-fn04 bootstrap-e2e-minion-group-grfv bootstrap-e2e-minion-group-j8d1\\nTrying to find master named 'bootstrap-e2e-master'\\nLooking for address 'bootstrap-e2e-master-ip'\\nUsing master: bootstrap-e2e-master (external IP: 35.247.122.87; internal IP: (not set))\\nDeleted [https://www.googleapis.com/compute/v1/projects/gce-up-c1-4-glat-up-clu/zones/us-west1-b/instances/bootstrap-e2e-master].\\nWARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.\\nCreated [https://www.googleapis.com/compute/v1/projects/gce-up-c1-4-glat-up-clu/zones/us-west1-b/instances/bootstrap-e2e-master].\\nWARNING: Some requests generated warnings:\\n - Disk size: '20 GB' is larger than image size: '10 GB'. You might need to resize the root repartition manually if the operating system does not support automatic resizing. See https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd for details.\\n - The resource 'projects/cos-cloud/global/images/cos-73-11647-163-0' is deprecated. 
A suggested replacement is 'projects/cos-cloud/global/images/cos-73-11647-182-0'.\\n\\nNAME                  ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP    STATUS\\nbootstrap-e2e-master  us-west1-b  n1-standard-1               10.138.0.6   35.247.122.87  RUNNING\\nWarning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\\nProject: gce-up-c1-4-glat-up-clu\\nNetwork Project: gce-up-c1-4-glat-up-clu\\nZone: us-west1-b\\n/workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 452: download_dir: unbound variable\\n\"",
    }
    error running /workspace/kubernetes_skew/cluster/gce/upgrade.sh [-M v1.16.8-beta.0.1+abdce0eac9e732]; got error exit status 1, stdout "Fetching the previously installed CoreDNS version\n\n***WARNING***\nUpgrading Kubernetes with this script might result in an upgrade to a new etcd version.\nSome etcd version upgrades, such as 3.0.x to 3.1.x, DO NOT offer a downgrade path.\nTo pin the etcd version to your current one (e.g. v3.0.17), set the following variables\nbefore running this script:\n\n# example: pin to etcd v3.0.17\nexport ETCD_IMAGE=3.0.17\nexport ETCD_VERSION=3.0.17\n\nAlternatively, if you choose to allow an etcd upgrade that doesn't support downgrade,\nyou might still be able to downgrade Kubernetes by pinning to the newer etcd version.\nIn all cases, it is strongly recommended to have an etcd backup before upgrading.\n\n== Pre-Upgrade Node OS and Kubelet Versions ==\nname: \"bootstrap-e2e-master\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.15.11-beta.0.1+3b43c8064a328d\"\nname: \"bootstrap-e2e-minion-group-fn04\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.15.11-beta.0.1+3b43c8064a328d\"\nname: \"bootstrap-e2e-minion-group-grfv\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.15.11-beta.0.1+3b43c8064a328d\"\nname: \"bootstrap-e2e-minion-group-j8d1\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.15.11-beta.0.1+3b43c8064a328d\"\nFound subnet for region us-west1 in network bootstrap-e2e: bootstrap-e2e\n== Upgrading master to 'https://storage.googleapis.com/kubernetes-release-dev/ci/v1.16.8-beta.0.1+abdce0eac9e732/kubernetes-server-linux-amd64.tar.gz'. Do not interrupt, deleting master instance. ==\n== Upgrading master environment variables. 
==\n== Waiting for new master to respond to API requests ==\n......................== Done ==\nWaiting for CoreDNS to update\nFetching the latest installed CoreDNS version\n== Downloading the CoreDNS migration tool ==\n== Upgrading the CoreDNS ConfigMap ==\nconfigmap/coredns configured\n== The CoreDNS Config has been updated ==\n== Validating cluster post-upgrade ==\nValidating gce cluster, MULTIZONE=\nFound 4 node(s).\nNAME                              STATUS                     ROLES    AGE   VERSION\nbootstrap-e2e-master              Ready,SchedulingDisabled   <none>   10m   v1.16.8-beta.0.1+abdce0eac9e732\nbootstrap-e2e-minion-group-fn04   Ready                      <none>   10m   v1.15.11-beta.0.1+3b43c8064a328d\nbootstrap-e2e-minion-group-grfv   Ready                      <none>   10m   v1.15.11-beta.0.1+3b43c8064a328d\nbootstrap-e2e-minion-group-j8d1   Ready                      <none>   10m   v1.15.11-beta.0.1+3b43c8064a328d\nValidate output:\nNAME                 AGE\nscheduler            <unknown>\netcd-1               <unknown>\ncontroller-manager   <unknown>\netcd-0               <unknown>\n\x1b[0;32mCluster validation succeeded\x1b[0m\n== Post-Upgrade Node OS and Kubelet Versions ==\nname: \"bootstrap-e2e-master\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.16.8-beta.0.1+abdce0eac9e732\"\nname: \"bootstrap-e2e-minion-group-fn04\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.15.11-beta.0.1+3b43c8064a328d\"\nname: \"bootstrap-e2e-minion-group-grfv\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.15.11-beta.0.1+3b43c8064a328d\"\nname: \"bootstrap-e2e-minion-group-j8d1\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.15.11-beta.0.1+3b43c8064a328d\"\n", stderr "Project: gce-up-c1-4-glat-up-clu\nNetwork Project: gce-up-c1-4-glat-up-clu\nZone: us-west1-b\nINSTANCE_GROUPS=bootstrap-e2e-minion-group\nNODE_NAMES=bootstrap-e2e-minion-group-fn04 bootstrap-e2e-minion-group-grfv bootstrap-e2e-minion-group-j8d1\nTrying to find master named 'bootstrap-e2e-master'\nLooking for address 'bootstrap-e2e-master-ip'\nUsing master: bootstrap-e2e-master (external IP: 35.247.122.87; internal IP: (not set))\nDeleted [https://www.googleapis.com/compute/v1/projects/gce-up-c1-4-glat-up-clu/zones/us-west1-b/instances/bootstrap-e2e-master].\nWARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.\nCreated [https://www.googleapis.com/compute/v1/projects/gce-up-c1-4-glat-up-clu/zones/us-west1-b/instances/bootstrap-e2e-master].\nWARNING: Some requests generated warnings:\n - Disk size: '20 GB' is larger than image size: '10 GB'. You might need to resize the root repartition manually if the operating system does not support automatic resizing. See https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd for details.\n - The resource 'projects/cos-cloud/global/images/cos-73-11647-163-0' is deprecated. 
A suggested replacement is 'projects/cos-cloud/global/images/cos-73-11647-182-0'.\n\nNAME                  ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP    STATUS\nbootstrap-e2e-master  us-west1-b  n1-standard-1               10.138.0.6   35.247.122.87  RUNNING\nWarning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\nProject: gce-up-c1-4-glat-up-clu\nNetwork Project: gce-up-c1-4-glat-up-clu\nZone: us-west1-b\n/workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 452: download_dir: unbound variable\n"
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:106
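The stderr tail is the actual failure: /workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 452: download_dir: unbound variable. That error text means the script is running with bash's nounset option (set -u / set -o nounset) and reads download_dir on a code path that never assigned it, so it exits with status 1 even though the stdout above shows the master upgrade and post-upgrade validation completing. A generic sketch of the failure mode and the usual guards (the fallback path below is illustrative, not what upgrade.sh uses):

# Reproduce the failure mode in a throwaway shell:
set -u                                   # nounset: referencing an unset variable aborts the script
# echo "${download_dir}"                 # would abort here with "download_dir: unbound variable"

# Common guards against it:
echo "download_dir=${download_dir:-}"    # default-expand to empty instead of aborting
: "${download_dir:=/tmp/kube-upgrade}"   # or assign a fallback before first use (path is illustrative)
echo "download_dir=${download_dir}"      # safe now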