Result: FAILURE
Tests: 28 failed / 49 succeeded
Started: 2020-02-13 15:35
Elapsed: 10h30m
Builder: gke-prow-default-pool-cf4891d4-tv62
pod: 4e060b44-4e76-11ea-8bf0-4660bf95a9d5
resultstore: https://source.cloud.google.com/results/invocations/0171e8a3-0d69-49e5-a152-db0ee90da32b/targets/test
infra-commit: 10f1a3ece
job-version: v1.16.8-beta.0.1+abdce0eac9e732-dirty
repo: k8s.io/kubernetes (branch release-1.16)
repo-commit: abdce0eac9e732e32c1dac7347c504c7c920c210

Test Failures


Kubernetes e2e suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] 16m43s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sLifecycle\sHook\swhen\screate\sa\spod\swith\slifecycle\shook\sshould\sexecute\sprestop\shttp\shook\sproperly\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Feb 13 17:41:40.452: wait for pod "pod-with-prestop-http-hook" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc0000df000>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:178
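Note on the recurring error: nearly every failure below bottoms out in the same message, "timed out waiting for the condition". That is the fixed text of wait.ErrWaitTimeout from k8s.io/apimachinery/pkg/util/wait, which the e2e framework's polling helpers (wait for a pod to disappear, wait for a pod to start, and so on) return when the polled condition never becomes true before the timeout. A minimal, self-contained sketch of the pattern follows; the interval, timeout, and condition here are illustrative stand-ins, not the framework's actual values.

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Poll a condition that never becomes true. When the timeout elapses,
	// PollImmediate returns wait.ErrWaitTimeout, whose message is exactly
	// "timed out waiting for the condition" -- the string seen in every
	// gomega dump in this report.
	err := wait.PollImmediate(100*time.Millisecond, 500*time.Millisecond,
		func() (bool, error) {
			return false, nil // not done yet, no hard error: keep polling
		})
	fmt.Println(err) // timed out waiting for the condition
}

Each such dump therefore says only that some expected state change never happened within the deadline; the per-poll detail is in the junit stdout/stderr.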


Kubernetes e2e suite [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] 9m10s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sPreStop\sshould\scall\sprestop\swhen\skilling\sa\spod\s\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Feb 13 17:05:44.195: waiting for server pod to start
Unexpected error:
    <*errors.errorString | 0xc0000df000>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:75


Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] 7m36s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAdmissionWebhook\s\[Privileged\:ClusterAdmin\]\sshould\shonor\stimeout\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 13 18:53:07.535: All nodes should be ready after test, Not ready nodes: ", 2116k8s000"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:393
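Note on the NotReady failures: this test (like the ResourceQuota and "Kubectl expose" failures below) did not fail on its own assertions; the framework's after-test health check found node 2116k8s000 NotReady. Readiness is read off the node's status conditions, roughly as in the sketch below; isNodeReady is a hypothetical helper written for illustration, not the framework's actual check, which also accounts for taints and schedulability.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isNodeReady reports whether the node's NodeReady condition is True.
// (Illustrative helper only; not part of the e2e framework.)
func isNodeReady(node *corev1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false // no NodeReady condition reported at all
}

func main() {
	fmt.Println(isNodeReady(&corev1.Node{})) // false: empty status
}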


Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] 3m26s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sResourceQuota\sshould\screate\sa\sResourceQuota\sand\scapture\sthe\slife\sof\sa\ssecret\.\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 13 18:45:09.987: All nodes should be ready after test, Not ready nodes: ", 2116k8s000"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:393


Kubernetes e2e suite [sig-apps] CronJob should delete successful/failed finished jobs with limit of one job 9m32s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sCronJob\sshould\sdelete\ssuccessful\/failed\sfinished\sjobs\swith\slimit\sof\sone\sjob$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:233
Feb 13 22:34:30.264: Failed to ensure a finished cronjob exists in namespace cronjob-8748
Unexpected error:
    <*errors.errorString | 0xc0000df000>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:267


Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] 24m48s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sStatefulSet\s\[k8s\.io\]\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sshould\sperform\scanary\supdates\sand\sphased\srolling\supdates\sof\stemplate\smodifications\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Feb 14 01:18:23.819: Failed waiting for state update: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:129


Kubernetes e2e suite [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] 11m4s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-auth\]\sServiceAccounts\sshould\sallow\sopting\sout\sof\sAPI\stoken\sautomount\s\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 13 18:41:51.829: Couldn't delete ns: "svcaccounts-7399": namespace svcaccounts-7399 was not deleted with limit: timed out waiting for the condition, pods remaining: 5 (&errors.errorString{s:"namespace svcaccounts-7399 was not deleted with limit: timed out waiting for the condition, pods remaining: 5"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:336
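Note on the namespace-deletion failures: here (and in the "Kubectl describe" failure below) the test body passed, but teardown could not finish deleting the test namespace, with 5 pods still terminating. Deletion is complete only once a Get on the namespace returns NotFound; a minimal sketch of that terminal check, with the error constructed by hand purely for illustration:

package main

import (
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
	// Teardown repeatedly Gets the namespace and treats NotFound as
	// "fully deleted"; a pod stuck terminating keeps the namespace in
	// phase Terminating, which is what "pods remaining: 5" reports.
	err := apierrors.NewNotFound(
		schema.GroupResource{Resource: "namespaces"}, "svcaccounts-7399")
	fmt.Println(apierrors.IsNotFound(err)) // true: namespace is gone
}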


Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] ReplicationController light Should scale from 2 pods to 1 pod [Slow] 3m26s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\s\[Feature\:HPA\]\sHorizontal\spod\sautoscaling\s\(scale\sresource\:\sCPU\)\s\[sig\-autoscaling\]\sReplicationController\slight\sShould\sscale\sfrom\s2\spods\sto\s1\spod\s\[Slow\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:82
Feb 13 17:49:56.420: Unexpected error:
    <*errors.errorString | 0xc00170a2c0>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:527


Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability 14m26s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\s\[Feature\:HPA\]\sHorizontal\spod\sautoscaling\s\(scale\sresource\:\sCPU\)\s\[sig\-autoscaling\]\s\[Serial\]\s\[Slow\]\sReplicationController\sShould\sscale\sfrom\s1\spod\sto\s3\spods\sand\sfrom\s3\sto\s5\sand\sverify\sdecision\sstability$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:61
Feb 13 20:49:41.366: Unexpected error:
    <*errors.errorString | 0xc001b1a350>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:527


Kubernetes e2e suite [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] 11m56s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\sKubectl\sdescribe\sshould\scheck\sif\skubectl\sdescribe\sprints\srelevant\sinformation\sfor\src\sand\spods\s\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 14 00:43:04.243: Couldn't delete ns: "kubectl-2222": namespace kubectl-2222 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace kubectl-2222 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:336


Kubernetes e2e suite [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] 13m39s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\sKubectl\sexpose\sshould\screate\sservices\sfor\src\s\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 13 19:08:02.597: All nodes should be ready after test, Not ready nodes: ", 2116k8s000"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:393


Kubernetes e2e suite [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] 19m8s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\sUpdate\sDemo\sshould\sdo\sa\srolling\supdate\sof\sa\sreplication\scontroller\s\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Feb 13 16:46:39.710: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/rc_util.go:260


Kubernetes e2e suite [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] 8m12s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-network\]\sDNS\sshould\sprovide\sDNS\sfor\spods\sfor\sSubdomain\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Feb 13 22:44:02.389: Unexpected error:
    <*errors.errorString | 0xc0000df000>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:576


Kubernetes e2e suite [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] 19m20s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sConfigMap\soptional\supdates\sshould\sbe\sreflected\sin\svolume\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Feb 13 19:34:17.984: Unexpected error:
    <*errors.errorString | 0xc0000df000>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:113


Kubernetes e2e suite [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] 20m20s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sDownward\sAPI\svolume\sshould\sprovide\snode\sallocatable\s\(cpu\)\sas\sdefault\scpu\slimit\sif\sthe\slimit\sis\snot\sset\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Feb 13 18:17:34.357: wait for pod "downwardapi-volume-5370260b-9a37-49cb-a434-f8588eada253" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc0000df000>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:178


Kubernetes e2e suite [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] 22m20s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sDownward\sAPI\svolume\sshould\sprovide\spodname\sonly\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Feb 13 22:12:59.531: wait for pod "downwardapi-volume-6c23576c-6580-4856-8c17-3acc686bd850" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc0000df000>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:178