Result: FAILURE
Tests: 21 failed / 597 succeeded
Started: 2019-05-22 12:35
Elapsed: 1h47m
Builder: gke-prow-containerd-pool-99179761-xlfp
pod: ea8aca22-7c8d-11e9-a44f-96e6949cb91f
resultstore: https://source.cloud.google.com/results/invocations/cbcaf7d8-5d8d-4138-9e96-d08076c27736/targets/test
infra-commit: e96e1ab77
job-version: v1.12.9-beta.0.48+3e39ad05dbde34
revision: v1.12.9-beta.0.48+3e39ad05dbde34
master_os_image:
node_os_image: cos-u-73-11647-182-0

Test Failures


Cluster upgrade apparmor-upgrade 24m28s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Cluster\supgrade\sapparmor\-upgrade$'
Pod should stay running
Expected
    <v1.PodPhase>: Failed
to equal
    <v1.PodPhase>: Running

k8s.io/kubernetes/test/e2e/upgrades.(*AppArmorUpgradeTest).verifyPodStillUp(0x89dfd58, 0xc000b75b80)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/apparmor.go:89 +0x2d0
k8s.io/kubernetes/test/e2e/upgrades.(*AppArmorUpgradeTest).Test(0x89dfd58, 0xc000b75b80, 0xc0026f2fc0, 0x2)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/apparmor.go:72 +0x5a
k8s.io/kubernetes/test/e2e/lifecycle.(*chaosMonkeyAdapter).Test(0xc001893140, 0xc0018b1380)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:454 +0x309
k8s.io/kubernetes/test/e2e/chaosmonkey.(*chaosmonkey).Do.func1(0xc0018b1380, 0xc00271f790)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:89 +0x76
created by k8s.io/kubernetes/test/e2e/chaosmonkey.(*chaosmonkey).Do
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:86 +0xa7
				from junit_upgradeupgrades.xml
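The assertion at apparmor.go:89 above is a pod-phase check: the pod created before the upgrade must still report phase Running afterwards. Below is a minimal sketch of what that check amounts to, assuming a 1.12-era client-go; the function shape and the namespace/name parameters are illustrative placeholders, not the test's actual code.

package e2esketch

import (
	"fmt"

	"k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// verifyPodStillUp mirrors the failing assertion: fetch the pod and
// require that its phase is still Running.
func verifyPodStillUp(c kubernetes.Interface, ns, name string) error {
	pod, err := c.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if pod.Status.Phase != v1.PodRunning {
		// This run observed Failed here, i.e. the pod died during the upgrade.
		return fmt.Errorf("pod %s/%s phase = %q, want %q", ns, name, pod.Status.Phase, v1.PodRunning)
	}
	return nil
}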



Test 51m12s

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --kubectl-path=../../../../kubernetes_skew/cluster/kubectl.sh --minStartupPods=8 --num-nodes=3 --report-dir=/workspace/_artifacts --disable-log-dump=true: exit status 1
				from junit_runner.xml



UpgradeTest 39m46s

error during kubetest --test --test_args=--ginkgo.focus=\[Feature:ClusterUpgrade\] --upgrade-image=gci --upgrade-target=ci/k8s-stable1 --num-nodes=3 --report-dir=/workspace/_artifacts --disable-log-dump=true --report-prefix=upgrade --check-version-skew=false: exit status 1
				from junit_runner.xml



[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] 7m14s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=InitContainer\s\[NodeConformance\]\sshould\sinvoke\sinit\scontainers\son\sa\sRestartAlways\spod\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:147
Expected error:
    <*errors.errorString | 0xc4200d96b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:239
				
from junit_13.xml
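The "timed out waiting for the condition" string here (and in the Secrets, CustomResourceDefinition, and HPA failures below) is the message of wait.ErrWaitTimeout from k8s.io/apimachinery/pkg/util/wait. A minimal sketch of how such polling surfaces it, with a hypothetical condition and illustrative interval/timeout values:

package e2esketch

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForCondition polls an arbitrary condition; if it never returns true
// within the timeout, wait.Poll returns wait.ErrWaitTimeout, whose Error()
// is exactly "timed out waiting for the condition".
func waitForCondition(cond func() (bool, error)) error {
	return wait.Poll(2*time.Second, 5*time.Minute, wait.ConditionFunc(cond))
}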



[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] 6m38s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-api\-machinery\]\sCustomResourceDefinition\sresources\sSimple\sCustomResourceDefinition\screating\/deleting\scustom\sresource\sdefinition\sobjects\sworks\s\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:147
Expected error:
    <*errors.errorString | 0xc420119250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:239
				
from junit_10.xml
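This conformance test exercises a plain create-then-delete cycle against the apiextensions API. A sketch of that cycle, assuming the v1beta1 apiextensions client that matched this 1.12-era job; the group/kind names are hypothetical, and in this run the cycle simply timed out before completing:

package e2esketch

import (
	apiextensionsv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	"k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// createThenDeleteCRD registers a CustomResourceDefinition and then
// deletes it again, the two operations the test verifies.
func createThenDeleteCRD(c clientset.Interface) error {
	crd := &apiextensionsv1beta1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
		Spec: apiextensionsv1beta1.CustomResourceDefinitionSpec{
			Group:   "example.com",
			Version: "v1",
			Scope:   apiextensionsv1beta1.NamespaceScoped,
			Names: apiextensionsv1beta1.CustomResourceDefinitionNames{
				Plural: "foos",
				Kind:   "Foo",
			},
		},
	}
	if _, err := c.ApiextensionsV1beta1().CustomResourceDefinitions().Create(crd); err != nil {
		return err
	}
	return c.ApiextensionsV1beta1().CustomResourceDefinitions().Delete(crd.Name, &metav1.DeleteOptions{})
}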



[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] 6m23s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-api\-machinery\]\sSecrets\sshould\sbe\sconsumable\svia\sthe\senvironment\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:147
Expected error:
    <*errors.errorString | 0xc4200d96b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:239
				
from junit_25.xml



[sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction 10m35s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-apps\]\sDisruptionController\sevictions\:\senough\spods\,\sreplicaSet\,\spercentage\s\=\>\sshould\sallow\san\seviction$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:148
May 22 14:01:20.148: Couldn't delete ns: "e2e-tests-disruption-qfxd7": namespace e2e-tests-disruption-qfxd7 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace e2e-tests-disruption-qfxd7 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:343
				
from junit_11.xml
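The cleanup error above comes from the framework waiting for namespace deletion to finish: a namespace that is "empty but is not yet removed" still exists as an object (typically held by a finalizer) even though its contents are gone. A sketch of that kind of wait, assuming client-go; the interval and timeout are illustrative, not the framework's actual values:

package e2esketch

import (
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForNamespaceGone polls until the namespace object disappears after
// deletion has been requested.
func waitForNamespaceGone(c kubernetes.Interface, ns string) error {
	return wait.Poll(2*time.Second, 5*time.Minute, func() (bool, error) {
		_, err := c.CoreV1().Namespaces().Get(ns, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // fully removed
		}
		return false, err // still present (err == nil) or a real error
	})
}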



[sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] ReplicationController light Should scale from 2 pods to 1 pod 20m58s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-autoscaling\]\s\[HPA\]\sHorizontal\spod\sautoscaling\s\(scale\sresource\:\sCPU\)\s\[sig\-autoscaling\]\sReplicationController\slight\sShould\sscale\sfrom\s2\spods\sto\s1\spod$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:82
timeout waiting 15m0s for 1 replicas
Expected error:
    <*errors.errorString | 0xc4200d96b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:123
				
from junit_15.xml
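The "timeout waiting 15m0s for 1 replicas" line means the HPA never brought the ReplicationController down from 2 replicas to 1 within the 15-minute window. A sketch of the replica-count wait behind that message, assuming client-go; the namespace/name parameters and the 10s poll interval are illustrative:

package e2esketch

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForReplicas polls the ReplicationController until its observed
// replica count reaches the expected value; the 15m timeout mirrors the
// one in the failure message.
func waitForReplicas(c kubernetes.Interface, ns, name string, want int32) error {
	return wait.Poll(10*time.Second, 15*time.Minute, func() (bool, error) {
		rc, err := c.CoreV1().ReplicationControllers(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		// The HPA drives the scale-down on low CPU usage; the test only
		// observes status.replicas converging to the target (1 here).
		return rc.Status.Replicas == want, nil
	})
}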



[sig-cluster-lifecycle] Upgrade [Feature:Upgrade] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade] 39m39s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-cluster\-lifecycle\]\sUpgrade\s\[Feature\:Upgrade\]\scluster\supgrade\sshould\smaintain\sa\sfunctioning\scluster\s\[Feature\:ClusterUpgrade\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:143
Pod should stay running
Expected
    <v1.PodPhase>: Failed
to equal
    <v1.PodPhase>: Running
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/apparmor.go:89