Result: FAILURE
Tests: 12 failed / 603 succeeded
Started: 2019-05-22 18:38
Elapsed: 1h47m
Builder: gke-prow-containerd-pool-99179761-6j45
pod: 9fe46c37-7cc0-11e9-a44f-96e6949cb91f
resultstore: https://source.cloud.google.com/results/invocations/d6876a33-4a0b-4c86-bf8e-de1dbdd34e6b/targets/test
infra-commit: fb0685e82
job-version: v1.12.9-beta.0.48+3e39ad05dbde34
node_os_image: cos-u-73-11647-182-0
revision: v1.12.9-beta.0.48+3e39ad05dbde34

Test Failures


Cluster upgrade apparmor-upgrade 28m42s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Cluster\supgrade\sapparmor\-upgrade$'
Should be able to get pod
Unexpected error:
    <*errors.StatusError | 0xc002170360>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
            Status: "Failure",
            Message: "pods \"test-apparmor-gtkx8\" not found",
            Reason: "NotFound",
            Details: {
                Name: "test-apparmor-gtkx8",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 404,
        },
    }
    pods "test-apparmor-gtkx8" not found
occurred

k8s.io/kubernetes/test/e2e/upgrades.(*AppArmorUpgradeTest).verifyPodStillUp(0x89dfd58, 0xc000bb4dc0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/apparmor.go:88 +0x159
k8s.io/kubernetes/test/e2e/upgrades.(*AppArmorUpgradeTest).Test(0x89dfd58, 0xc000bb4dc0, 0xc0029fe960, 0x2)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/apparmor.go:72 +0x5a
k8s.io/kubernetes/test/e2e/lifecycle.(*chaosMonkeyAdapter).Test(0xc001752a80, 0xc001bfb020)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:454 +0x309
k8s.io/kubernetes/test/e2e/chaosmonkey.(*chaosmonkey).Do.func1(0xc001bfb020, 0xc000ff7d80)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:89 +0x76
created by k8s.io/kubernetes/test/e2e/chaosmonkey.(*chaosmonkey).Do
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:86 +0xa7
				from junit_upgradeupgrades.xml
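
The failing assertion comes from verifyPodStillUp (apparmor.go:88): the test pod created before the upgrade is expected to still exist afterwards, but the apiserver returns a 404 NotFound for it. As a rough sketch of that kind of check with client-go (recent, context-aware API; the function name and clientset wiring here are assumptions, not the test's actual code):

    package upgradechecks

    import (
        "context"
        "fmt"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // podSurvivedUpgrade fetches the pre-upgrade pod by name and treats a
    // NotFound as a hard failure rather than a transient error; the 404 in
    // the failure above means the pod was deleted (or its node was recreated
    // without it) somewhere during the upgrade.
    func podSurvivedUpgrade(c kubernetes.Interface, ns, name string) error {
        pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            return fmt.Errorf("pod %q did not survive the upgrade: %w", name, err)
        }
        if err != nil {
            return fmt.Errorf("getting pod %q: %w", name, err)
        }
        if pod.Status.Phase != "Running" {
            return fmt.Errorf("pod %q is %s, want Running", name, pod.Status.Phase)
        }
        return nil
    }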



Test 48m15s

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --kubectl-path=../../../../kubernetes_skew/cluster/kubectl.sh --minStartupPods=8 --num-nodes=3 --report-dir=/workspace/_artifacts --disable-log-dump=true: exit status 1
				from junit_runner.xml



UpgradeTest 43m27s

error during kubetest --test --test_args=--ginkgo.focus=\[Feature:ClusterUpgrade\] --upgrade-image=gci --upgrade-target=ci/k8s-stable1 --num-nodes=3 --report-dir=/workspace/_artifacts --disable-log-dump=true --report-prefix=upgrade --check-version-skew=false: exit status 1
				from junit_runner.xml



[sig-api-machinery] AdmissionWebhook Should mutate configmap 7m0s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-api\-machinery\]\sAdmissionWebhook\sShould\smutate\sconfigmap$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:89
waiting for the deployment sample-webhook-deployment (image gcr.io/kubernetes-e2e-test-images/webhook:1.12v2, namespace e2e-tests-webhook-cj4rq) to reach a valid status
Expected error:
    <*errors.errorString | 0xc42081af20>: {
        s: "error waiting for deployment \"sample-webhook-deployment\" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63694151752, loc:(*time.Location)(0x6c33f60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63694151752, loc:(*time.Location)(0x6c33f60)}}, Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}, v1.DeploymentCondition{Type:\"Progressing\", Status:\"True\", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63694151754, loc:(*time.Location)(0x6c33f60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63694151751, loc:(*time.Location)(0x6c33f60)}}, Reason:\"ReplicaSetUpdated\", Message:\"ReplicaSet \\\"sample-webhook-deployment-5c6bc65b65\\\" is progressing.\"}}, CollisionCount:(*int32)(nil)}",
    }
    error waiting for deployment "sample-webhook-deployment" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63694151752, loc:(*time.Location)(0x6c33f60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63694151752, loc:(*time.Location)(0x6c33f60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63694151754, loc:(*time.Location)(0x6c33f60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63694151751, loc:(*time.Location)(0x6c33f60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5c6bc65b65\" is progressing."}}, CollisionCount:(*int32)(nil)}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:305
				
from junit_21.xml
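
Here the webhook's backing Deployment never reaches minimum availability (ReadyReplicas:0, reason MinimumReplicasUnavailable), so the e2e helper gives up. A minimal sketch of that style of availability wait, assuming a recent client-go and wait.PollImmediate; the helper name and timeouts are hypothetical:

    package webhookwait

    import (
        "context"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForDeploymentAvailable polls the Deployment until at least one
    // replica is available. In the failure above the condition never holds:
    // the status stays at AvailableReplicas:0 / UnavailableReplicas:1.
    func waitForDeploymentAvailable(c kubernetes.Interface, ns, name string) error {
        return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
            d, err := c.AppsV1().Deployments(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, err // a hard API error stops the poll early
            }
            return d.Status.AvailableReplicas > 0, nil
        })
    }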



[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] 6m10s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-apps\]\sStatefulSet\s\[k8s\.io\]\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sBurst\sscaling\sshould\srun\sto\scompletion\seven\swith\sunhealthy\spods\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:147
Expected error:
    <*errors.errorString | 0xc4200d96b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:239
				
from junit_05.xml
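
The bare "timed out waiting for the condition" text in this and the following failures is the stock sentinel wait.ErrWaitTimeout from k8s.io/apimachinery/pkg/util/wait, returned whenever a polled condition never succeeds before the deadline; the message itself says nothing about which condition failed. A tiny runnable illustration:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        // A condition that never becomes true, to show where the opaque
        // e2e error text comes from.
        never := func() (bool, error) { return false, nil }

        err := wait.PollImmediate(time.Second, 3*time.Second, never)
        if err == wait.ErrWaitTimeout {
            fmt.Println(err) // prints: timed out waiting for the condition
        }
    }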



[sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] ReplicationController light Should scale from 2 pods to 1 pod 7m5s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-autoscaling\]\s\[HPA\]\sHorizontal\spod\sautoscaling\s\(scale\sresource\:\sCPU\)\s\[sig\-autoscaling\]\sReplicationController\slight\sShould\sscale\sfrom\s2\spods\sto\s1\spod$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:82
Expected error:
    <*errors.errorString | 0xc4200d96a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:528
				
from junit_25.xml
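
For the HPA case the timeout fires inside the resource-consumer wait (autoscaling_utils.go:528) while waiting for the ReplicationController to scale from 2 pods down to 1. A hedged sketch of a replica-count wait of that shape; the helper name, interval, and timeout are assumptions:

    package hpawait

    import (
        "context"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForReplicas polls a ReplicationController until it settles at the
    // expected replica count; a wait.ErrWaitTimeout here means the HPA never
    // scaled the workload down to the target within the window.
    func waitForReplicas(c kubernetes.Interface, ns, name string, want int32) error {
        return wait.PollImmediate(10*time.Second, 15*time.Minute, func() (bool, error) {
            rc, err := c.CoreV1().ReplicationControllers(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            return rc.Status.ReadyReplicas == want, nil
        })
    }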



[sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] 5m59s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-cli\]\sKubectl\sclient\s\[k8s\.io\]\sKubectl\srun\sdeployment\sshould\screate\sa\sdeployment\sfrom\san\simage\s\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:147
Expected error:
    <*errors.errorString | 0xc4200d96a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:239
				
from junit_11.xml



[sig-cluster-lifecycle] Upgrade [Feature:Upgrade] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade] 43m20s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-cluster\-lifecycle\]\sUpgrade\s\[Feature\:Upgrade\]\scluster\supgrade\sshould\smaintain\sa\sfunctioning\scluster\s\[Feature\:ClusterUpgrade\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:143
Should be able to get pod
Unexpected error:
    <*errors.StatusError | 0xc002170360>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
            Status: "Failure",
            Message: "pods \"test-apparmor-gtkx8\" not found",
            Reason: "NotFound",
            Details: {
                Name: "test-apparmor-gtkx8",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 404,
        },
    }
    pods "test-apparmor-gtkx8" not found
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/apparmor.go:88