Result: FAILURE
Tests: 39 failed / 630 succeeded
Started: 2019-11-16 20:23
Elapsed: 2h26m
Revision: v1.16.4-beta.0.3+c0f31a4ef6304d-dirty
Builder: gke-prow-ssd-pool-1a225945-49gz
Pod: d57e9066-08ae-11ea-be88-5a2ed842773b
Resultstore: https://source.cloud.google.com/results/invocations/8f138d25-277e-45a5-9ddd-85ae0659e0c0/targets/test
infra-commit: 0eec43e43
job-version: v1.16.4-beta.0.3+c0f31a4ef6304d-dirty
repo: k8s.io/kubernetes
repo-commit: c0f31a4ef6304d653f387455e7ed1723e7bb5385
repos: k8s.io/kubernetes (release-1.16), sigs.k8s.io/cloud-provider-azure (master)

Test Failures


Kubernetes e2e suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] 17m58s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sLifecycle\sHook\swhen\screate\sa\spod\swith\slifecycle\shook\sshould\sexecute\spoststart\shttp\shook\sproperly\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/framework.go:698
Nov 16 21:24:18.904: wait for pod "pod-with-poststart-http-hook" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc0000d5090>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
test/e2e/framework/pods.go:178
				
stdout/stderr from junit_25.xml

Find pod-with-poststart-http-hook mentions in log files | View test history on testgrid
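
The step that times out here is a poll-until-NotFound on the pod, per pods.go:178. A minimal sketch of that style of wait with client-go (the namespace, timeout, and kubeconfig path below are assumptions; recent client-go signatures take a context, which the 1.16-era framework code did not):

    package main

    import (
        "context"
        "fmt"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForPodToDisappear polls until a Get on the pod returns NotFound.
    // On timeout it surfaces the same "timed out waiting for the condition"
    // error string seen in the failure above.
    func waitForPodToDisappear(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            _, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if apierrors.IsNotFound(err) {
                return true, nil // pod is gone; condition met
            }
            if err != nil {
                return false, err // unexpected API error aborts the wait
            }
            return false, nil // pod still exists; keep polling
        })
    }

    func main() {
        // Hypothetical kubeconfig path and namespace, for illustration only.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitForPodToDisappear(cs, "container-lifecycle-hook-0000", "pod-with-poststart-http-hook", 5*time.Minute); err != nil {
            fmt.Println("wait failed:", err)
        }
    }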


Kubernetes e2e suite [k8s.io] Pods should be updated [NodeConformance] [Conformance] 12m13s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sPods\sshould\sbe\supdated\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/framework.go:152
Nov 16 21:35:23.972: Couldn't delete ns: "pods-5697": namespace pods-5697 was not deleted with limit: timed out waiting for the condition, pods remaining: 1 (&errors.errorString{s:"namespace pods-5697 was not deleted with limit: timed out waiting for the condition, pods remaining: 1"})
test/e2e/framework/framework.go:336
				
stdout/stderr from junit_16.xml

Filter through log files | View test history on testgrid
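
This failure, and several below, share a teardown symptom: the test namespace never finished deleting because pods lingered past the framework's timeout ("pods remaining: 1"). A small sketch (names assumed) of how one might list what is still blocking a terminating namespace:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printRemainingPods lists the pods still present in a namespace stuck in
    // Terminating -- the count the framework reports as "pods remaining: N".
    func printRemainingPods(cs kubernetes.Interface, ns string) error {
        pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, p := range pods.Items {
            fmt.Printf("%s phase=%s deletionTimestamp=%v\n", p.Name, p.Status.Phase, p.DeletionTimestamp)
        }
        return nil
    }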


Kubernetes e2e suite [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] 7m7s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\sshould\sbe\srestarted\swith\sa\sexec\s\"cat\s\/tmp\/health\"\sliveness\sprobe\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/framework.go:698
Nov 16 21:25:11.381: pod container-probe-5785/busybox-bc668cc2-6401-40a8-ad16-41b0c63f954e - expected number of restarts: 1, found restarts: 0
test/e2e/common/container_probe.go:462
				
stdout/stderr from junit_28.xml

Find container-probe-5785/busybox-bc668cc2-6401-40a8-ad16-41b0c63f954e mentions in log files | View test history on testgrid
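
The test runs a container whose liveness probe execs `cat /tmp/health`, then removes the file and expects the kubelet to restart the container; here the restart never happened ("found restarts: 0"). A sketch reconstructed from the test name, not the exact e2e fixture (image, timings, and pod name are assumptions):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // livenessTestPod builds a busybox pod whose liveness probe fails once
    // /tmp/health is removed, so the kubelet should restart the container and
    // the restart count should go from 0 to 1.
    func livenessTestPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "busybox-liveness"}, // hypothetical name
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "busybox",
                    Image: "busybox",
                    // Create the file, keep it briefly, then remove it.
                    Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
                    LivenessProbe: &corev1.Probe{
                        ProbeHandler: corev1.ProbeHandler{ // named Handler in 1.16-era APIs
                            Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
                        },
                        InitialDelaySeconds: 15,
                        FailureThreshold:    1,
                    },
                }},
                RestartPolicy: corev1.RestartPolicyOnFailure,
            },
        }
    }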


Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] 5m21s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAdmissionWebhook\s\[Privileged\:ClusterAdmin\]\sshould\sbe\sable\sto\sdeny\sattaching\spod\s\[Conformance\]$'
test/e2e/apimachinery/webhook.go:88
Nov 16 21:34:40.202: waiting for the deployment status valid%!(EXTRA string=gcr.io/kubernetes-e2e-test-images/agnhost:2.6, string=sample-webhook-deployment, string=webhook-7809)
Unexpected error:
    <*errors.errorString | 0xc002642150>: {
        s: "error waiting for deployment \"sample-webhook-deployment\" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63709536579, loc:(*time.Location)(0x846ab00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63709536579, loc:(*time.Location)(0x846ab00)}}, Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}, v1.DeploymentCondition{Type:\"Progressing\", Status:\"True\", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63709536579, loc:(*time.Location)(0x846ab00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63709536579, loc:(*time.Location)(0x846ab00)}}, Reason:\"ReplicaSetUpdated\", Message:\"ReplicaSet \\\"sample-webhook-deployment-86d95b659d\\\" is progressing.\"}}, CollisionCount:(*int32)(nil)}",
    }
    error waiting for deployment "sample-webhook-deployment" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63709536579, loc:(*time.Location)(0x846ab00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63709536579, loc:(*time.Location)(0x846ab00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63709536579, loc:(*time.Location)(0x846ab00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63709536579, loc:(*time.Location)(0x846ab00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
occurred
test/e2e/apimachinery/webhook.go:849
				
stdout/stderr from junit_21.xml

Filter through log files | View test history on testgrid
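
The webhook test never got its backing deployment to minimum availability: ReadyReplicas stayed 0 and the Available condition stayed False. A minimal sketch of checking that condition with client-go (recent, context-taking signatures):

    package main

    import (
        "context"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // deploymentAvailable reports whether the Deployment's "Available"
    // condition is True, i.e. whether it has minimum availability -- the
    // state the test above timed out waiting for.
    func deploymentAvailable(cs kubernetes.Interface, ns, name string) (bool, error) {
        d, err := cs.AppsV1().Deployments(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range d.Status.Conditions {
            if c.Type == appsv1.DeploymentAvailable {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }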


Kubernetes e2e suite [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction 15m37s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sDisruptionController\sevictions\:\senough\spods\,\sreplicaSet\,\spercentage\s\=\>\sshould\sallow\san\seviction$'
test/e2e/framework/framework.go:152
Nov 16 21:41:16.374: Couldn't delete ns: "disruption-1170": namespace disruption-1170 was not deleted with limit: timed out waiting for the condition, pods remaining: 5 (&errors.errorString{s:"namespace disruption-1170 was not deleted with limit: timed out waiting for the condition, pods remaining: 5"})
test/e2e/framework/framework.go:336
				
stdout/stderr from junit_01.xml

Filter through log files | View test history on testgrid
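
The test itself drives the eviction subresource against a percentage-based PodDisruptionBudget; the failure, as above, is the namespace teardown timing out with pods left behind. For context, a hedged sketch of the eviction scenario using current APIs (policy/v1 and EvictV1 in recent client-go; the 1.16-era test used policy/v1beta1; names and the percentage are assumptions):

    package main

    import (
        "context"

        policyv1 "k8s.io/api/policy/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
        "k8s.io/client-go/kubernetes"
    )

    // allowEviction creates a percentage-based PDB over a ReplicaSet's pods,
    // then requests an eviction the budget should permit. If the budget would
    // be violated, the API server rejects the eviction with 429.
    func allowEviction(cs kubernetes.Interface, ns, podName string) error {
        minAvailable := intstr.FromString("60%") // assumed threshold
        pdb := &policyv1.PodDisruptionBudget{
            ObjectMeta: metav1.ObjectMeta{Name: "eviction-pdb", Namespace: ns}, // hypothetical name
            Spec: policyv1.PodDisruptionBudgetSpec{
                Selector:     &metav1.LabelSelector{MatchLabels: map[string]string{"app": "evictable"}},
                MinAvailable: &minAvailable,
            },
        }
        if _, err := cs.PolicyV1().PodDisruptionBudgets(ns).Create(context.TODO(), pdb, metav1.CreateOptions{}); err != nil {
            return err
        }
        return cs.CoreV1().Pods(ns).EvictV1(context.TODO(), &policyv1.Eviction{
            ObjectMeta: metav1.ObjectMeta{Name: podName, Namespace: ns},
        })
    }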


Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete 32m24s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sStatefulSet\s\[k8s\.io\]\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sshould\simplement\slegacy\sreplacement\swhen\sthe\supdate\sstrategy\sis\sOnDelete$'
test/e2e/apps/statefulset.go:88
Nov 16 21:53:28.245: Unexpected error:
    <*errors.errorString | 0xc000787a90>: {
        s: "Failed to scale statefulset to 0 in 10m0s. Remaining pods:\n[ss2-1: deletion 2019-11-16 21:36:14 +0000 UTC, phase Running, readiness false]",
    }
    Failed to scale statefulset to 0 in 10m0s. Remaining pods:
    [ss2-1: deletion 2019-11-16 21:36:14 +0000 UTC, phase Running, readiness false]
occurred
test/e2e/framework/statefulset/rest.go:148
				
stdout/stderr from junit_17.xml

Filter through log files | View test history on testgrid
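
Teardown failed at rest.go:148 while scaling the set down: pod ss2-1 stayed Running ten minutes past its deletion timestamp. The scale-down itself is just a replicas update; a minimal sketch (names assumed):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // scaleStatefulSetToZero sets .spec.replicas to 0; the controller then
    // deletes pods from the highest ordinal down, which is the drain that
    // never completed in the failure above.
    func scaleStatefulSetToZero(cs kubernetes.Interface, ns, name string) error {
        ss, err := cs.AppsV1().StatefulSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        zero := int32(0)
        ss.Spec.Replicas = &zero
        _, err = cs.AppsV1().StatefulSets(ns).Update(context.TODO(), ss, metav1.UpdateOptions{})
        return err
    }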


Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete 34m3s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sStatefulSet\s\[k8s\.io\]\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sshould\simplement\slegacy\sreplacement\swhen\sthe\supdate\sstrategy\sis\sOnDelete$'
test/e2e/apps/statefulset.go:88
Nov 16 22:27:32.028: Failed waiting for stateful set status.replicas updated to 0: timed out waiting for the condition
test/e2e/framework/statefulset/wait.go:272
				
stdout/stderr from junit_17.xml

Filter through log files | View test history on testgrid
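
Same test, second run: this time the wait at wait.go:272 for .status.replicas to reach 0 timed out. A sketch of that style of status poll (interval and timeout assumed):

    package main

    import (
        "context"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForZeroReplicas polls the StatefulSet until its status reports zero
    // replicas, otherwise failing with "timed out waiting for the condition".
    func waitForZeroReplicas(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(5*time.Second, timeout, func() (bool, error) {
            ss, err := cs.AppsV1().StatefulSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            return ss.Status.Replicas == 0, nil
        })
    }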


Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] 24m53s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sStatefulSet\s\[k8s\.io\]\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sshould\sperform\scanary\supdates\sand\sphased\srolling\supdates\sof\stemplate\smodifications\s\[Conformance\]$'
test/e2e/framework/framework.go:698
Nov 16 22:19:55.116: Failed waiting for state update: timed out waiting for the condition
test/e2e/framework/statefulset/wait.go:129
				
stdout/stderr from junit_08.xml

Filter through log files | View test history on testgrid
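
The state-update wait at wait.go:129 is the phased-rollout machinery this test (and its second run below) drives. Canary and phased rolling updates ride on the RollingUpdate strategy's partition: only pods with ordinal at or above the partition pick up template changes, and lowering the partition phases the rollout across the rest. A sketch of the mechanism, not the e2e fixture itself:

    package main

    import appsv1 "k8s.io/api/apps/v1"

    // setCanaryPartition configures a StatefulSet so that only ordinals
    // >= partition are updated when the pod template changes. Setting the
    // partition to replicas-1 yields a single canary; lowering it step by
    // step, eventually to 0, completes the phased rollout.
    func setCanaryPartition(ss *appsv1.StatefulSet, partition int32) {
        ss.Spec.UpdateStrategy = appsv1.StatefulSetUpdateStrategy{
            Type: appsv1.RollingUpdateStatefulSetStrategyType,
            RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
                Partition: &partition,
            },
        }
    }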


Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] 46m14s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sStatefulSet\s\[k8s\.io\]\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sshould\sperform\scanary\supdates\sand\sphased\srolling\supdates\sof\stemplate\smodifications\s\[Conformance\]$'
test/e2e/framework/framework.go:698
Nov 16 21:37:19.949: Failed waiting for state update: timed out waiting for the condition
test/e2e/framework/statefulset/wait.go:129
				
stdout/stderr from junit_08.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] 6m43s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\sUpdate\sDemo\sshould\sdo\sa\srolling\supdate\sof\sa\sreplication\scontroller\s\s\[Conformance\]$'
test/e2e/framework/framework.go:698
Nov 16 21:34:54.580: Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/usr/local/bin/kubectl [kubectl --server=https://kubetest-2946d6ca-08af-11ea-83ce-221128314880.westus2.cloudapp.azure.com --kubeconfig=/workspace/aks713732901/kubeconfig/kubeconfig.westus2.json rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-40] []  0xc0024cd280 Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\n Command \"rolling-update\" is deprecated, use \"rollout\" instead\nerror: timed out waiting for any update progress to be made\n [] <nil> 0xc001937e00 exit status 1 <nil> <nil> true [0xc001fbed80 0xc001fbeda8 0xc001fbedb8] [0xc001fbed80 0xc001fbeda8 0xc001fbedb8] [0xc001fbed88 0xc001fbeda0 0xc001fbedb0] [0x10f01d0 0x10f0300 0x10f0300] 0xc00206daa0 <nil>}:\nCommand stdout:\nCreated update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\n\nstderr:\nCommand \"rolling-update\" is deprecated, use \"rollout\" instead\nerror: timed out waiting for any update progress to be made\n\nerror:\nexit status 1",
        },
        Code: 1,
    }
    error running &{/usr/local/bin/kubectl [kubectl --server=https://kubetest-2946d6ca-08af-11ea-83ce-221128314880.westus2.cloudapp.azure.com --kubeconfig=/workspace/aks713732901/kubeconfig/kubeconfig.westus2.json rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-40] []  0xc0024cd280 Created update-demo-kitten
    Scaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
    Scaling update-demo-kitten up to 1
    Scaling update-demo-nautilus down to 1
    Scaling update-demo-kitten up to 2
     Command "rolling-update" is deprecated, use "rollout" instead
    error: timed out waiting for any update progress to be made
     [] <nil> 0xc001937e00 exit status 1 <nil> <nil> true [0xc001fbed80 0xc001fbeda8 0xc001fbedb8] [0xc001fbed80 0xc001fbeda8 0xc001fbedb8] [0xc001fbed88 0xc001fbeda0 0xc001fbedb0] [0x10f01d0 0x10f0300 0x10f0300] 0xc00206daa0 <nil>}:
    Command stdout:
    Created update-demo-kitten
    Scaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
    Scaling update-demo-kitten up to 1
    Scaling update-demo-nautilus down to 1
    Scaling update-demo-kitten up to 2
    
    stderr:
    Command "rolling-update" is deprecated, use "rollout" instead
    error: timed out waiting for any update progress to be made
    
    error:
    exit status 1
occurred
test/e2e/framework/util.go:1539
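
The stderr above already flags the relevant deprecation: `kubectl rolling-update` drove the update client-side and was replaced by Deployments and `kubectl rollout`. A hedged sketch of the modern equivalent in client-go (deployment name and image are assumptions): changing the pod template triggers a server-side rolling update, which can then be observed through the Deployment's status instead of being paced by the client.

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // rollImage updates the first container's image in a Deployment's pod
    // template; the deployment controller then performs the rolling update
    // that rolling-update used to drive from the client.
    func rollImage(cs kubernetes.Interface, ns, name, image string) error {
        d, err := cs.AppsV1().Deployments(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        d.Spec.Template.Spec.Containers[0].Image = image
        _, err = cs.AppsV1().Deployments(ns).Update(context.TODO(), d, metav1.UpdateOptions{})
        return err
    }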