Result: FAILURE
Tests: 52 failed / 622 succeeded
Started: 2019-11-19 20:31
Elapsed: 2h38m
Builder: gke-prow-ssd-pool-1a225945-m4r6
pod: 7f3fc623-0b0b-11ea-b5e9-3289c6e090ac
resultstore: https://source.cloud.google.com/results/invocations/869e68db-23f1-4227-8a73-85a923a4c6ed/targets/test
infra-commit: ca3411f8f
job-version: v1.16.4-beta.0.3+c0f31a4ef6304d-dirty
repo: k8s.io/kubernetes
repo-commit: c0f31a4ef6304d653f387455e7ed1723e7bb5385
repos: k8s.io/kubernetes (release-1.16), sigs.k8s.io/cloud-provider-azure (master)
revision: v1.16.4-beta.0.3+c0f31a4ef6304d-dirty

Test Failures


Kubernetes e2e suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] 13m41s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sLifecycle\sHook\swhen\screate\sa\spod\swith\slifecycle\shook\sshould\sexecute\spoststart\sexec\shook\sproperly\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/framework.go:152
Nov 19 21:55:13.505: Couldn't delete ns: "container-lifecycle-hook-5127": namespace container-lifecycle-hook-5127 was not deleted with limit: timed out waiting for the condition, pods remaining: 1 (&errors.errorString{s:"namespace container-lifecycle-hook-5127 was not deleted with limit: timed out waiting for the condition, pods remaining: 1"})
test/e2e/framework/framework.go:336
				
stdout/stderr from junit_05.xml

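The teardown failure above means one pod was still alive when the framework tried to delete the test namespace. A minimal diagnostic sketch, assuming access to the test cluster while the namespace still exists (these kubectl commands are illustrative and are not part of the job output):

# List whatever is still running in the leftover namespace and inspect why
# the namespace deletion is blocked.
kubectl get pods -n container-lifecycle-hook-5127 -o wide
kubectl describe namespace container-lifecycle-hook-5127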


Kubernetes e2e suite [k8s.io] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly] 12m24s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sPrivilegedPod\s\[NodeConformance\]\sshould\senable\sprivileged\scommands\s\[LinuxOnly\]$'
test/e2e/framework/framework.go:152
Nov 19 21:33:34.875: Couldn't delete ns: "e2e-privileged-pod-6723": namespace e2e-privileged-pod-6723 was not deleted with limit: timed out waiting for the condition, pods remaining: 1 (&errors.errorString{s:"namespace e2e-privileged-pod-6723 was not deleted with limit: timed out waiting for the condition, pods remaining: 1"})
test/e2e/framework/framework.go:336
				
stdout/stderr from junit_01.xml



Kubernetes e2e suite [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] 5m14s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\sshould\sbe\srestarted\swith\sa\sexec\s\"cat\s\/tmp\/health\"\sliveness\sprobe\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/framework.go:698
Nov 19 21:25:13.910: pod container-probe-7654/busybox-8ecedb8e-98eb-4867-8fbb-10f9e4e6c8b4 - expected number of restarts: 1, found restarts: 0
test/e2e/common/container_probe.go:462
				
stdout/stderr from junit_24.xml

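The probe test asserts on the container's restartCount after the exec probe (`cat /tmp/health`) starts failing; here the count stayed at 0. A hedged sketch for reading the same field by hand on the pod named in the error (assumes the cluster and namespace are still reachable; not part of the job output):

# Read the restart count the test compares against its expectation of 1.
kubectl get pod busybox-8ecedb8e-98eb-4867-8fbb-10f9e4e6c8b4 -n container-probe-7654 \
  -o jsonpath='{.status.containerStatuses[0].restartCount}'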


Kubernetes e2e suite [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] 1m18s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sPods\sExtended\s\[k8s\.io\]\sDelete\sGrace\sPeriod\sshould\sbe\ssubmitted\sand\sremoved\s\[Conformance\]$'
test/e2e/framework/framework.go:698
Nov 19 21:16:12.598: kubelet never observed the termination notice
Unexpected error:
    <*errors.errorString | 0xc0000a1090>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
test/e2e/node/pods.go:163
				
stdout/stderr from junit_14.xml

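The grace-period test deletes a pod and then waits for the kubelet to observe and report the termination, which never happened within the timeout here. A rough sketch of the same kind of delete done by hand (pod name, namespace, and grace period are placeholders, not taken from the job):

# Hypothetical manual version of the graceful delete the test performs.
kubectl delete pod <pod-name> -n <namespace> --grace-period=30 --wait=false
# The deletionTimestamp is set immediately; the pod should disappear once the
# kubelet confirms termination.
kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.metadata.deletionTimestamp}'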


Kubernetes e2e suite [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] 2m11s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sPods\sExtended\s\[k8s\.io\]\sDelete\sGrace\sPeriod\sshould\sbe\ssubmitted\sand\sremoved\s\[Conformance\]$'
test/e2e/framework/framework.go:698
Nov 19 21:17:46.990: kubelet never observed the termination notice
Unexpected error:
    <*errors.errorString | 0xc0000a1090>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
test/e2e/node/pods.go:163
				
stdout/stderr from junit_14.xml



Kubernetes e2e suite [sig-apps] Deployment deployment should support proportional scaling [Conformance] 5m17s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sDeployment\sdeployment\sshould\ssupport\sproportional\sscaling\s\[Conformance\]$'
test/e2e/framework/framework.go:698
Nov 19 21:45:01.210: error in waiting for pods to come up: failed to wait for pods running: [timed out waiting for the condition]
Unexpected error:
    <*errors.errorString | 0xc00096c780>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
occurred
test/e2e/apps/deployment.go:719
				
stdout/stderr from junit_15.xml

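Here the deployment's pods never reached Running, so the proportional-scaling check could not proceed. A minimal sketch for checking rollout progress by hand (deployment name and namespace are placeholders, not from the log):

# Illustrative rollout and pod checks; names are hypothetical.
kubectl rollout status deployment/<name> -n <namespace> --timeout=5m
kubectl get pods -n <namespace> -o wide
kubectl describe deployment <name> -n <namespace>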


Kubernetes e2e suite [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it 11m27s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sDisruptionController\sshould\sblock\san\seviction\suntil\sthe\sPDB\sis\supdated\sto\sallow\sit$'
test/e2e/framework/framework.go:152
Nov 19 21:45:46.070: Couldn't delete ns: "disruption-4454": namespace disruption-4454 was not deleted with limit: timed out waiting for the condition, pods remaining: 1 (&errors.errorString{s:"namespace disruption-4454 was not deleted with limit: timed out waiting for the condition, pods remaining: 1"})
test/e2e/framework/framework.go:336
				
stdout/stderr from junit_20.xml

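This test updates a PodDisruptionBudget so that a previously blocked eviction is allowed; the run then failed while tearing down the namespace with one pod remaining. A hedged sketch for inspecting the PDB state and the leftover pod in that namespace (illustrative commands, not part of the job output):

# Check how many disruptions the PDB currently allows and what is left behind.
kubectl get pdb -n disruption-4454 -o wide
kubectl get pods -n disruption-4454 -o wide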


Kubernetes e2e suite [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it 13m59s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sDisruptionController\sshould\sblock\san\seviction\suntil\sthe\sPDB\sis\supdated\sto\sallow\sit$'
test/e2e/framework/framework.go:152
Nov 19 21:59:45.506: Couldn't delete ns: "disruption-6440": namespace disruption-6440 was not deleted with limit: timed out waiting for the condition, pods remaining: 1 (&errors.errorString{s:"namespace disruption-6440 was not deleted with limit: timed out waiting for the condition, pods remaining: 1"})
test/e2e/framework/framework.go:336
				
stdout/stderr from junit_20.xml



Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete 30m45s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sStatefulSet\s\[k8s\.io\]\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sshould\simplement\slegacy\sreplacement\swhen\sthe\supdate\sstrategy\sis\sOnDelete$'
test/e2e/apps/statefulset.go:88
Nov 19 22:16:03.040: Unexpected error:
    <*errors.errorString | 0xc00232c510>: {
        s: "Failed to scale statefulset to 0 in 10m0s. Remaining pods:\n[ss2-1: deletion 2019-11-19 21:58:26 +0000 UTC, phase Running, readiness false]",
    }
    Failed to scale statefulset to 0 in 10m0s. Remaining pods:
    [ss2-1: deletion 2019-11-19 21:58:26 +0000 UTC, phase Running, readiness false]
occurred
test/e2e/framework/statefulset/rest.go:148
				
stdout/stderr from junit_18.xml

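Scaling the StatefulSet down to 0 left ss2-1 Running well past its deletion timestamp. A rough sketch of how the stuck pod could be inspected (the namespace is not shown in this excerpt, so it is a placeholder; "ss2" as the StatefulSet name is only inferred from the pod name):

# Hypothetical inspection of the pod that blocked the scale-down.
kubectl describe pod ss2-1 -n <namespace>
kubectl get statefulset ss2 -n <namespace> -o jsonpath='{.spec.replicas} {.status.replicas}'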


Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete 31m59s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sStatefulSet\s\[k8s\.io\]\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sshould\simplement\slegacy\sreplacement\swhen\sthe\supdate\sstrategy\sis\sOnDelete$'
test/e2e/apps/statefulset.go:88
Nov 19 22:47:58.466: Failed waiting for stateful set status.replicas updated to 0: timed out waiting for the condition
test/e2e/framework/statefulset/wait.go:272
				
stdout/stderr from junit_18.xml



Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects 15m56s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sPort\sforwarding\sWith\sa\sserver\slistening\son\s0\.0\.0\.0\sthat\sexpects\sa\sclient\srequest\sshould\ssupport\sa\sclient\sthat\sconnects\,\ssends\sNO\sDATA\,\sand\sdisconnects$'
test/e2e/framework/framework.go:152
Nov 19 21:37:45.942: Couldn't delete ns: "port-forwarding-1695": namespace port-forwarding-1695 was not deleted with limit: timed out waiting for the condition, pods remaining: 1 (&errors.errorString{s:"namespace port-forwarding-1695 was not deleted with limit: timed out waiting for the condition, pods remaining: 1"})
test/e2e/framework/framework.go:336
				
stdout/stderr from junit_02.xml



Kubernetes e2e suite [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] 13m24s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\sUpdate\sDemo\sshould\sdo\sa\srolling\supdate\sof\sa\sreplication\scontroller\s\s\[Conformance\]$'
test/e2e/framework/framework.go:152
Nov 19 21:46:25.236: Couldn't delete ns: "kubectl-4386": namespace kubectl-4386 was not deleted with limit: timed out waiting for the condition, pods remaining: 1 (&errors.errorString{s:"namespace kubectl-4386 was not deleted with limit: timed out waiting for the condition, pods remaining: 1"})
test/e2e/framework/framework.go:336
				
stdout/stderr from junit_06.xml



Kubernetes e2e suite [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] 17m4s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\sUpdate\sDemo\sshould\sdo\sa\srolling\supdate\sof\sa\sreplication\scontroller\s\s\[Conformance\]$'
test/e2e/framework/framework.go:698
Nov 19 21:53:24.400: Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/usr/local/bin/kubectl [kubectl --server=https://kubetest-bc7f35b3-0b0b-11ea-9979-628cf460ce88.westus2.cloudapp.azure.com --kubeconfig=/workspace/aks395345560/kubeconfig/kubeconfig.westus2.json rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-3075] []  0xc001eeed80 Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\n Command \"rolling-update\" is deprecated, use \"rollout\" instead\nerror: timed out waiting for any update progress to be made\n [] <nil> 0xc000b569c0 exit status 1 <nil> <nil> true [0xc0011b0af0 0xc0011b0b18 0xc0011b0b28] [0xc0011b0af0 0xc0011b0b18 0xc0011b0b28] [0xc0011b0af8 0xc0011b0b10 0xc0011b0b20] [0x10f01d0 0x10f0300 0x10f0300] 0xc00162e120 <nil>}:\nCommand stdout:\nCreated update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\n\nstderr:\nCommand \"rolling-update\" is deprecated, use \"rollout\" instead\nerror: timed out waiting for any update progress to be made\n\nerror:\nexit status 1",
        },
        Code: 1,
    }
    error running &{/usr/local/bin/kubectl [kubectl --server=https://kubetest-bc7f35b3-0b0b-11ea-9979-628cf460ce88.westus2.cloudapp.azure.com --kubeconfig=/workspace/aks395345560/kubeconfig/kubeconfig.westus2.json rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-3075] []  0xc001eeed80 Created update-demo-kitten
    Scaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
    Scaling update-demo-kitten up to 1
    Scaling update-demo-nautilus down to 1
    Scaling update-demo-kitten up to 2
     Command "rolling-update" is deprecated, use "rollout" instead
    error: timed out waiting for any update progress to be made
     [] <nil> 0xc000b569c0 exit status 1 <nil> <nil> true [0xc0011b0af0 0xc0011b0b18 0xc0011b0b28] [0xc0011b0af0 0xc0011b0b18 0xc0011b0b28] [0xc0011b0af8 0xc0011b0b10 0xc0011b0b20] [0x10f01d0 0x10f0300 0x10f0300] 0xc00162e120 <nil>}:
    Command stdout:
    Created update-demo-kitten
    Scaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
    Scaling update-demo-kitten up to 1
    Scaling update-demo-nautilus down to 1
    Scaling update-demo-kitten up to 2
    
    stderr:
    Command "rolling-update" is deprecated, use "rollout" instead
    error: timed out waiting for any update progress to be made
    
    error:
    exit status 1
occurred
test/e2e/framework/util.go:1539
				
stdout/stderr from junit_06.xml

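The rolling update created update-demo-kitten, scaled it to 2 and scaled update-demo-nautilus down to 1, then made no further progress before kubectl timed out (kubectl itself notes that rolling-update is deprecated in favor of rollout). A hedged sketch for seeing which replicas never became ready, using illustrative commands against the test namespace:

# Compare desired vs. ready replicas for both replication controllers and
# look at the pods that are not coming up.
kubectl get rc -n kubectl-3075 -o wide
kubectl get pods -n kubectl-3075 -o wide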


Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: udp 15m33s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-network\]\sNetworking\sGranular\sChecks\:\sServices\sshould\sfunction\sfor\sclient\sIP\sbased\ssession\saffinity\:\sudp$'
test/e2e/network/networking.go:222
Nov 19 21:37:29.747: Unexpected error:
    <*errors.errorString | 0xc0000d5080>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
test/e2e/framework/networking_utils.go:660
				
stdout/stderr from junit_28.xml

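The session-affinity check timed out inside the networking utilities, so client-IP affinity was never observed over UDP. A minimal sketch for confirming a Service is actually configured with ClientIP affinity (service name and namespace are placeholders, not taken from the log):

# Hypothetical check of the Service's affinity setting and its endpoints.
kubectl get svc <service-name> -n <namespace> -o jsonpath='{.spec.sessionAffinity}'
kubectl get endpoints <service-name> -n <namespace>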


Kubernetes e2e suite [sig-network] Service endpoints latency should not be very high [Conformance] 12m17s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-network\]\sService\sendpoints\slatency\sshould\snot\sbe\svery\shigh\s\s\[Conformance\]$'
test/e2e/framework/framework.go:152
Nov 19 21:33:46.930: Couldn't delete ns: "svc-latency-7216": namespace svc-latency-7216 was not deleted with limit: timed out waiting for the condition, pods remaining: 1 (&errors.errorString{s:"namespace svc-latency-7216 was not deleted with limit: timed out waiting for the condition, pods remaining: 1"})
test/e2e/framework/framework.go:336
				
stdout/stderr from junit_03.xml



Kubernetes e2e suite [sig-network] Services should allow pods to hairpin back to themselves through services 12m16s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-network\]\sServices\sshould\sallow\spods\sto\shairpin\sback\sto\sthemselves\sthrough\sservices$'
test/e2e/framework/framework.go:152
Nov 19 21:33:35.987: Couldn't delete ns: "services-6": namespace services-6 was not deleted with limit: timed out waiting for the condition, pods remaining: 1 (&errors.errorString{s:"namespace services-6 was not deleted with limit: timed out waiting for the condition, pods remaining: 1"})
test/e2e/framework/framework.go:336
				
stdout/stderr from junit_30.xml



Kubernetes e2e suite [sig-scheduling] PreemptionExecutionPath runs ReplicaSets to verify preemption running path 3m30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-scheduling\]\sPreemptionExecutionPath\sruns\sReplicaSets\sto\sverify\spreemption\srunning\spath$'
test/e2e/scheduling/preemption.go:345
Nov 19 21:25:53.654: Unexpected error:
    <*errors.errorString | 0xc00218b510>: {
        s: "replicaset \"rs-pod4\" never had desired number of .status.availableReplicas",
    }
    replicaset "rs-pod4" never had desired number of .status.availableReplicas
occurred
test/e2e/scheduling/preemption.go:510
				
stdout/stderr from junit_16.xml

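The preemption path test fails because ReplicaSet rs-pod4 never reported its desired number of available replicas. A hedged sketch for comparing desired and available replicas and checking the priority classes involved (the namespace is a placeholder; the priority class names are not in this excerpt):

# Illustrative only: desired vs. available replicas, plus cluster priority classes.
kubectl get rs rs-pod4 -n <namespace> -o jsonpath='{.spec.replicas} {.status.availableReplicas}'
kubectl describe rs rs-pod4 -n <namespace>
kubectl get priorityclasses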


Kubernetes e2e suite [sig-scheduling] PreemptionExecutionPath runs ReplicaSets to verify preemption running path 3m55s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-scheduling\]\sPreemptionExecutionPath\sruns\sReplicaSets\sto\sverify\spreemption\srunning\spath$'
test/e2e/scheduling/preemption.go:345
Nov 19 21:22:26.091: Unexpected error:
    <*errors.errorString | 0xc0020eef40>: {
        s: "replicaset \"rs-pod4\" never had desired number of .status.availableReplicas",
    }
    replicaset "rs-pod4" never had desired number of .status.availableReplicas
occurred
test/e2e/scheduling/preemption.go:510