Result: FAILURE
Tests: 40 failed / 17 succeeded
Started: 2020-10-21 05:25
Elapsed: 5h1m
Revision:
Builder: bef34a33-135d-11eb-a3c5-629ea79d0103
infra-commit: 5c86f7035
job-version: v1.20.0-alpha.3.28+5d49a6253c84c7-dirty
repo: k8s.io/kubernetes
repo-commit: 5d49a6253c84c7f99264f0f95453ec666c998359
repos: {u'k8s.io/kubernetes': u'master'}
revision: v1.20.0-alpha.3.28+5d49a6253c84c7-dirty

Test Failures


E2eNode Suite [k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup pod infra containers oom-score-adj should be -998 and best effort container's should be 1000 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sContainer\sManager\sMisc\s\[Serial\]\sValidate\sOOM\sscore\sadjustments\s\[NodeFeature\:OOMScoreAdj\]\sonce\sthe\snode\sis\ssetup\s\spod\sinfra\scontainers\soom\-score\-adj\sshould\sbe\s\-998\sand\sbest\seffort\scontainer\'s\sshould\sbe\s1000$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002329b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
				
stdout/stderr: junit_cos-stable1_01.xml
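Note on triage: this failure, and nearly all of the entries below that point at framework.go:174, report the same error string, "timed out waiting for the condition". That is the sentinel error returned by the polling helpers in k8s.io/apimachinery/pkg/util/wait, which the e2e framework relies on while waiting for per-test setup (for example, namespace creation) to finish. The sketch below is illustrative only and assumes the standard wait helpers; the interval, timeout, and condition are made up and are not the framework's actual values.

package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Illustrative only: poll a condition that never becomes true.
	// When the timeout elapses, PollImmediate returns wait.ErrWaitTimeout,
	// whose message is exactly "timed out waiting for the condition",
	// the string seen in the failures on this page.
	err := wait.PollImmediate(1*time.Second, 5*time.Second, func() (bool, error) {
		ready := false // stand-in for "namespace exists" / "node is ready"
		return ready, nil
	})
	if errors.Is(err, wait.ErrWaitTimeout) {
		fmt.Println(err) // timed out waiting for the condition
	}
}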


E2eNode Suite [k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup Kubelet's oom-score-adj should be -999 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sContainer\sManager\sMisc\s\[Serial\]\sValidate\sOOM\sscore\sadjustments\s\[NodeFeature\:OOMScoreAdj\]\sonce\sthe\snode\sis\ssetup\sKubelet\'s\soom\-score\-adj\sshould\sbe\s\-999$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002329b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
				
stdout/stderr: junit_cos-stable1_01.xml


E2eNode Suite [k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup burstable container's oom-score-adj should be between [2, 1000) 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sContainer\sManager\sMisc\s\[Serial\]\sValidate\sOOM\sscore\sadjustments\s\[NodeFeature\:OOMScoreAdj\]\sonce\sthe\snode\sis\ssetup\sburstable\scontainer\'s\soom\-score\-adj\sshould\sbe\sbetween\s\[2\,\s1000\)$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002329b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
				
stdout/stderr: junit_cos-stable1_01.xml


E2eNode Suite [k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup container runtime's oom-score-adj should be -999 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sContainer\sManager\sMisc\s\[Serial\]\sValidate\sOOM\sscore\sadjustments\s\[NodeFeature\:OOMScoreAdj\]\sonce\sthe\snode\sis\ssetup\scontainer\sruntime\'s\soom\-score\-adj\sshould\sbe\s\-999$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002329b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
				
stdout/stderr: junit_cos-stable1_01.xml


E2eNode Suite [k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup guaranteed container's oom-score-adj should be -998 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sContainer\sManager\sMisc\s\[Serial\]\sValidate\sOOM\sscore\sadjustments\s\[NodeFeature\:OOMScoreAdj\]\sonce\sthe\snode\sis\ssetup\sguaranteed\scontainer\'s\soom\-score\-adj\sshould\sbe\s\-998$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002329b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
				
stdout/stderr: junit_cos-stable1_01.xml


E2eNode Suite [k8s.io] ContainerLogRotation [Slow] [Serial] [Disruptive] when a container generates a lot of log should be rotated and limited to a fixed amount of files 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sContainerLogRotation\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\swhen\sa\scontainer\sgenerates\sa\slot\sof\slog\sshould\sbe\srotated\sand\slimited\sto\sa\sfixed\samount\sof\sfiles$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002329b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
				
stdout/stderr: junit_cos-stable1_01.xml


E2eNode Suite [k8s.io] CriticalPod [Serial] [Disruptive] [NodeFeature:CriticalPod] when we need to admit a critical pod should be able to create and delete a critical pod 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sCriticalPod\s\[Serial\]\s\[Disruptive\]\s\[NodeFeature\:CriticalPod\]\swhen\swe\sneed\sto\sadmit\sa\scritical\spod\sshould\sbe\sable\sto\screate\sand\sdelete\sa\scritical\spod$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002329b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
				
stdout/stderr: junit_cos-stable1_01.xml


E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sDensity\s\[Serial\]\s\[Slow\]\screate\sa\sbatch\sof\spods\slatency\/resource\sshould\sbe\swithin\slimit\swhen\screate\s10\spods\swith\s0s\sinterval$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002329b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
				
stdout/stderr: junit_cos-stable1_01.xml


E2eNode Suite [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sDensity\s\[Serial\]\s\[Slow\]\screate\sa\ssequence\sof\spods\slatency\/resource\sshould\sbe\swithin\slimit\swhen\screate\s10\spods\swith\s50\sbackground\spods$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002329b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
				
stdout/stderr: junit_cos-stable1_01.xml


E2eNode Suite [k8s.io] Docker features [Feature:Docker][Legacy:Docker] when live-restore is enabled [Serial] [Slow] [Disruptive] containers should not be disrupted when the daemon shuts down and restarts 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sDocker\sfeatures\s\[Feature\:Docker\]\[Legacy\:Docker\]\swhen\slive\-restore\sis\senabled\s\[Serial\]\s\[Slow\]\s\[Disruptive\]\scontainers\sshould\snot\sbe\sdisrupted\swhen\sthe\sdaemon\sshuts\sdown\sand\srestarts$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002329b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
				
stdout/stderr: junit_cos-stable1_01.xml


E2eNode Suite [k8s.io] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sDownward\sAPI\s\[Serial\]\s\[Disruptive\]\s\[NodeFeature\:EphemeralStorage\]\sDownward\sAPI\stests\sfor\slocal\sephemeral\sstorage\sshould\sprovide\scontainer\'s\slimits\.ephemeral\-storage\sand\srequests\.ephemeral\-storage\sas\senv\svars$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002329b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
				
stdout/stderr: junit_cos-stable1_01.xml


E2eNode Suite [k8s.io] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide default limits.ephemeral-storage from node allocatable 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sDownward\sAPI\s\[Serial\]\s\[Disruptive\]\s\[NodeFeature\:EphemeralStorage\]\sDownward\sAPI\stests\sfor\slocal\sephemeral\sstorage\sshould\sprovide\sdefault\slimits\.ephemeral\-storage\sfrom\snode\sallocatable$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002329b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
				
stdout/stderr: junit_cos-stable1_01.xml


E2eNode Suite [k8s.io] GarbageCollect [Serial][NodeFeature:GarbageCollect] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sGarbageCollect\s\[Serial\]\[NodeFeature\:GarbageCollect\]\sGarbage\sCollection\sTest\:\sMany\sRestarting\sContainers\sShould\seventually\sgarbage\scollect\scontainers\swhen\swe\sexceed\sthe\snumber\sof\sdead\scontainers\sper\scontainer$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002329b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
				
stdout/stderr: junit_cos-stable1_01.xml


E2eNode Suite [k8s.io] GarbageCollect [Serial][NodeFeature:GarbageCollect] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sGarbageCollect\s\[Serial\]\[NodeFeature\:GarbageCollect\]\sGarbage\sCollection\sTest\:\sOne\sNon\-restarting\sContainer\sShould\seventually\sgarbage\scollect\scontainers\swhen\swe\sexceed\sthe\snumber\sof\sdead\scontainers\sper\scontainer$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002329b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
				
stdout/stderr: junit_cos-stable1_01.xml


E2eNode Suite [k8s.io] InodeEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sInodeEviction\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sDiskPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002329b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
				
stdout/stderr: junit_cos-stable1_01.xml


E2eNode Suite [k8s.io] LocalStorageCapacityIsolationEviction [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolation][NodeFeature:Eviction] when we run containers that should cause evictions due to pod local storage violations should eventually evict all of the correct pods 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sLocalStorageCapacityIsolationEviction\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\s\[Feature\:LocalStorageCapacityIsolation\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sevictions\sdue\sto\spod\slocal\sstorage\sviolations\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002329b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
				
stdout/stderr: junit_cos-stable1_01.xml


E2eNode Suite [k8s.io] LocalStorageCapacityIsolationQuotaMonitoring [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolationQuota][NodeFeature:LSCIQuotaMonitoring] when we run containers that should cause use quotas for LSCI monitoring (quotas enabled: false) should eventually evict all of the correct pods 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sLocalStorageCapacityIsolationQuotaMonitoring\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\s\[Feature\:LocalStorageCapacityIsolationQuota\]\[NodeFeature\:LSCIQuotaMonitoring\]\swhen\swe\srun\scontainers\sthat\sshould\scause\suse\squotas\sfor\sLSCI\smonitoring\s\(quotas\senabled\:\sfalse\)\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002329b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
				
stdout/stderr: junit_cos-stable1_01.xml


E2eNode Suite [k8s.io] LocalStorageEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sLocalStorageEviction\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sDiskPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002329b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
				
stdout/stderr: junit_cos-stable1_01.xml


E2eNode Suite [k8s.io] LocalStorageSoftEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sLocalStorageSoftEviction\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sDiskPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002329b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
				
stdout/stderr: junit_cos-stable1_01.xml


E2eNode Suite [k8s.io] MemoryAllocatableEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause MemoryPressure should eventually evict all of the correct pods 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sMemoryAllocatableEviction\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sMemoryPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002329b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
				
stdout/stderr: junit_cos-stable1_01.xml


E2eNode Suite [k8s.io] NVIDIA GPU Device Plugin [Feature:GPUDevicePlugin][NodeFeature:GPUDevicePlugin][Serial] [Disruptive] DevicePlugin checks that when Kubelet restarts exclusive GPU assignation to pods is kept. 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sNVIDIA\sGPU\sDevice\sPlugin\s\[Feature\:GPUDevicePlugin\]\[NodeFeature\:GPUDevicePlugin\]\[Serial\]\s\[Disruptive\]\sDevicePlugin\schecks\sthat\swhen\sKubelet\srestarts\sexclusive\sGPU\sassignation\sto\spods\sis\skept\.$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002329b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
				
stdout/stderr: junit_cos-stable1_01.xml


E2eNode Suite [k8s.io] NodeProblemDetector [NodeFeature:NodeProblemDetector] [Serial] [k8s.io] SystemLogMonitor should generate node condition and events for corresponding errors 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sNodeProblemDetector\s\[NodeFeature\:NodeProblemDetector\]\s\[Serial\]\s\[k8s\.io\]\sSystemLogMonitor\sshould\sgenerate\snode\scondition\sand\sevents\sfor\scorresponding\serrors$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002329b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
				
stdout/stderr: junit_cos-stable1_01.xml


E2eNode Suite [k8s.io] PriorityLocalStorageEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sPriorityLocalStorageEvictionOrdering\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sDiskPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002329b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
				
stdout/stderr: junit_cos-stable1_01.xml


E2eNode Suite [k8s.io] PriorityMemoryEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause MemoryPressure should eventually evict all of the correct pods 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sPriorityMemoryEvictionOrdering\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sMemoryPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002329b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
				
stdout/stderr: junit_cos-stable1_01.xml


E2eNode Suite [k8s.io] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause PIDPressure should eventually evict all of the correct pods 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sPriorityPidEvictionOrdering\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sPIDPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002329b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
				
stdout/stderr: junit_cos-stable1_01.xml


E2eNode Suite [k8s.io] Restart [Serial] [Slow] [Disruptive] [NodeFeature:ContainerRuntimeRestart] Container Runtime Network should recover from ip leak 5m47s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sRestart\s\[Serial\]\s\[Slow\]\s\[Disruptive\]\s\[NodeFeature\:ContainerRuntimeRestart\]\sContainer\sRuntime\sNetwork\sshould\srecover\sfrom\sip\sleak$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/restart_test.go:82
Unexpected error:
    <*url.Error | 0xc001e5c9c0>: {
        Op: "Get",
        URL: "http://127.0.0.1:8080/api/v1/namespaces/restart-test-8896/pods",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1],
                Port: 8080,
                Zone: "",
            },
            Err: {Syscall: "connect", Err: 0x6f},
        },
    }
    Get "http://127.0.0.1:8080/api/v1/namespaces/restart-test-8896/pods": dial tcp 127.0.0.1:8080: connect: connection refused
occurred
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_collector.go:383
				
stdout/stderr: junit_cos-stable1_01.xml
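Unlike the timeouts elsewhere on this page, this restart test failed because the API request itself could not connect: the wrapped dial error carries Err: 0x6f, which is errno 111 (ECONNREFUSED) on Linux, meaning nothing was listening on 127.0.0.1:8080 when the test tried to list pods after the runtime restart. Below is a minimal, hypothetical triage snippet (not the test's own code; only the URL is copied from the failure above) showing how such an error can be distinguished from a timeout by unwrapping it down to the underlying errno.

package main

import (
	"errors"
	"fmt"
	"net/http"
	"syscall"
)

func main() {
	// Issue the same GET the test made and check whether the failure is
	// specifically "connection refused" (the *url.Error unwraps through
	// *net.OpError down to syscall.ECONNREFUSED).
	_, err := http.Get("http://127.0.0.1:8080/api/v1/namespaces/restart-test-8896/pods")
	if errors.Is(err, syscall.ECONNREFUSED) {
		fmt.Println("nothing listening on the apiserver port:", err)
	} else if err != nil {
		fmt.Println("request failed for another reason:", err)
	}
}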


E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] delete and recreate ConfigMap: state transitions: status and events should match expectations 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\s\[Feature\:DynamicKubeletConfig\]\[NodeFeature\:DynamicKubeletConfig\]\[Serial\]\[Disruptive\]\s\sdelete\sand\srecreate\sConfigMap\:\sstate\stransitions\:\sstatus\sand\sevents\sshould\smatch\sexpectations$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002329b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
				
stdout/stderr: junit_cos-stable1_01.xml


E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update ConfigMap in-place: recover to last-known-good version: status and events should match expectations 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\s\[Feature\:DynamicKubeletConfig\]\[NodeFeature\:DynamicKubeletConfig\]\[Serial\]\[Disruptive\]\s\supdate\sConfigMap\sin\-place\:\srecover\sto\slast\-known\-good\sversion\:\sstatus\sand\sevents\sshould\smatch\sexpectations$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002329b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
				
stdout/stderr: junit_cos-stable1_01.xml


E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update ConfigMap in-place: state transitions: status and events should match expectations 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\s\[Feature\:DynamicKubeletConfig\]\[NodeFeature\:DynamicKubeletConfig\]\[Serial\]\[Disruptive\]\s\supdate\sConfigMap\sin\-place\:\sstate\stransitions\:\sstatus\sand\sevents\sshould\smatch\sexpectations$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002329b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
				
stdout/stderr: junit_cos-stable1_01.xml


E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: non-nil last-known-good to a new non-nil last-known-good status and events should match expectations 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\s\[Feature\:DynamicKubeletConfig\]\[NodeFeature\:DynamicKubeletConfig\]\[Serial\]\[Disruptive\]\s\supdate\sNode\.Spec\.ConfigSource\:\snon\-nil\slast\-known\-good\sto\sa\snew\snon\-nil\slast\-known\-good\sstatus\sand\sevents\sshould\smatch\sexpectations$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002329b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
				
stdout/stderr: junit_cos-stable1_01.xml


E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: recover to last-known-good ConfigMap.KubeletConfigKey: status and events should match expectations 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\s\[Feature\:DynamicKubeletConfig\]\[NodeFeature\:DynamicKubeletConfig\]\[Serial\]\[Disruptive\]\s\supdate\sNode\.Spec\.ConfigSource\:\srecover\sto\slast\-known\-good\sConfigMap\.KubeletConfigKey\:\sstatus\sand\sevents\sshould\smatch\sexpectations$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002329b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
				
stdout/stderr: junit_cos-stable1_01.xml


E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: state transitions: status and events should match expectations 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\s\[Feature\:DynamicKubeletConfig\]\[NodeFeature\:DynamicKubeletConfig\]\[Serial\]\[Disruptive\]\s\supdate\sNode\.Spec\.ConfigSource\:\sstate\stransitions\:\sstatus\sand\sevents\sshould\smatch\sexpectations$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002329b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
				
stdout/stderr: junit_cos-stable1_01.xml


E2eNode Suite [sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] When checkpoint file is corrupted should complete pod sandbox clean up 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-node\]\sDockershim\s\[Serial\]\s\[Disruptive\]\s\[Feature\:Docker\]\[Legacy\:Docker\]\sWhen\scheckpoint\sfile\sis\scorrupted\sshould\scomplete\spod\ssandbox\sclean\sup$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002329b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
				
stdout/stderr: junit_cos-stable1_01.xml


E2eNode Suite [sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] should clean up pod sandbox checkpoint after pod deletion 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-node\]\sDockershim\s\[Serial\]\s\[Disruptive\]\s\[Feature\:Docker\]\[Legacy\:Docker\]\sshould\sclean\sup\spod\ssandbox\scheckpoint\safter\spod\sdeletion$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002329b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
				
stdout/stderr: junit_cos-stable1_01.xml


E2eNode Suite [sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] should remove dangling checkpoint file 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-node\]\sDockershim\s\[Serial\]\s\[Disruptive\]\s\[Feature\:Docker\]\[Legacy\:Docker\]\sshould\sremove\sdangling\scheckpoint\sfile$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002329b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
				
stdout/stderr: junit_cos-stable1_01.xml


E2eNode Suite [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 10 pods per node 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-node\]\sResource\-usage\s\[Serial\]\s\[Slow\]\sregular\sresource\susage\stracking\sresource\stracking\sfor\s10\spods\sper\snode$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002329b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
				
stdout/stderr: junit_cos-stable1_01.xml


E2eNode Suite [sig-node] Topology Manager [Serial] [Feature:TopologyManager][NodeFeature:TopologyManager] With kubeconfig updated to static CPU Manager policy run the Topology Manager tests run Topology Manager node alignment test suite 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-node\]\sTopology\sManager\s\[Serial\]\s\[Feature\:TopologyManager\]\[NodeFeature\:TopologyManager\]\sWith\skubeconfig\supdated\sto\sstatic\sCPU\sManager\spolicy\srun\sthe\sTopology\sManager\stests\srun\sTopology\sManager\snode\salignment\stest\ssuite$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002329b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
				
stdout/stderr: junit_cos-stable1_01.xml


E2eNode Suite [sig-node] Topology Manager [Serial] [Feature:TopologyManager][NodeFeature:TopologyManager] With kubeconfig updated to static CPU Manager policy run the Topology Manager tests run Topology Manager policy test suite 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-node\]\sTopology\sManager\s\[Serial\]\s\[Feature\:TopologyManager\]\[NodeFeature\:TopologyManager\]\sWith\skubeconfig\supdated\sto\sstatic\sCPU\sManager\spolicy\srun\sthe\sTopology\sManager\stests\srun\sTopology\sManager\spolicy\stest\ssuite$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002329b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
				
stdout/stderr: junit_cos-stable1_01.xml


Node Tests 4h59m

error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-jkns-gke-ubuntu-updown --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=1 --focus="\[Serial\]" --skip="\[Flaky\]|\[Benchmark\]|\[NodeSpecialFeature:.+\]|\[NodeAlphaFeature:.+\]" --test_args=--feature-gates=DynamicKubeletConfig=true,LocalStorageCapacityIsolation=true --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/" --test-timeout=5h0m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/image-config-serial.yaml (interrupted): exit status 1
from junit_runner.xml


Timeout 5h0m

kubetest --timeout triggered
from junit_runner.xml


17 Passed Tests

281 Skipped Tests