Result: FAILURE
Tests: 32 failed / 24 succeeded
Started: 2020-10-26 10:34
Elapsed: 5h1m
Builder: cfe2dc02-1776-11eb-b256-6ee25ea2e440
infra-commit: e8cffc8b1
job-version: v1.20.0-alpha.3.118+53b2973440a29e-dirty
repo: k8s.io/kubernetes
repo-commit: 53b2973440a29e1682df6ba687cebc6764bba44c
repos: {'k8s.io/kubernetes': 'master'}
revision: v1.20.0-alpha.3.118+53b2973440a29e-dirty

Test Failures


E2eNode Suite [k8s.io] CriticalPod [Serial] [Disruptive] [NodeFeature:CriticalPod] when we need to admit a critical pod should be able to create and delete a critical pod 22s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sCriticalPod\s\[Serial\]\s\[Disruptive\]\s\[NodeFeature\:CriticalPod\]\swhen\swe\sneed\sto\sadmit\sa\scritical\spod\sshould\sbe\sable\sto\screate\sand\sdelete\sa\scritical\spod$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/critical_pod_test.go:53
Unexpected error:
    <*errors.StatusError | 0xc000ede640>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "pods \"static-critical-pod\" not found",
            Reason: "NotFound",
            Details: {
                Name: "static-critical-pod",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 404,
        },
    }
    pods "static-critical-pod" not found
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:103
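This failure is an ordinary client-go StatusError with Reason "NotFound": the static critical pod the test expected was never observed (or was already gone) when the framework looked it up at pods.go:103. A minimal sketch of how such an error is usually detected with client-go; the helper name is hypothetical, not the framework's actual code:

    package sketch

    import (
        "context"
        "fmt"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // podExists is a hypothetical helper for illustration only.
    func podExists(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
        _, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            // Matches the dump above: Reason "NotFound", Code 404.
            return false, nil
        }
        if err != nil {
            return false, fmt.Errorf("getting pod %s/%s: %w", ns, name, err)
        }
        return true, nil
    }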
stdout/stderr in junit_cos-stable2_01.xml


E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval 56s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sDensity\s\[Serial\]\s\[Slow\]\screate\sa\sbatch\sof\spods\slatency\/resource\sshould\sbe\swithin\slimit\swhen\screate\s10\spods\swith\s0s\sinterval$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002349b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
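This "timed out waiting for the condition" error, which recurs in most of the failures below, is wait.ErrWaitTimeout from k8s.io/apimachinery/pkg/util/wait: test setup at framework.go:174 polled for a condition until the deadline passed (framework.go:236). A hedged sketch of the polling pattern that produces it; the interval, timeout, and wrapping are illustrative, not the framework's actual values:

    package sketch

    import (
        "errors"
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // waitForCondition shows the generic pattern; real parameters differ.
    func waitForCondition(check wait.ConditionFunc) error {
        err := wait.PollImmediate(2*time.Second, 5*time.Minute, check)
        if errors.Is(err, wait.ErrWaitTimeout) {
            // Surfaces verbatim as "timed out waiting for the condition".
            return fmt.Errorf("condition never became true: %w", err)
        }
        return err
    }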
stdout/stderr in junit_cos-stable2_01.xml


E2eNode Suite [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods 1m56s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sDensity\s\[Serial\]\s\[Slow\]\screate\sa\ssequence\sof\spods\slatency\/resource\sshould\sbe\swithin\slimit\swhen\screate\s10\spods\swith\s50\sbackground\spods$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002349b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
stdout/stderr in junit_cos-stable2_01.xml


E2eNode Suite [k8s.io] Device Plugin [Feature:DevicePluginProbe][NodeFeature:DevicePluginProbe][Serial] DevicePlugin Verifies the Kubelet device plugin functionality. 1m56s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sDevice\sPlugin\s\[Feature\:DevicePluginProbe\]\[NodeFeature\:DevicePluginProbe\]\[Serial\]\sDevicePlugin\sVerifies\sthe\sKubelet\sdevice\splugin\sfunctionality\.$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002349b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
stdout/stderr in junit_cos-stable2_01.xml


E2eNode Suite [k8s.io] Docker features [Feature:Docker][Legacy:Docker] when live-restore is enabled [Serial] [Slow] [Disruptive] containers should not be disrupted when the daemon shuts down and restarts 1m56s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sDocker\sfeatures\s\[Feature\:Docker\]\[Legacy\:Docker\]\swhen\slive\-restore\sis\senabled\s\[Serial\]\s\[Slow\]\s\[Disruptive\]\scontainers\sshould\snot\sbe\sdisrupted\swhen\sthe\sdaemon\sshuts\sdown\sand\srestarts$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002349b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
stdout/stderr in junit_cos-stable2_01.xml


E2eNode Suite [k8s.io] GarbageCollect [Serial][NodeFeature:GarbageCollect] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container 1m56s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sGarbageCollect\s\[Serial\]\[NodeFeature\:GarbageCollect\]\sGarbage\sCollection\sTest\:\sMany\sPods\swith\sMany\sRestarting\sContainers\sShould\seventually\sgarbage\scollect\scontainers\swhen\swe\sexceed\sthe\snumber\sof\sdead\scontainers\sper\scontainer$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002349b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
stdout/stderr in junit_cos-stable2_01.xml


E2eNode Suite [k8s.io] InodeEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods 56s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sInodeEviction\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sDiskPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002349b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
stdout/stderr in junit_cos-stable2_01.xml


E2eNode Suite [k8s.io] LocalStorageCapacityIsolationEviction [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolation][NodeFeature:Eviction] when we run containers that should cause evictions due to pod local storage violations should eventually evict all of the correct pods 1m56s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sLocalStorageCapacityIsolationEviction\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\s\[Feature\:LocalStorageCapacityIsolation\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sevictions\sdue\sto\spod\slocal\sstorage\sviolations\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002349b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
stdout/stderr in junit_cos-stable2_01.xml


E2eNode Suite [k8s.io] LocalStorageCapacityIsolationQuotaMonitoring [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolationQuota][NodeFeature:LSCIQuotaMonitoring] when we run containers that should cause use quotas for LSCI monitoring (quotas enabled: false) should eventually evict all of the correct pods 1m56s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sLocalStorageCapacityIsolationQuotaMonitoring\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\s\[Feature\:LocalStorageCapacityIsolationQuota\]\[NodeFeature\:LSCIQuotaMonitoring\]\swhen\swe\srun\scontainers\sthat\sshould\scause\suse\squotas\sfor\sLSCI\smonitoring\s\(quotas\senabled\:\sfalse\)\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002349b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
stdout/stderr in junit_cos-stable2_01.xml


E2eNode Suite [k8s.io] LocalStorageEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods 1m56s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sLocalStorageEviction\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sDiskPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002349b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
stdout/stderr in junit_cos-stable2_01.xml


E2eNode Suite [k8s.io] MemoryAllocatableEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause MemoryPressure should eventually evict all of the correct pods 1m56s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sMemoryAllocatableEviction\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sMemoryPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002349b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
stdout/stderr in junit_cos-stable2_01.xml


E2eNode Suite [k8s.io] NVIDIA GPU Device Plugin [Feature:GPUDevicePlugin][NodeFeature:GPUDevicePlugin][Serial] [Disruptive] DevicePlugin checks that when Kubelet restarts exclusive GPU assignation to pods is kept. 5m0s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sNVIDIA\sGPU\sDevice\sPlugin\s\[Feature\:GPUDevicePlugin\]\[NodeFeature\:GPUDevicePlugin\]\[Serial\]\s\[Disruptive\]\sDevicePlugin\schecks\sthat\swhen\sKubelet\srestarts\sexclusive\sGPU\sassignation\sto\spods\sis\skept\.$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/gpu_device_plugin_test.go:74
Timed out after 300.000s.
Expected
    <bool>: false
to be true
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/gpu_device_plugin_test.go:87
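The "Timed out after 300.000s / Expected false to be true" shape is Gomega's Eventually(...).Should(BeTrue()) hitting its five-minute limit before the check at gpu_device_plugin_test.go:87 passed. A sketch of that assertion pattern with a hypothetical stand-in condition:

    package sketch

    import (
        "time"

        . "github.com/onsi/ginkgo"
        . "github.com/onsi/gomega"
    )

    // gpusStillAssigned is a hypothetical stand-in for the real check.
    func gpusStillAssigned() bool { return false }

    var _ = It("keeps exclusive GPU assignment across a kubelet restart", func() {
        // Poll every 10s for up to 5m; never turning true yields the
        // "Timed out after 300.000s" failure seen above.
        Eventually(gpusStillAssigned, 5*time.Minute, 10*time.Second).Should(BeTrue())
    })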
stdout/stderr in junit_cos-stable2_01.xml


E2eNode Suite [k8s.io] NodeProblemDetector [NodeFeature:NodeProblemDetector] [Serial] [k8s.io] SystemLogMonitor should generate node condition and events for corresponding errors 1m56s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sNodeProblemDetector\s\[NodeFeature\:NodeProblemDetector\]\s\[Serial\]\s\[k8s\.io\]\sSystemLogMonitor\sshould\sgenerate\snode\scondition\sand\sevents\sfor\scorresponding\serrors$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002349b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
stdout/stderr in junit_cos-stable2_01.xml


E2eNode Suite [k8s.io] PriorityLocalStorageEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods 1m56s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sPriorityLocalStorageEvictionOrdering\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sDiskPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002349b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
stdout/stderr in junit_cos-stable2_01.xml


E2eNode Suite [k8s.io] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause PIDPressure should eventually evict all of the correct pods 17m3s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sPriorityPidEvictionOrdering\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sPIDPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/util.go:152
Failed to get successful response from /configz
Unexpected error:
    <*errors.errorString | 0xc0002349b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/kubelet/config.go:105
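"Failed to get successful response from /configz" means the test could not read the kubelet's live configuration before changing it; the lookup at framework/kubelet/config.go:105 then timed out like the other failures. One common way to reach that endpoint is through the API server's node-proxy subresource; a hedged sketch with illustrative names (the framework adds retries and decoding around a similar request):

    package sketch

    import (
        "context"

        "k8s.io/client-go/kubernetes"
    )

    // getKubeletConfigz is illustrative only.
    func getKubeletConfigz(ctx context.Context, c kubernetes.Interface, node string) ([]byte, error) {
        return c.CoreV1().RESTClient().Get().
            Resource("nodes").
            Name(node).
            SubResource("proxy").
            Suffix("configz").
            DoRaw(ctx)
    }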
stdout/stderr in junit_cos-stable2_01.xml


E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] delete and recreate ConfigMap: error while ConfigMap is absent: status and events should match expectations 1m56s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\s\[Feature\:DynamicKubeletConfig\]\[NodeFeature\:DynamicKubeletConfig\]\[Serial\]\[Disruptive\]\s\sdelete\sand\srecreate\sConfigMap\:\serror\swhile\sConfigMap\sis\sabsent\:\sstatus\sand\sevents\sshould\smatch\sexpectations$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002349b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
stdout/stderr in junit_cos-stable2_01.xml


E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] delete and recreate ConfigMap: state transitions: status and events should match expectations 1m56s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\s\[Feature\:DynamicKubeletConfig\]\[NodeFeature\:DynamicKubeletConfig\]\[Serial\]\[Disruptive\]\s\sdelete\sand\srecreate\sConfigMap\:\sstate\stransitions\:\sstatus\sand\sevents\sshould\smatch\sexpectations$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002349b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
stdout/stderr in junit_cos-stable2_01.xml


E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update ConfigMap in-place: recover to last-known-good version: status and events should match expectations 1m56s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\s\[Feature\:DynamicKubeletConfig\]\[NodeFeature\:DynamicKubeletConfig\]\[Serial\]\[Disruptive\]\s\supdate\sConfigMap\sin\-place\:\srecover\sto\slast\-known\-good\sversion\:\sstatus\sand\sevents\sshould\smatch\sexpectations$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002349b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
stdout/stderr in junit_cos-stable2_01.xml


E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update ConfigMap in-place: state transitions: status and events should match expectations 1m56s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\s\[Feature\:DynamicKubeletConfig\]\[NodeFeature\:DynamicKubeletConfig\]\[Serial\]\[Disruptive\]\s\supdate\sConfigMap\sin\-place\:\sstate\stransitions\:\sstatus\sand\sevents\sshould\smatch\sexpectations$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002349b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
stdout/stderr in junit_cos-stable2_01.xml


E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: 100 update stress test: status and events should match expectations 1m56s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\s\[Feature\:DynamicKubeletConfig\]\[NodeFeature\:DynamicKubeletConfig\]\[Serial\]\[Disruptive\]\s\supdate\sNode\.Spec\.ConfigSource\:\s100\supdate\sstress\stest\:\sstatus\sand\sevents\sshould\smatch\sexpectations$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002349b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
stdout/stderr in junit_cos-stable2_01.xml


E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: non-nil last-known-good to a new non-nil last-known-good status and events should match expectations 1m56s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\s\[Feature\:DynamicKubeletConfig\]\[NodeFeature\:DynamicKubeletConfig\]\[Serial\]\[Disruptive\]\s\supdate\sNode\.Spec\.ConfigSource\:\snon\-nil\slast\-known\-good\sto\sa\snew\snon\-nil\slast\-known\-good\sstatus\sand\sevents\sshould\smatch\sexpectations$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002349b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
stdout/stderr in junit_cos-stable2_01.xml


E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: recover to last-known-good ConfigMap.KubeletConfigKey: status and events should match expectations 1m56s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\s\[Feature\:DynamicKubeletConfig\]\[NodeFeature\:DynamicKubeletConfig\]\[Serial\]\[Disruptive\]\s\supdate\sNode\.Spec\.ConfigSource\:\srecover\sto\slast\-known\-good\sConfigMap\.KubeletConfigKey\:\sstatus\sand\sevents\sshould\smatch\sexpectations$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002349b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
stdout/stderr in junit_cos-stable2_01.xml


E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: state transitions: status and events should match expectations 1m56s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\s\[Feature\:DynamicKubeletConfig\]\[NodeFeature\:DynamicKubeletConfig\]\[Serial\]\[Disruptive\]\s\supdate\sNode\.Spec\.ConfigSource\:\sstate\stransitions\:\sstatus\sand\sevents\sshould\smatch\sexpectations$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002349b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
stdout/stderr in junit_cos-stable2_01.xml


E2eNode Suite [sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] When checkpoint file is corrupted should complete pod sandbox clean up 1m56s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-node\]\sDockershim\s\[Serial\]\s\[Disruptive\]\s\[Feature\:Docker\]\[Legacy\:Docker\]\sWhen\scheckpoint\sfile\sis\scorrupted\sshould\scomplete\spod\ssandbox\sclean\sup$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002349b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
stdout/stderr in junit_cos-stable2_01.xml


E2eNode Suite [sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] When pod sandbox checkpoint is missing should complete pod sandbox clean up 56s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-node\]\sDockershim\s\[Serial\]\s\[Disruptive\]\s\[Feature\:Docker\]\[Legacy\:Docker\]\sWhen\spod\ssandbox\scheckpoint\sis\smissing\sshould\scomplete\spod\ssandbox\sclean\sup$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002349b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
stdout/stderr in junit_cos-stable2_01.xml


E2eNode Suite [sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] should remove dangling checkpoint file 1m56s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-node\]\sDockershim\s\[Serial\]\s\[Disruptive\]\s\[Feature\:Docker\]\[Legacy\:Docker\]\sshould\sremove\sdangling\scheckpoint\sfile$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002349b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
stdout/stderr in junit_cos-stable2_01.xml


E2eNode Suite [sig-node] PodPidsLimit [Serial] With config updated with pids limits should set pids.max for Pod 1m56s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-node\]\sPodPidsLimit\s\[Serial\]\sWith\sconfig\supdated\swith\spids\slimits\sshould\sset\spids\.max\sfor\sPod$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002349b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
stdout/stderr in junit_cos-stable2_01.xml


E2eNode Suite [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 10 pods per node 1m56s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-node\]\sResource\-usage\s\[Serial\]\s\[Slow\]\sregular\sresource\susage\stracking\sresource\stracking\sfor\s10\spods\sper\snode$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002349b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
stdout/stderr in junit_cos-stable2_01.xml


E2eNode Suite [sig-node] Topology Manager [Serial] [Feature:TopologyManager][NodeFeature:TopologyManager] With kubeconfig updated to static CPU Manager policy run the Topology Manager tests run Topology Manager node alignment test suite 1m56s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-node\]\sTopology\sManager\s\[Serial\]\s\[Feature\:TopologyManager\]\[NodeFeature\:TopologyManager\]\sWith\skubeconfig\supdated\sto\sstatic\sCPU\sManager\spolicy\srun\sthe\sTopology\sManager\stests\srun\sTopology\sManager\snode\salignment\stest\ssuite$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
Unexpected error:
    <*errors.errorString | 0xc0002349b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:236
stdout/stderr in junit_cos-stable2_01.xml


E2eNode Suite [sig-node] Topology Manager [Serial] [Feature:TopologyManager][NodeFeature:TopologyManager] With kubeconfig updated to static CPU Manager policy run the Topology Manager tests run Topology Manager policy test suite 9.13s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-node\]\sTopology\sManager\s\[Serial\]\s\[Feature\:TopologyManager\]\[NodeFeature\:TopologyManager\]\sWith\skubeconfig\supdated\sto\sstatic\sCPU\sManager\spolicy\srun\sthe\sTopology\sManager\stests\srun\sTopology\sManager\spolicy\stest\ssuite$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/topology_manager_test.go:705
expected log not found in container [non-gu-container] of pod [non-gu-pod]
Unexpected error:
    <*errors.errorString | 0xc003ffb790>: {
        s: "failed to match regexp \"^0-0\\n$\" in output \"0\\n\"",
    }
    failed to match regexp "^0-0\n$" in output "0\n"
occurred
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/cpu_manager_test.go:317
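The mismatch is textual: the test matches a regexp against the cpuset the container logged, and the expected pattern "^0-0\n$" cannot match the actual output "0\n" (the container reported a single CPU while the expectation was apparently built as a range). The failing comparison reduces to:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        expected := regexp.MustCompile(`^0-0\n$`)
        output := "0\n" // what non-gu-container actually logged
        fmt.Println(expected.MatchString(output)) // false, hence the failure above
    }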
stdout/stderr in junit_cos-stable2_01.xml


Node Tests 4h59m

error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-jkns-gke-ubuntu-1-6-ingres --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=1 --focus="\[Serial\]" --skip="\[Flaky\]|\[Benchmark\]|\[NodeSpecialFeature:.+\]|\[NodeAlphaFeature:.+\]" --test_args=--feature-gates=DynamicKubeletConfig=true,LocalStorageCapacityIsolation=true --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/" --test-timeout=5h0m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/image-config-serial.yaml (interrupted): exit status 1
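For local reproduction, the serial node suite is typically driven through the repository's test-e2e-node make target, whose FOCUS and SKIP variables correspond to the --focus/--skip flags in the command above (a sketch, not this job's exact configuration):

make test-e2e-node REMOTE=true FOCUS="\[Serial\]" SKIP="\[Flaky\]|\[Benchmark\]"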
from junit_runner.xml


Timeout 5h0m

kubetest --timeout triggered
from junit_runner.xml


Passed Tests: 24

Skipped Tests: 282