Result: FAILURE
Tests: 11 failed / 85 succeeded
Started: 2020-02-16 01:17
Elapsed: 4h33m
Builder: gke-prow-default-pool-cf4891d4-49d1
Pod: dcd0941d-5059-11ea-9bea-16a0f55e352c
Resultstore: https://source.cloud.google.com/results/invocations/92165761-e201-4baa-b6b7-56fe1242cd44/targets/test
Infra commit: f5dd3ee0e
Job version: v1.18.0-alpha.5.160+3b22fcc7bdcf5c
Revision: v1.18.0-alpha.5.160+3b22fcc7bdcf5c
Repo: k8s.io/kubernetes
Repo commit: 3b22fcc7bdcf5cb6eac2e4bb10f3b943ba94e2f2
Repos: k8s.io/kubernetes: master, github.com/containerd/cri: master

Test Failures


E2eNode Suite [k8s.io] CriticalPod [Serial] [Disruptive] [NodeFeature:CriticalPod] when we need to admit a critical pod should be able to create and delete a critical pod (20s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sCriticalPod\s\[Serial\]\s\[Disruptive\]\s\[NodeFeature\:CriticalPod\]\swhen\swe\sneed\sto\sadmit\sa\scritical\spod\sshould\sbe\sable\sto\screate\sand\sdelete\sa\scritical\spod$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/critical_pod_test.go:53
Unexpected error:
    <*errors.StatusError | 0xc000d84a00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "pods \"static-critical-pod\" not found",
            Reason: "NotFound",
            Details: {
                Name: "static-critical-pod",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 404,
        },
    }
    pods "static-critical-pod" not found
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:91
stdout/stderr: junit_cos-stable_01.xml

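The error above is a *StatusError from k8s.io/apimachinery/pkg/api/errors (Reason "NotFound", Code 404), returned when the framework looked up the "static-critical-pod" mirror pod and the API server had no record of it. As a minimal, hedged sketch (not the test's own code), this is how that class of error is usually separated from other API failures in Go; the clientset, namespace, and pod name are illustrative assumptions, and a current context-taking client-go Get signature is assumed:

package podsketch

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// checkPodExists is a hypothetical helper: it fetches a pod and uses the
// apimachinery helpers to distinguish "not found" (the 404 in the dump above)
// from other API errors.
func checkPodExists(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	_, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	switch {
	case err == nil:
		return nil
	case apierrors.IsNotFound(err):
		// Corresponds to Reason: "NotFound", Code: 404 in the StatusError above.
		return fmt.Errorf("pod %s/%s does not exist: %w", ns, name, err)
	default:
		return fmt.Errorf("unexpected API error for pod %s/%s: %w", ns, name, err)
	}
}

A NotFound here only says the API server had no such pod at the moment of the GET; whether the mirror pod was never registered or was already cleaned up has to be read from the kubelet output in the junit artifact.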


E2eNode Suite [k8s.io] CriticalPod [Serial] [Disruptive] [NodeFeature:CriticalPod] when we need to admit a critical pod should be able to create and delete a critical pod (22s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sCriticalPod\s\[Serial\]\s\[Disruptive\]\s\[NodeFeature\:CriticalPod\]\swhen\swe\sneed\sto\sadmit\sa\scritical\spod\sshould\sbe\sable\sto\screate\sand\sdelete\sa\scritical\spod$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/critical_pod_test.go:53
Unexpected error:
    <*errors.StatusError | 0xc000a23900>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "pods \"static-critical-pod\" not found",
            Reason: "NotFound",
            Details: {
                Name: "static-critical-pod",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 404,
        },
    }
    pods "static-critical-pod" not found
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:91
stdout/stderr: junit_ubuntu_01.xml



E2eNode Suite [k8s.io] GarbageCollect [Serial][NodeFeature:GarbageCollect] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container (10m44s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sGarbageCollect\s\[Serial\]\[NodeFeature\:GarbageCollect\]\sGarbage\sCollection\sTest\:\sMany\sRestarting\sContainers\sShould\seventually\sgarbage\scollect\scontainers\swhen\swe\sexceed\sthe\snumber\sof\sdead\scontainers\sper\scontainer$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:171
Timed out after 600.000s.
Expected
    <*errors.errorString | 0xc0010a7a70>: {
        s: "pod gc-test-pod-many-containers-many-restarts-one-pod had container with restartcount 5.  Should have been at least 4",
    }
to be nil
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:183
stdout/stderr: junit_cos-stable_01.xml

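The "Timed out after 600.000s. Expected <error> to be nil" wrapper is Gomega's rendering of an Eventually-style poll that ran out of time, so the string is just the last non-nil error the poll saw: a container's observed restart count did not match what the test was waiting for. As a rough, hedged sketch (not the logic in garbage_collector_test.go), restart counts come from the pod's container statuses; the clientset, namespace, and pod name are illustrative assumptions, and a current context-taking client-go Get is assumed:

package gcsketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// restartCounts is a hypothetical helper that returns the restart count of
// every container in a pod -- the value the failure message above is about.
func restartCounts(ctx context.Context, cs kubernetes.Interface, ns, name string) (map[string]int32, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	counts := make(map[string]int32, len(pod.Status.ContainerStatuses))
	for _, st := range pod.Status.ContainerStatuses {
		counts[st.Name] = st.RestartCount
	}
	return counts, nil
}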


E2eNode Suite [k8s.io] InodeEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods (16m25s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sInodeEviction\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sDiskPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:524
wait for pod "innocent-pod" to disappear
Expected success, but got an error:
    <*errors.StatusError | 0xc00088c640>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "etcdserver: request timed out",
            Reason: "",
            Details: nil,
            Code: 500,
        },
    }
    etcdserver: request timed out
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146
stdout/stderr: junit_ubuntu_01.xml

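Here the reported failure is not an eviction finding at all: the wait for "innocent-pod" to disappear aborted because one GET came back as a 500 with "etcdserver: request timed out". A hedged, minimal sketch of such a wait loop (not the framework's own pods.go code) that treats NotFound as success and server-side 5xx responses as retryable; the clientset, poll interval, and names are illustrative assumptions:

package evictionsketch

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodGone is a hypothetical poll loop: it succeeds once the GET
// returns NotFound and keeps polling on transient 5xx responses such as the
// "etcdserver: request timed out" 500 in the dump above.
func waitForPodGone(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		_, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		switch {
		case err == nil:
			return false, nil // pod still exists, keep waiting
		case apierrors.IsNotFound(err):
			return true, nil // pod is gone
		case isTransient(err):
			return false, nil // apiserver/etcd hiccup, retry
		default:
			return false, err // any other error is fatal
		}
	})
}

// isTransient treats server-side 5xx StatusErrors (like the empty-Reason
// Code: 500 above) as retryable.
func isTransient(err error) bool {
	if se, ok := err.(*apierrors.StatusError); ok {
		return se.ErrStatus.Code >= 500
	}
	return false
}

Whether to retry on a 500 is a design choice; the framework call at pods.go:146 evidently propagated it, so what this entry records is an apiserver/etcd timeout rather than a conclusion about the pod itself.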


E2eNode Suite [k8s.io] LocalStorageEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods (14m40s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sLocalStorageEviction\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sDiskPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:470
Timed out after 600.000s.
Expected
    <*errors.errorString | 0xc001890730>: {
        s: "pods that should be evicted are still running",
    }
to be nil
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:492
stdout/stderr: junit_cos-stable_01.xml

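LocalStorageEviction timed out after 600 s with "pods that should be evicted are still running". Eviction tests generally decide that a pod has been evicted from its status: the kubelet marks evicted pods as phase Failed with status reason "Evicted". A hedged one-function sketch of that predicate (an illustrative helper, not the eviction_test.go code):

package evictionsketch

import (
	v1 "k8s.io/api/core/v1"
)

// isEvicted is a hypothetical predicate: kubelet-evicted pods are reported
// with phase Failed and status reason "Evicted", which is what a poll like
// the one that timed out above keeps checking for.
func isEvicted(pod *v1.Pod) bool {
	return pod.Status.Phase == v1.PodFailed && pod.Status.Reason == "Evicted"
}

If the target pods stay Running for the full 600 s, either disk pressure was never signalled on the node or the eviction manager chose different victims; the kubelet log in the junit artifact is the place to tell which.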


E2eNode Suite [k8s.io] LocalStorageSoftEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods (15m3s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sLocalStorageSoftEviction\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sDiskPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:470
Timed out after 600.000s.
Expected
    <*errors.errorString | 0xc001886680>: {
        s: "pods that should be evicted are still running",
    }
to be nil
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:492
stdout/stderr: junit_cos-stable_01.xml



E2eNode Suite [k8s.io] PriorityLocalStorageEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods (10m38s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sPriorityLocalStorageEvictionOrdering\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sDiskPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:470
priority 0 pod: guaranteed-disk-pod failed
Expected
    <v1.PodPhase>: Failed
not to equal
    <v1.PodPhase>: Failed
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:633
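The ordering test asserts that the priority-0, guaranteed "guaranteed-disk-pod" should not end up in phase Failed, and here it did. The "Expected <v1.PodPhase>: Failed not to equal <v1.PodPhase>: Failed" text is Gomega's rendering of a violated NotTo(Equal(...)) assertion. A hedged sketch of that assertion style (illustrative, not the test's exact code):

package evictionsketch

import (
	"github.com/onsi/gomega"
	v1 "k8s.io/api/core/v1"
)

// assertNotEvicted is a hypothetical helper showing the assertion style
// behind the output above: a violated NotTo(Equal(...)) prints as
// "Expected <v1.PodPhase>: Failed not to equal <v1.PodPhase>: Failed".
func assertNotEvicted(g gomega.Gomega, pod *v1.Pod) {
	g.Expect(pod.Status.Phase).NotTo(gomega.Equal(v1.PodFailed),
		"priority 0 pod: %s failed", pod.Name)
}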