Result: FAILURE
Tests: 4 failed / 16 succeeded
Started: 2023-03-20 22:39
Elapsed: 1h14m

Revision
Builder: 0252d26c-c770-11ed-9e5f-baf334638f51
infra-commit: e39fd1ac2
job-version: v1.27.0-beta.0.29+c9ff2866682432
kubetest-version: v20230222-b5208facd4
repo: k8s.io/kubernetes
repo-commit: c9ff2866682432075da1a961bc5c3f681b34c8ea
repos: {u'k8s.io/kubernetes': u'master'}
revision: v1.27.0-beta.0.29+c9ff2866682432

Test Failures


E2eNode Suite [It] [sig-node] LocalStorageCapacityIsolationEviction [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolation][NodeFeature:Eviction] when we run containers that should cause evictions due to pod local storage violations should eventually evict all of the correct pods 1m41s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[It\]\s\[sig\-node\]\sLocalStorageCapacityIsolationEviction\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\s\[Feature\:LocalStorageCapacityIsolation\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sevictions\sdue\sto\spod\slocal\sstorage\sviolations\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
[TIMEDOUT] A suite timeout occurred
In [It] at: test/e2e_node/eviction_test.go:563 @ 03/20/23 23:52:34.211

This is the Progress Report generated when the suite timeout occurred:
  [sig-node] LocalStorageCapacityIsolationEviction [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolation][NodeFeature:Eviction] when we run containers that should cause evictions due to pod local storage violations  should eventually evict all of the correct pods (Spec Runtime: 1m6.503s)
    test/e2e_node/eviction_test.go:563
    In [It] (Node Runtime: 27.187s)
      test/e2e_node/eviction_test.go:563
      At [By Step] checking eviction ordering and ensuring important pods don't fail (Step Runtime: 506ms)
        test/e2e_node/eviction_test.go:700

      Spec Goroutine
      goroutine 8226 [select]
        k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc00094f7a0, {0x5b8fd00?, 0x8841898}, 0x1, {0x0, 0x0, 0x0})
          vendor/github.com/onsi/gomega/internal/async_assertion.go:538
        k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc00094f7a0, {0x5b8fd00, 0x8841898}, {0x0, 0x0, 0x0})
          vendor/github.com/onsi/gomega/internal/async_assertion.go:145
      > k8s.io/kubernetes/test/e2e_node.runEvictionTest.func1.2({0x7fa5f81c5d00?, 0xc0018b24e0})
          test/e2e_node/eviction_test.go:585
        k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x5baef40?, 0xc0018b24e0})
          vendor/github.com/onsi/ginkgo/v2/internal/node.go:456
        k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
          vendor/github.com/onsi/ginkgo/v2/internal/suite.go:863
        k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
          vendor/github.com/onsi/ginkgo/v2/internal/suite.go:850

      Begin Additional Progress Reports >>
        Expected success, but got an error:
            <*errors.errorString | 0xc001d1e5f0>: 
            pods that should be evicted are still running: []string{"emptydir-disk-sizelimit-pod", "container-disk-limit-pod", "container-emptydir-disk-limit-pod"}
            {
                s: "pods that should be evicted are still running: []string{\"emptydir-disk-sizelimit-pod\", \"container-disk-limit-pod\", \"container-emptydir-disk-limit-pod\"}",
            }
      << End Additional Progress Reports

      Goroutines of Interest
      goroutine 1 [chan receive, 60 minutes]
        testing.(*T).Run(0xc0000eb380, {0x52fc376?, 0x53e765?}, 0x558a8e8)
          /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/testing/testing.go:1630
        testing.runTests.func1(0x8812380?)
          /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/testing/testing.go:2036
        testing.tRunner(0xc0000eb380, 0xc0009ffb78)
          /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/testing/testing.go:1576
        testing.runTests(0xc0009590e0?, {0x864cd70, 0x1, 0x1}, {0x88438a0?, 0xc000654750?, 0x0?})
          /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/testing/testing.go:2034
        testing.(*M).Run(0xc0009590e0)
          /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/testing/testing.go:1906
      > k8s.io/kubernetes/test/e2e_node.TestMain(0xffffffffffffffff?)
          test/e2e_node/e2e_node_suite_test.go:145
        main.main()
          /tmp/go-build3463134305/b001/_testmain.go:49

      goroutine 245 [syscall, 59 minutes]
        syscall.Syscall6(0x100?, 0xc000de8cd8?, 0x6fc80d?, 0x1?, 0x52152e0?, 0xc0006f49a0?, 0x47edfe?)
          /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/syscall/syscall_linux.go:91
        os.(*Process).blockUntilWaitable(0xc001308e10)
          /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/os/wait_waitid.go:32
        os.(*Process).wait(0xc001308e10)
          /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/os/exec_unix.go:22
        os.(*Process).Wait(...)
          /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/os/exec.go:132
        os/exec.(*Cmd).Wait(0xc0003cfb80)
          /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/os/exec/exec.go:890
      > k8s.io/kubernetes/test/e2e_node/services.(*server).start.func1()
          test/e2e_node/services/server.go:166
      > k8s.io/kubernetes/test/e2e_node/services.(*server).start
          test/e2e_node/services/server.go:123

[FAILED] A suite timeout occurred and then the following failure was recorded in the timedout node before it exited:
Context was cancelled after 27.154s.
Expected success, but got an error:
    <*errors.errorString | 0xc001d1e5f0>: 
    pods that should be evicted are still running: []string{"emptydir-disk-sizelimit-pod", "container-disk-limit-pod", "container-emptydir-disk-limit-pod"}
    {
        s: "pods that should be evicted are still running: []string{\"emptydir-disk-sizelimit-pod\", \"container-disk-limit-pod\", \"container-emptydir-disk-limit-pod\"}",
    }
In [It] at: test/e2e_node/eviction_test.go:585 @ 03/20/23 23:52:34.215

There were additional failures detected after the initial failure. These are visible in the timeline

				
stdout/stderr from junit_fedora01.xml

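The failure above lists three pods that were expected to be evicted for exceeding their local-storage limits but were still running when the suite timed out. For local debugging, a minimal client-go sketch along these lines (not part of the test suite; the kubeconfig path and the namespace name are assumptions, since the e2e_node harness creates its own namespace per test) prints the current phase of each suspect pod:

```go
// podcheck.go: print the phase of the pods named in the eviction failure.
// Assumptions: a kubeconfig at ~/.kube/config that points at the API server
// the test harness starts, and a placeholder namespace name.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Pods reported as "should be evicted but still running" in the failure above.
	suspects := []string{
		"emptydir-disk-sizelimit-pod",
		"container-disk-limit-pod",
		"container-emptydir-disk-limit-pod",
	}
	const namespace = "eviction-test" // assumption: replace with the namespace the test created

	for _, name := range suspects {
		pod, err := clientset.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			fmt.Printf("%s: %v\n", name, err)
			continue
		}
		fmt.Printf("%s: phase=%s reason=%s\n", pod.Name, pod.Status.Phase, pod.Status.Reason)
	}
}
```

A pod that was actually evicted normally reports phase Failed with reason Evicted, so any of these still showing Running is consistent with the error message above.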


E2eNode Suite [It] [sig-node] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause PIDPressure should eventually evict all of the correct pods 3m18s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[It\]\s\[sig\-node\]\sPriorityPidEvictionOrdering\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sPIDPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
[FAILED] Timed out after 120.000s.
Expected
    <*errors.errorString | 0xc001076700>: 
    NodeCondition: PIDPressure not encountered
    {
        s: "NodeCondition: PIDPressure not encountered",
    }
to be nil
In [It] at: test/e2e_node/eviction_test.go:571 @ 03/20/23 23:50:43.116

				
stdout/stderr from junit_fedora01.xml



E2eNode Suite [It] [sig-node] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause PIDPressure; PodDisruptionConditions enabled [NodeFeature:PodDisruptionConditions] should eventually evict all of the correct pods 2m44s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[It\]\s\[sig\-node\]\sPriorityPidEvictionOrdering\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sPIDPressure\;\sPodDisruptionConditions\senabled\s\[NodeFeature\:PodDisruptionConditions\]\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
[FAILED] Timed out after 120.000s.
Expected
    <*errors.errorString | 0xc001cc5410>: 
    NodeCondition: PIDPressure not encountered
    {
        s: "NodeCondition: PIDPressure not encountered",
    }
to be nil
In [It] at: test/e2e_node/eviction_test.go:571 @ 03/20/23 23:47:56.616

				
stdout/stderr from junit_fedora01.xml

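Both PIDPressure cases fail the same way: the kubelet never reports the PIDPressure node condition within the 120s window, so the eviction check is never reached. As a rough check when reproducing, a client-go sketch like the following (again a hedged sketch, not the test's own logic; it assumes a kubeconfig for the API server the harness starts) prints each node's PIDPressure condition:

```go
// pidpressure.go: print the PIDPressure condition of every registered node.
// Assumption: kubeconfig at ~/.kube/config pointing at the harness's API server.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, node := range nodes.Items {
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodePIDPressure {
				fmt.Printf("%s: PIDPressure=%s (reason=%s, message=%s)\n",
					node.Name, cond.Status, cond.Reason, cond.Message)
			}
		}
	}
}
```

If the condition stays False while the PID-consuming pods run, the host never reached PID pressure, which is consistent with the "NodeCondition: PIDPressure not encountered" timeout above.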


kubetest Node Tests 1h13m

error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup -vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-jkns-gke-ubuntu --zone=us-west1-b --ssh-user=core --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=1 --focus="\[NodeFeature:Eviction\]" --test_args=--container-runtime-endpoint=unix:///var/run/crio/crio.sock --container-runtime-process-name=/usr/local/bin/crio --container-runtime-pid-file= --kubelet-flags="--cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service" --extra-log="{\"name\": \"crio.log\", \"journalctl\": [\"-u\", \"crio\"]}" --test-timeout=5h0m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/crio/latest/image-config-cgrpv1.yaml: exit status 1
from junit_runner.xml



16 Passed Tests

398 Skipped Tests