Result: FAILURE
Tests: 4 failed / 15 succeeded
Started: 2023-03-18 07:37
Elapsed: 1h12m
Builder: c6bffa82-c55f-11ed-93d6-92b4ce3fddda
infra-commit: ade17619a
job-version: v1.27.0-beta.0.22+fe91bc257b505e
kubetest-version: v20230222-b5208facd4
repo: k8s.io/kubernetes
repo-commit: fe91bc257b505eb6057eb50b9c550a7c63e9fb91
repos: {u'k8s.io/kubernetes': u'master'}
revision: v1.27.0-beta.0.22+fe91bc257b505e

Test Failures


E2eNode Suite [It] [sig-node] LocalStorageEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods (5m2s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[It\]\s\[sig\-node\]\sLocalStorageEviction\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sDiskPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
[TIMEDOUT] A suite timeout occurred
In [It] at: test/e2e_node/eviction_test.go:563 @ 03/18/23 08:48:58.771

This is the Progress Report generated when the suite timeout occurred:
  [sig-node] LocalStorageEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure  should eventually evict all of the correct pods (Spec Runtime: 4m27.537s)
    test/e2e_node/eviction_test.go:563
    In [It] (Node Runtime: 3m50.25s)
      test/e2e_node/eviction_test.go:563
      At [By Step] Waiting for node to have NodeCondition: DiskPressure (Step Runtime: 3m50.25s)
        test/e2e_node/eviction_test.go:564

      Spec Goroutine
      goroutine 7650 [select]
        k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc00050ae70, {0x5b8e930?, 0x8840878}, 0x1, {0x0, 0x0, 0x0})
          vendor/github.com/onsi/gomega/internal/async_assertion.go:538
        k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc00050ae70, {0x5b8e930, 0x8840878}, {0x0, 0x0, 0x0})
          vendor/github.com/onsi/gomega/internal/async_assertion.go:145
      > k8s.io/kubernetes/test/e2e_node.runEvictionTest.func1.2({0x7f60e25342c0?, 0xc00064cd20})
          test/e2e_node/eviction_test.go:571
        k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x5badde0?, 0xc00064cd20})
          vendor/github.com/onsi/ginkgo/v2/internal/node.go:456
        k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
          vendor/github.com/onsi/ginkgo/v2/internal/suite.go:863
        k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
          vendor/github.com/onsi/ginkgo/v2/internal/suite.go:850

      Begin Additional Progress Reports >>
        Expected
            <*errors.errorString | 0xc001480aa0>: 
            NodeCondition: DiskPressure not encountered
            {
                s: "NodeCondition: DiskPressure not encountered",
            }
        to be nil
      << End Additional Progress Reports

      Goroutines of Interest
      goroutine 1 [chan receive, 60 minutes]
        testing.(*T).Run(0xc0004c96c0, {0x52fb256?, 0x53e765?}, 0x55897c8)
          /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/testing/testing.go:1630
        testing.runTests.func1(0x8811360?)
          /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/testing/testing.go:2036
        testing.tRunner(0xc0004c96c0, 0xc000a3fb78)
          /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/testing/testing.go:1576
        testing.runTests(0xc000560d20?, {0x864bd70, 0x1, 0x1}, {0x30?, 0xc0008e1c50?, 0x0?})
          /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/testing/testing.go:2034
        testing.(*M).Run(0xc000560d20)
          /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/testing/testing.go:1906
      > k8s.io/kubernetes/test/e2e_node.TestMain(0xffffffffffffffff?)
          test/e2e_node/e2e_node_suite_test.go:145
        main.main()
          /tmp/go-build1418458575/b001/_testmain.go:49

      goroutine 245 [syscall, 59 minutes]
        syscall.Syscall6(0x100?, 0xc0013c5cd8?, 0x6fc80d?, 0x1?, 0x52141c0?, 0xc000113d50?, 0x47edfe?)
          /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/syscall/syscall_linux.go:91
        os.(*Process).blockUntilWaitable(0xc000dbeb70)
          /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/os/wait_waitid.go:32
        os.(*Process).wait(0xc000dbeb70)
          /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/os/exec_unix.go:22
        os.(*Process).Wait(...)
          /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/os/exec.go:132
        os/exec.(*Cmd).Wait(0xc0006542c0)
          /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/os/exec/exec.go:890
      > k8s.io/kubernetes/test/e2e_node/services.(*server).start.func1()
          test/e2e_node/services/server.go:166
      > k8s.io/kubernetes/test/e2e_node/services.(*server).start
          test/e2e_node/services/server.go:123

[FAILED] A suite timeout occurred and then the following failure was recorded in the timedout node before it exited:
Context was cancelled after 230.258s.
Expected
    <*errors.errorString | 0xc001480aa0>: 
    NodeCondition: DiskPressure not encountered
    {
        s: "NodeCondition: DiskPressure not encountered",
    }
to be nil
In [It] at: test/e2e_node/eviction_test.go:571 @ 03/18/23 08:48:58.78

There were additional failures detected after the initial failure. These are visible in the timeline.

from junit_fedora01.xml
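
For context on the failure mode: the assertion at eviction_test.go:571 sits inside a gomega Eventually-style poll (visible as internal.(*AsyncAssertion).match in the spec goroutine above). The test repeatedly reads the node's status and waits for the DiskPressure condition to turn true; here the suite timeout cancelled the context after 230s before that ever happened. Below is a minimal sketch of that polling shape using client-go. It is not the actual eviction_test.go code; hasNodeCondition and waitForNodeCondition are hypothetical names.

package evictionsketch

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// hasNodeCondition reports whether the named node currently reports the
// given condition (e.g. DiskPressure or PIDPressure) with status True.
func hasNodeCondition(ctx context.Context, c kubernetes.Interface, nodeName string, ct v1.NodeConditionType) (bool, error) {
	node, err := c.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == ct && cond.Status == v1.ConditionTrue {
			return true, nil
		}
	}
	return false, nil
}

// waitForNodeCondition polls until the condition appears, the timeout
// elapses, or ctx is cancelled (as the suite timeout did here); the error
// string mirrors the "NodeCondition: ... not encountered" message above.
func waitForNodeCondition(ctx context.Context, c kubernetes.Interface, nodeName string, ct v1.NodeConditionType, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if ctx.Err() != nil {
			return fmt.Errorf("NodeCondition: %s not encountered: %w", ct, ctx.Err())
		}
		if ok, err := hasNodeCondition(ctx, c, nodeName, ct); err == nil && ok {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("NodeCondition: %s not encountered", ct)
}

When this wait times out, the node never reported DiskPressure at all, so the usual suspects are the pressure-inducing workload, the kubelet's eviction thresholds, or disk sizing on the image under test rather than the assertion itself.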


E2eNode Suite [It] [sig-node] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause PIDPressure should eventually evict all of the correct pods (3m16s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[It\]\s\[sig\-node\]\sPriorityPidEvictionOrdering\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sPIDPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
[FAILED] Timed out after 120.000s.
Expected
    <*errors.errorString | 0xc000cf22c0>: 
    NodeCondition: PIDPressure not encountered
    {
        s: "NodeCondition: PIDPressure not encountered",
    }
to be nil
In [It] at: test/e2e_node/eviction_test.go:571 @ 03/18/23 08:03:08.006

from junit_fedora01.xml
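
Tests under "containers that should cause PIDPressure" induce the condition by running a pod that keeps spawning processes until the kubelet's pid-availability eviction threshold trips; here the node never reported PIDPressure within the 120s window. A hypothetical example of that kind of fixture follows (image, command, and names are illustrative, not the actual test's pod):

package evictionsketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// pidHogPod returns a pod that forks background sleeps in a tight loop,
// consuming process IDs until the node's pid eviction threshold should
// trip. Illustrative only; the real fixture lives in test/e2e_node.
func pidHogPod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pidhog-pod"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "pidhog",
				Image:   "busybox",
				Command: []string{"sh", "-c", "while true; do sleep 3600 & done"},
			}},
		},
	}
}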


E2eNode Suite [It] [sig-node] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause PIDPressure; PodDisruptionConditions enabled [NodeFeature:PodDisruptionConditions] should eventually evict all of the correct pods (2m46s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[It\]\s\[sig\-node\]\sPriorityPidEvictionOrdering\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sPIDPressure\;\sPodDisruptionConditions\senabled\s\[NodeFeature\:PodDisruptionConditions\]\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
[FAILED] Timed out after 120.000s.
Expected
    <*errors.errorString | 0xc001518430>: 
    NodeCondition: PIDPressure not encountered
    {
        s: "NodeCondition: PIDPressure not encountered",
    }
to be nil
In [It] at: test/e2e_node/eviction_test.go:571 @ 03/18/23 07:53:35.188

from junit_fedora01.xml
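
This variant exercises the same PIDPressure wait as the previous test (and timed out the same way at eviction_test.go:571), but with the PodDisruptionConditions feature gate enabled it additionally expects evicted pods to carry a DisruptionTarget condition in their status. A minimal sketch of such a status check, with hypothetical naming:

package evictionsketch

import v1 "k8s.io/api/core/v1"

// hasDisruptionTarget reports whether a pod's status carries the
// "DisruptionTarget" condition that PodDisruptionConditions attaches to
// pods terminated by eviction. Sketch only; the real assertions are in
// eviction_test.go.
func hasDisruptionTarget(pod *v1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == v1.PodConditionType("DisruptionTarget") && cond.Status == v1.ConditionTrue {
			return true
		}
	}
	return false
}

Since the recorded failure is the PIDPressure wait itself, the DisruptionTarget check would not have been reached in this run.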


kubetest Node Tests (1h11m)

error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup -vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-jkns-gke-ubuntu-slow --zone=us-west1-b --ssh-user=core --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=1 --focus="\[NodeFeature:Eviction\]" --test_args=--container-runtime-endpoint=unix:///var/run/crio/crio.sock --container-runtime-process-name=/usr/local/bin/crio --container-runtime-pid-file= --kubelet-flags="--cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service" --extra-log="{\"name\": \"crio.log\", \"journalctl\": [\"-u\", \"crio\"]}" --test-timeout=5h0m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/crio/latest/image-config-cgrpv1.yaml: exit status 1
from junit_runner.xml


15 Passed Tests

399 Skipped Tests