Result: FAILURE
Tests: 3 failed / 15 succeeded
Started: 2023-03-23 09:13
Elapsed: 1h15m

Builder: f30778d5-c95a-11ed-8be2-1221e2764aa1
infra-commit: 4f78c66b8
job-version: v1.27.0-beta.0.66+d2be69ac11346d
kubetest-version: v20230321-850d5bc856
repo: k8s.io/kubernetes
repo-commit: d2be69ac11346d2a0fab8c3c168c4255db99c56f
repos: {u'k8s.io/kubernetes': u'master'}
revision: v1.27.0-beta.0.66+d2be69ac11346d

Test Failures


E2eNode Suite [It] [sig-node] LocalStorageSoftEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods 14m28s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[It\]\s\[sig\-node\]\sLocalStorageSoftEviction\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sDiskPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
[TIMEDOUT] A suite timeout occurred
In [It] at: test/e2e_node/eviction_test.go:563 @ 03/23/23 10:26:55.784

This is the Progress Report generated when the suite timeout occurred:
  [sig-node] LocalStorageSoftEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure  should eventually evict all of the correct pods (Spec Runtime: 13m23.167s)
    test/e2e_node/eviction_test.go:563
    In [It] (Node Runtime: 12m45.904s)
      test/e2e_node/eviction_test.go:563
      At [By Step] checking eviction ordering and ensuring important pods don't fail (Step Runtime: 1.203s)
        test/e2e_node/eviction_test.go:700

      Spec Goroutine
      goroutine 6385 [select]
        k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc000209b90, {0x5b93800?, 0x88468b8}, 0x1, {0x0, 0x0, 0x0})
          vendor/github.com/onsi/gomega/internal/async_assertion.go:538
        k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc000209b90, {0x5b93800, 0x88468b8}, {0x0, 0x0, 0x0})
          vendor/github.com/onsi/gomega/internal/async_assertion.go:145
      > k8s.io/kubernetes/test/e2e_node.runEvictionTest.func1.2({0x7f3b72bac900?, 0xc000fcb770})
          test/e2e_node/eviction_test.go:585
        k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x5bb2a40?, 0xc000fcb770})
          vendor/github.com/onsi/ginkgo/v2/internal/node.go:456
        k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
          vendor/github.com/onsi/ginkgo/v2/internal/suite.go:863
        k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
          vendor/github.com/onsi/ginkgo/v2/internal/suite.go:850

      Begin Additional Progress Reports >>
        Expected success, but got an error:
            <*errors.errorString | 0xc000c65ed0>: 
            pods that should be evicted are still running: []string{"container-disk-hog-pod"}
            {
                s: "pods that should be evicted are still running: []string{\"container-disk-hog-pod\"}",
            }
      << End Additional Progress Reports

      Goroutines of Interest
      goroutine 1 [chan receive, 60 minutes]
        testing.(*T).Run(0xc000251860, {0x52ff476?, 0x53e765?}, 0x558dd48)
          /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/testing/testing.go:1630
        testing.runTests.func1(0x88173a0?)
          /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/testing/testing.go:2036
        testing.tRunner(0xc000251860, 0xc0009ffb78)
          /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/testing/testing.go:1576
        testing.runTests(0xc0002797c0?, {0x8651d70, 0x1, 0x1}, {0x30?, 0xc00059a750?, 0x0?})
          /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/testing/testing.go:2034
        testing.(*M).Run(0xc0002797c0)
          /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/testing/testing.go:1906
      > k8s.io/kubernetes/test/e2e_node.TestMain(0xffffffffffffffff?)
          test/e2e_node/e2e_node_suite_test.go:145
        main.main()
          /tmp/go-build1063036389/b001/_testmain.go:49

      goroutine 245 [syscall, 57 minutes]
        syscall.Syscall6(0x100?, 0xc0000a0cd8?, 0x6fc80d?, 0x1?, 0x52183e0?, 0xc000112070?, 0x47edfe?)
          /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/syscall/syscall_linux.go:91
        os.(*Process).blockUntilWaitable(0xc00133e090)
          /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/os/wait_waitid.go:32
        os.(*Process).wait(0xc00133e090)
          /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/os/exec_unix.go:22
        os.(*Process).Wait(...)
          /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/os/exec.go:132
        os/exec.(*Cmd).Wait(0xc000597b80)
          /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/os/exec/exec.go:890
      > k8s.io/kubernetes/test/e2e_node/services.(*server).start.func1()
          test/e2e_node/services/server.go:166
      > k8s.io/kubernetes/test/e2e_node/services.(*server).start
          test/e2e_node/services/server.go:123

[FAILED] A suite timeout occurred and then the following failure was recorded in the timedout node before it exited:
Context was cancelled after 312.751s.
Expected success, but got an error:
    <*errors.errorString | 0xc000c65ed0>: 
    pods that should be evicted are still running: []string{"container-disk-hog-pod"}
    {
        s: "pods that should be evicted are still running: []string{\"container-disk-hog-pod\"}",
    }
In [It] at: test/e2e_node/eviction_test.go:585 @ 03/23/23 10:26:55.801

There were additional failures detected after the initial failure. These are visible in the timeline

				
stdout/stderr: junit_fedora01.xml


E2eNode Suite [It] [sig-node] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause PIDPressure should eventually evict all of the correct pods 3m24s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[It\]\s\[sig\-node\]\sPriorityPidEvictionOrdering\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sPIDPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
[FAILED] Timed out after 120.001s.
Expected
    <*errors.errorString | 0xc00145f790>: 
    NodeCondition: PIDPressure not encountered
    {
        s: "NodeCondition: PIDPressure not encountered",
    }
to be nil
In [It] at: test/e2e_node/eviction_test.go:571 @ 03/23/23 10:05:49.633

				
stdout/stderr: junit_fedora01.xml


kubetest Node Tests 1h14m

error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup -vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-jkns-gke-ubuntu-1-6-flaky --zone=us-west1-b --ssh-user=core --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=1 --focus="\[NodeFeature:Eviction\]" --test_args=--container-runtime-endpoint=unix:///var/run/crio/crio.sock --container-runtime-process-name=/usr/local/bin/crio --container-runtime-pid-file= --kubelet-flags="--cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service" --extra-log="{\"name\": \"crio.log\", \"journalctl\": [\"-u\", \"crio\"]}" --test-timeout=5h0m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/crio/latest/image-config-cgrpv1.yaml: exit status 1
from junit_runner.xml


Passed Tests: 15 (not shown)
Skipped Tests: 400 (not shown)