Result: FAILURE
Tests: 2 failed / 17 succeeded
Started: 2023-03-17 04:36
Elapsed: 1h14m

Revision
Builder: 52acdc35-c47d-11ed-93d6-92b4ce3fddda
infra-commit: 97064ce01
job-version: v1.27.0-beta.0.6+8e01ee79bf78ae
kubetest-version: v20230222-b5208facd4
repo: k8s.io/kubernetes
repo-commit: 8e01ee79bf78aee1f6a42443acdfc33c89c09952
repos: {'k8s.io/kubernetes': 'master'}
revision: v1.27.0-beta.0.6+8e01ee79bf78ae

Test Failures


E2eNode Suite [It] [sig-node] PriorityLocalStorageEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods 3m44s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[It\]\s\[sig\-node\]\sPriorityLocalStorageEvictionOrdering\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sDiskPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
[TIMEDOUT] A suite timeout occurred
In [It] at: test/e2e_node/eviction_test.go:563 @ 03/17/23 05:50:09.263

This is the Progress Report generated when the suite timeout occurred:
  [sig-node] PriorityLocalStorageEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure  should eventually evict all of the correct pods (Spec Runtime: 3m9.457s)
    test/e2e_node/eviction_test.go:563
    In [It] (Node Runtime: 2m32.2s)
      test/e2e_node/eviction_test.go:563
      At [By Step] Waiting for node to have NodeCondition: DiskPressure (Step Runtime: 2m32.199s)
        test/e2e_node/eviction_test.go:564

      Spec Goroutine
      goroutine 7677 [select]
        k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc00162e620, {0x5b8e6f0?, 0x8840878}, 0x1, {0x0, 0x0, 0x0})
          vendor/github.com/onsi/gomega/internal/async_assertion.go:538
        k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc00162e620, {0x5b8e6f0, 0x8840878}, {0x0, 0x0, 0x0})
          vendor/github.com/onsi/gomega/internal/async_assertion.go:145
      > k8s.io/kubernetes/test/e2e_node.runEvictionTest.func1.2({0x7fc6d09ca118?, 0xc001abf290})
          test/e2e_node/eviction_test.go:571
        k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x5badba0?, 0xc001abf290})
          vendor/github.com/onsi/ginkgo/v2/internal/node.go:456
        k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
          vendor/github.com/onsi/ginkgo/v2/internal/suite.go:863
        k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
          vendor/github.com/onsi/ginkgo/v2/internal/suite.go:850

      Begin Additional Progress Reports >>
        Expected
            <*errors.errorString | 0xc000cf26d0>: 
            NodeCondition: DiskPressure not encountered
            {
                s: "NodeCondition: DiskPressure not encountered",
            }
        to be nil
      << End Additional Progress Reports

      Goroutines of Interest
      goroutine 1 [chan receive, 60 minutes]
        testing.(*T).Run(0xc0001f5d40, {0x52fb0d6?, 0x53e765?}, 0x5589648)
          /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/testing/testing.go:1630
        testing.runTests.func1(0x8811360?)
          /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/testing/testing.go:2036
        testing.tRunner(0xc0001f5d40, 0xc000b91b78)
          /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/testing/testing.go:1576
        testing.runTests(0xc000512640?, {0x864bd70, 0x1, 0x1}, {0x30?, 0xc0009aed50?, 0x0?})
          /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/testing/testing.go:2034
        testing.(*M).Run(0xc000512640)
          /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/testing/testing.go:1906
      > k8s.io/kubernetes/test/e2e_node.TestMain(0xffffffffffffffff?)
          test/e2e_node/e2e_node_suite_test.go:145
        main.main()
          /tmp/go-build748732425/b001/_testmain.go:49

      goroutine 245 [syscall, 59 minutes]
        syscall.Syscall6(0x100?, 0xc001439cd8?, 0x6fc80d?, 0x1?, 0x5214040?, 0xc000143ce0?, 0x47edfe?)
          /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/syscall/syscall_linux.go:91
        os.(*Process).blockUntilWaitable(0xc000c60a50)
          /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/os/wait_waitid.go:32
        os.(*Process).wait(0xc000c60a50)
          /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/os/exec_unix.go:22
        os.(*Process).Wait(...)
          /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/os/exec.go:132
        os/exec.(*Cmd).Wait(0xc000023e40)
          /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/os/exec/exec.go:890
      > k8s.io/kubernetes/test/e2e_node/services.(*server).start.func1()
          test/e2e_node/services/server.go:166
      > k8s.io/kubernetes/test/e2e_node/services.(*server).start
          test/e2e_node/services/server.go:123
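Goroutine 245 above is not part of the failure: it is the e2e services harness keeping a long-lived service process alive for the duration of the suite, parked in os/exec.(*Cmd).Wait. A minimal sketch of that start-and-wait shape, with illustrative names rather than the actual test/e2e_node/services/server.go code:

package services

import (
	"fmt"
	"os/exec"
)

// startAndMonitor starts a long-lived service process and waits for it
// in a background goroutine, the same shape as the 59-minute Wait in
// (*server).start.func1 above. The name and signature are illustrative
// assumptions, not the real server.go API.
func startAndMonitor(cmd *exec.Cmd) (<-chan error, error) {
	if err := cmd.Start(); err != nil {
		return nil, fmt.Errorf("starting %q: %w", cmd.Path, err)
	}
	done := make(chan error, 1)
	go func() {
		// Blocks until the service exits; for a healthy service this
		// goroutine stays parked for the whole suite run.
		done <- cmd.Wait()
	}()
	return done, nil
}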

[FAILED] A suite timeout occurred and then the following failure was recorded in the timedout node before it exited:
Context was cancelled after 152.209s.
Expected
    <*errors.errorString | 0xc000cf26d0>: 
    NodeCondition: DiskPressure not encountered
    {
        s: "NodeCondition: DiskPressure not encountered",
    }
to be nil
In [It] at: test/e2e_node/eviction_test.go:571 @ 03/17/23 05:50:09.273

There were additional failures detected after the initial failure. These are visible in the timeline
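For readers unfamiliar with the failure shape: the "Expected ... to be nil" output is the standard Gomega polling pattern, in which the test repeatedly checks the node's conditions until DiskPressure appears or the suite timeout cancels the context. A minimal sketch of that pattern, assuming a hypothetical hasDiskPressure helper and plain client-go wiring (the real test in test/e2e_node/eviction_test.go uses the e2e framework's helpers):

package e2esketch

import (
	"context"
	"fmt"
	"time"

	"github.com/onsi/gomega"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// hasDiskPressure returns nil once the node reports DiskPressure=True;
// otherwise it returns the error string seen in the failure above.
// (Hypothetical helper, not the actual eviction_test.go code.)
func hasDiskPressure(ctx context.Context, c kubernetes.Interface, nodeName string) error {
	node, err := c.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == v1.NodeDiskPressure && cond.Status == v1.ConditionTrue {
			return nil
		}
	}
	return fmt.Errorf("NodeCondition: DiskPressure not encountered")
}

// waitForDiskPressure polls until the condition shows up. When the
// suite timeout cancels ctx, Gomega reports the last error returned by
// the poll, which is exactly the [FAILED] message recorded here.
func waitForDiskPressure(ctx context.Context, g gomega.Gomega, c kubernetes.Interface, nodeName string) {
	g.Eventually(ctx, func() error {
		return hasDiskPressure(ctx, c, nodeName)
	}).WithPolling(10 * time.Second).WithTimeout(5 * time.Minute).Should(gomega.Succeed())
}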

stdout/stderr from junit_fedora01.xml



kubetest Node Tests 1h13m

error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup -vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-jkns-gke-ubuntu-reboot --zone=us-west1-b --ssh-user=core --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=1 --focus="\[NodeFeature:Eviction\]" --test_args=--container-runtime-endpoint=unix:///var/run/crio/crio.sock --container-runtime-process-name=/usr/local/bin/crio --container-runtime-pid-file= --kubelet-flags="--cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service" --extra-log="{\"name\": \"crio.log\", \"journalctl\": [\"-u\", \"crio\"]}" --test-timeout=5h0m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/crio/latest/image-config-cgrpv1.yaml: exit status 1
from junit_runner.xml



Passed Tests: 17
Skipped Tests: 399