go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[It\]\s\[sig\-node\]\sLocalStorageCapacityIsolationEviction\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\s\[Feature\:LocalStorageCapacityIsolation\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sevictions\sdue\sto\spod\slocal\sstorage\sviolations\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
[TIMEDOUT] A suite timeout occurred In [It] at: test/e2e_node/eviction_test.go:563 @ 03/20/23 23:52:34.211 This is the Progress Report generated when the suite timeout occurred: [sig-node] LocalStorageCapacityIsolationEviction [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolation][NodeFeature:Eviction] when we run containers that should cause evictions due to pod local storage violations should eventually evict all of the correct pods (Spec Runtime: 1m6.503s) test/e2e_node/eviction_test.go:563 In [It] (Node Runtime: 27.187s) test/e2e_node/eviction_test.go:563 At [By Step] checking eviction ordering and ensuring important pods don't fail (Step Runtime: 506ms) test/e2e_node/eviction_test.go:700 Spec Goroutine goroutine 8226 [select] k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc00094f7a0, {0x5b8fd00?, 0x8841898}, 0x1, {0x0, 0x0, 0x0}) vendor/github.com/onsi/gomega/internal/async_assertion.go:538 k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc00094f7a0, {0x5b8fd00, 0x8841898}, {0x0, 0x0, 0x0}) vendor/github.com/onsi/gomega/internal/async_assertion.go:145 > k8s.io/kubernetes/test/e2e_node.runEvictionTest.func1.2({0x7fa5f81c5d00?, 0xc0018b24e0}) test/e2e_node/eviction_test.go:585 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x5baef40?, 0xc0018b24e0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:456 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:863 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:850 Begin Additional Progress Reports >> Expected success, but got an error: <*errors.errorString | 0xc001d1e5f0>: pods that should be evicted are still running: []string{"emptydir-disk-sizelimit-pod", "container-disk-limit-pod", "container-emptydir-disk-limit-pod"} { s: "pods that should be evicted are still running: []string{\"emptydir-disk-sizelimit-pod\", \"container-disk-limit-pod\", \"container-emptydir-disk-limit-pod\"}", } << End Additional Progress Reports Goroutines of Interest goroutine 1 [chan receive, 60 minutes] testing.(*T).Run(0xc0000eb380, {0x52fc376?, 0x53e765?}, 0x558a8e8) /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/testing/testing.go:1630 testing.runTests.func1(0x8812380?) /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/testing/testing.go:2036 testing.tRunner(0xc0000eb380, 0xc0009ffb78) /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/testing/testing.go:1576 testing.runTests(0xc0009590e0?, {0x864cd70, 0x1, 0x1}, {0x88438a0?, 0xc000654750?, 0x0?}) /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/testing/testing.go:2034 testing.(*M).Run(0xc0009590e0) /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/testing/testing.go:1906 > k8s.io/kubernetes/test/e2e_node.TestMain(0xffffffffffffffff?) test/e2e_node/e2e_node_suite_test.go:145 main.main() /tmp/go-build3463134305/b001/_testmain.go:49 goroutine 245 [syscall, 59 minutes] syscall.Syscall6(0x100?, 0xc000de8cd8?, 0x6fc80d?, 0x1?, 0x52152e0?, 0xc0006f49a0?, 0x47edfe?) 
/go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/syscall/syscall_linux.go:91 os.(*Process).blockUntilWaitable(0xc001308e10) /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/os/wait_waitid.go:32 os.(*Process).wait(0xc001308e10) /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/os/exec_unix.go:22 os.(*Process).Wait(...) /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/os/exec.go:132 os/exec.(*Cmd).Wait(0xc0003cfb80) /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/os/exec/exec.go:890 > k8s.io/kubernetes/test/e2e_node/services.(*server).start.func1() test/e2e_node/services/server.go:166 > k8s.io/kubernetes/test/e2e_node/services.(*server).start test/e2e_node/services/server.go:123 [FAILED] A suite timeout occurred and then the following failure was recorded in the timedout node before it exited: Context was cancelled after 27.154s. Expected success, but got an error: <*errors.errorString | 0xc001d1e5f0>: pods that should be evicted are still running: []string{"emptydir-disk-sizelimit-pod", "container-disk-limit-pod", "container-emptydir-disk-limit-pod"} { s: "pods that should be evicted are still running: []string{\"emptydir-disk-sizelimit-pod\", \"container-disk-limit-pod\", \"container-emptydir-disk-limit-pod\"}", } In [It] at: test/e2e_node/eviction_test.go:585 @ 03/20/23 23:52:34.215 There were additional failures detected after the initial failure. These are visible in the timeline from junit_fedora01.xml
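For orientation, the [FAILED] text above comes from a Gomega polling assertion (the AsyncAssertion frames at eviction_test.go:585 in the spec goroutine): the test keeps re-checking pod phases until every limit-violating pod has been evicted, and the suite deadline interrupted it mid-poll. Below is a minimal sketch of that polling pattern, not the actual eviction_test.go code; the pendingEvictions helper, timeout, and interval are illustrative.

// Sketch of the Eventually/Should polling pattern seen in the stack trace above.
package example

import (
	"fmt"
	"testing"
	"time"

	"github.com/onsi/gomega"
)

// pendingEvictions stands in for the test's real check, which lists the pods
// via the API server and returns an error naming any pod that should already
// have been evicted but is still Running.
func pendingEvictions() error {
	stillRunning := []string{} // populated from the pod list in the real test
	if len(stillRunning) > 0 {
		return fmt.Errorf("pods that should be evicted are still running: %v", stillRunning)
	}
	return nil
}

func TestEvictionsEventuallyHappen(t *testing.T) {
	g := gomega.NewWithT(t)
	// If a suite-level deadline fires first, the polling assertion is
	// interrupted and its last error is recorded, which is exactly the
	// "pods that should be evicted are still running" failure in this log.
	g.Eventually(pendingEvictions).
		WithTimeout(10 * time.Minute).
		WithPolling(2 * time.Second).
		Should(gomega.Succeed())
}

Polling rather than a one-shot check matters here because the kubelet's eviction manager only notices local-storage overuse on its periodic housekeeping pass, so evictions can lag the writes by a noticeable interval.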
> Enter [BeforeEach] [sig-node] LocalStorageCapacityIsolationEviction [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolation][NodeFeature:Eviction] - set up framework | framework.go:191 @ 03/20/23 23:51:27.708 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:211 @ 03/20/23 23:51:27.708 STEP: Building a namespace api object, basename localstorage-eviction-test - test/e2e/framework/framework.go:250 @ 03/20/23 23:51:27.709 Mar 20 23:51:27.719: INFO: Skipping waiting for service account < Exit [BeforeEach] [sig-node] LocalStorageCapacityIsolationEviction [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolation][NodeFeature:Eviction] - set up framework | framework.go:191 @ 03/20/23 23:51:27.719 (11ms) > Enter [BeforeEach] [sig-node] LocalStorageCapacityIsolationEviction [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolation][NodeFeature:Eviction] - test/e2e/framework/metrics/init/init.go:33 @ 03/20/23 23:51:27.719 < Exit [BeforeEach] [sig-node] LocalStorageCapacityIsolationEviction [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolation][NodeFeature:Eviction] - test/e2e/framework/metrics/init/init.go:33 @ 03/20/23 23:51:27.719 (0s) > Enter [BeforeEach] when we run containers that should cause evictions due to pod local storage violations - test/e2e_node/util.go:176 @ 03/20/23 23:51:27.719 STEP: Stopping the kubelet - test/e2e_node/util.go:200 @ 03/20/23 23:51:27.74 Mar 20 23:51:27.810: INFO: Get running kubelet with systemctl: UNIT LOAD ACTIVE SUB DESCRIPTION kubelet-20230320T225220.service loaded active running /tmp/node-e2e-20230320T225220/kubelet --kubeconfig /tmp/node-e2e-20230320T225220/kubeconfig --root-dir /var/lib/kubelet --v 4 --hostname-override tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 --container-runtime-endpoint unix:///var/run/crio/crio.sock --config /tmp/node-e2e-20230320T225220/kubelet-config --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service LOAD = Reflects whether the unit definition was properly loaded. ACTIVE = The high-level unit activation state, i.e. generalization of SUB. SUB = The low-level unit activation state, values depend on unit type. 1 loaded units listed. 
, kubelet-20230320T225220 W0320 23:51:27.877829 2678 util.go:477] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:42686->127.0.0.1:10248: read: connection reset by peer STEP: Starting the kubelet - test/e2e_node/util.go:216 @ 03/20/23 23:51:27.887 W0320 23:51:27.924610 2678 util.go:477] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused < Exit [BeforeEach] when we run containers that should cause evictions due to pod local storage violations - test/e2e_node/util.go:176 @ 03/20/23 23:51:32.928 (5.209s) > Enter [BeforeEach] TOP-LEVEL - test/e2e_node/eviction_test.go:549 @ 03/20/23 23:51:32.928 STEP: setting up pods to be used by tests - test/e2e_node/eviction_test.go:555 @ 03/20/23 23:52:02.955 < Exit [BeforeEach] TOP-LEVEL - test/e2e_node/eviction_test.go:549 @ 03/20/23 23:52:07.025 (34.097s) > Enter [It] should eventually evict all of the correct pods - test/e2e_node/eviction_test.go:563 @ 03/20/23 23:52:07.025 STEP: Waiting for node to have NodeCondition: NoPressure - test/e2e_node/eviction_test.go:564 @ 03/20/23 23:52:07.025 Mar 20 23:52:07.061: INFO: imageFsInfo.CapacityBytes: 20869787648, imageFsInfo.AvailableBytes: 15182643200 Mar 20 23:52:07.061: INFO: rootFsInfo.CapacityBytes: 20869787648, rootFsInfo.AvailableBytes: 15182643200 STEP: Waiting for evictions to occur - test/e2e_node/eviction_test.go:573 @ 03/20/23 23:52:07.061 Mar 20 23:52:07.121: INFO: Kubelet Metrics: [] Mar 20 23:52:07.144: INFO: imageFsInfo.CapacityBytes: 20869787648, imageFsInfo.AvailableBytes: 15182643200 Mar 20 23:52:07.144: INFO: rootFsInfo.CapacityBytes: 20869787648, rootFsInfo.AvailableBytes: 15182643200 Mar 20 23:52:07.148: INFO: fetching pod container-disk-below-sizelimit-pod; phase= Running Mar 20 23:52:07.149: INFO: fetching pod container-disk-limit-pod; phase= Running Mar 20 23:52:07.149: INFO: fetching pod container-emptydir-disk-limit-pod; phase= Running Mar 20 23:52:07.149: INFO: fetching pod emptydir-disk-below-sizelimit-pod; phase= Running Mar 20 23:52:07.149: INFO: fetching pod emptydir-disk-sizelimit-pod; phase= Running Mar 20 23:52:07.149: INFO: fetching pod emptydir-memory-sizelimit-pod; phase= Running STEP: checking eviction ordering and ensuring important pods don't fail - test/e2e_node/eviction_test.go:700 @ 03/20/23 23:52:07.149 Mar 20 23:52:09.173: INFO: Kubelet Metrics: [] Mar 20 23:52:09.198: INFO: imageFsInfo.CapacityBytes: 20869787648, imageFsInfo.AvailableBytes: 15018762240 Mar 20 23:52:09.198: INFO: rootFsInfo.CapacityBytes: 20869787648, rootFsInfo.AvailableBytes: 15018762240 Mar 20 23:52:09.204: INFO: fetching pod container-disk-below-sizelimit-pod; phase= Running Mar 20 23:52:09.205: INFO: fetching pod container-disk-limit-pod; phase= Running Mar 20 23:52:09.205: INFO: fetching pod container-emptydir-disk-limit-pod; phase= Running Mar 20 23:52:09.205: INFO: fetching pod emptydir-disk-below-sizelimit-pod; phase= Running Mar 20 23:52:09.205: INFO: fetching pod emptydir-disk-sizelimit-pod; phase= Running Mar 20 23:52:09.205: INFO: fetching pod emptydir-memory-sizelimit-pod; phase= Running STEP: checking eviction ordering and ensuring important pods don't fail - test/e2e_node/eviction_test.go:700 @ 03/20/23 23:52:09.205 Mar 20 23:52:11.224: INFO: Kubelet Metrics: [] Mar 20 23:52:11.242: INFO: imageFsInfo.CapacityBytes: 20869787648, imageFsInfo.AvailableBytes: 15018762240 Mar 20 23:52:11.242: INFO: 
rootFsInfo.CapacityBytes: 20869787648, rootFsInfo.AvailableBytes: 15018762240 Mar 20 23:52:11.245: INFO: fetching pod container-disk-below-sizelimit-pod; phase= Running Mar 20 23:52:11.246: INFO: fetching pod container-disk-limit-pod; phase= Running Mar 20 23:52:11.246: INFO: fetching pod container-emptydir-disk-limit-pod; phase= Running Mar 20 23:52:11.246: INFO: fetching pod emptydir-disk-below-sizelimit-pod; phase= Running Mar 20 23:52:11.246: INFO: fetching pod emptydir-disk-sizelimit-pod; phase= Running Mar 20 23:52:11.246: INFO: fetching pod emptydir-memory-sizelimit-pod; phase= Running STEP: checking eviction ordering and ensuring important pods don't fail - test/e2e_node/eviction_test.go:700 @ 03/20/23 23:52:11.247 Mar 20 23:52:13.266: INFO: Kubelet Metrics: [] Mar 20 23:52:13.284: INFO: imageFsInfo.CapacityBytes: 20869787648, imageFsInfo.AvailableBytes: 15018762240 Mar 20 23:52:13.284: INFO: rootFsInfo.CapacityBytes: 20869787648, rootFsInfo.AvailableBytes: 15018762240 Mar 20 23:52:13.290: INFO: fetching pod container-disk-below-sizelimit-pod; phase= Running Mar 20 23:52:13.291: INFO: fetching pod container-disk-limit-pod; phase= Running Mar 20 23:52:13.291: INFO: fetching pod container-emptydir-disk-limit-pod; phase= Running Mar 20 23:52:13.291: INFO: fetching pod emptydir-disk-below-sizelimit-pod; phase= Running Mar 20 23:52:13.291: INFO: fetching pod emptydir-disk-sizelimit-pod; phase= Running Mar 20 23:52:13.293: INFO: fetching pod emptydir-memory-sizelimit-pod; phase= Running STEP: checking eviction ordering and ensuring important pods don't fail - test/e2e_node/eviction_test.go:700 @ 03/20/23 23:52:13.293 Mar 20 23:52:15.316: INFO: Kubelet Metrics: [] Mar 20 23:52:15.353: INFO: imageFsInfo.CapacityBytes: 20869787648, imageFsInfo.AvailableBytes: 15018762240 Mar 20 23:52:15.353: INFO: rootFsInfo.CapacityBytes: 20869787648, rootFsInfo.AvailableBytes: 15018762240 Mar 20 23:52:15.353: INFO: Pod: container-disk-below-sizelimit-pod Mar 20 23:52:15.353: INFO: --- summary Container: container-disk-below-sizelimit-container UsedBytes: 0 Mar 20 23:52:15.358: INFO: fetching pod container-disk-below-sizelimit-pod; phase= Running Mar 20 23:52:15.358: INFO: fetching pod container-disk-limit-pod; phase= Running Mar 20 23:52:15.358: INFO: fetching pod container-emptydir-disk-limit-pod; phase= Running Mar 20 23:52:15.358: INFO: fetching pod emptydir-disk-below-sizelimit-pod; phase= Running Mar 20 23:52:15.358: INFO: fetching pod emptydir-disk-sizelimit-pod; phase= Running Mar 20 23:52:15.358: INFO: fetching pod emptydir-memory-sizelimit-pod; phase= Running STEP: checking eviction ordering and ensuring important pods don't fail - test/e2e_node/eviction_test.go:700 @ 03/20/23 23:52:15.358 Mar 20 23:52:17.373: INFO: Kubelet Metrics: [] Mar 20 23:52:17.387: INFO: imageFsInfo.CapacityBytes: 20869787648, imageFsInfo.AvailableBytes: 15018762240 Mar 20 23:52:17.387: INFO: rootFsInfo.CapacityBytes: 20869787648, rootFsInfo.AvailableBytes: 15018762240 Mar 20 23:52:17.387: INFO: Pod: container-emptydir-disk-limit-pod Mar 20 23:52:17.387: INFO: --- summary Container: container-emptydir-disk-limit-container UsedBytes: 0 Mar 20 23:52:17.387: INFO: --- summary Volume: test-volume UsedBytes: 30412800 Mar 20 23:52:17.387: INFO: Pod: emptydir-disk-sizelimit-pod Mar 20 23:52:17.388: INFO: --- summary Container: emptydir-disk-sizelimit-container UsedBytes: 0 Mar 20 23:52:17.388: INFO: --- summary Volume: test-volume UsedBytes: 27267072 Mar 20 23:52:17.388: INFO: Pod: container-disk-below-sizelimit-pod Mar 20 
23:52:17.388: INFO: --- summary Container: container-disk-below-sizelimit-container UsedBytes: 0 Mar 20 23:52:17.388: INFO: Pod: emptydir-disk-below-sizelimit-pod Mar 20 23:52:17.388: INFO: --- summary Container: emptydir-disk-below-sizelimit-container UsedBytes: 0 Mar 20 23:52:17.388: INFO: --- summary Volume: test-volume UsedBytes: 28315648 Mar 20 23:52:17.392: INFO: fetching pod container-disk-below-sizelimit-pod; phase= Running Mar 20 23:52:17.392: INFO: fetching pod container-disk-limit-pod; phase= Running Mar 20 23:52:17.392: INFO: fetching pod container-emptydir-disk-limit-pod; phase= Running Mar 20 23:52:17.392: INFO: fetching pod emptydir-disk-below-sizelimit-pod; phase= Running Mar 20 23:52:17.392: INFO: fetching pod emptydir-disk-sizelimit-pod; phase= Running Mar 20 23:52:17.392: INFO: fetching pod emptydir-memory-sizelimit-pod; phase= Running STEP: checking eviction ordering and ensuring important pods don't fail - test/e2e_node/eviction_test.go:700 @ 03/20/23 23:52:17.393 Mar 20 23:52:19.406: INFO: Kubelet Metrics: [] Mar 20 23:52:19.422: INFO: imageFsInfo.CapacityBytes: 20869787648, imageFsInfo.AvailableBytes: 14652297216 Mar 20 23:52:19.422: INFO: rootFsInfo.CapacityBytes: 20869787648, rootFsInfo.AvailableBytes: 14652297216 Mar 20 23:52:19.422: INFO: Pod: container-disk-below-sizelimit-pod Mar 20 23:52:19.422: INFO: --- summary Container: container-disk-below-sizelimit-container UsedBytes: 0 Mar 20 23:52:19.423: INFO: Pod: emptydir-disk-below-sizelimit-pod Mar 20 23:52:19.423: INFO: --- summary Container: emptydir-disk-below-sizelimit-container UsedBytes: 0 Mar 20 23:52:19.423: INFO: --- summary Volume: test-volume UsedBytes: 28315648 Mar 20 23:52:19.423: INFO: Pod: container-disk-limit-pod Mar 20 23:52:19.423: INFO: Pod: emptydir-disk-sizelimit-pod Mar 20 23:52:19.423: INFO: --- summary Container: emptydir-disk-sizelimit-container UsedBytes: 0 Mar 20 23:52:19.423: INFO: --- summary Volume: test-volume UsedBytes: 27267072 Mar 20 23:52:19.423: INFO: Pod: container-emptydir-disk-limit-pod Mar 20 23:52:19.424: INFO: --- summary Container: container-emptydir-disk-limit-container UsedBytes: 0 Mar 20 23:52:19.424: INFO: --- summary Volume: test-volume UsedBytes: 30412800 Mar 20 23:52:19.430: INFO: fetching pod container-disk-below-sizelimit-pod; phase= Running Mar 20 23:52:19.430: INFO: fetching pod container-disk-limit-pod; phase= Running Mar 20 23:52:19.430: INFO: fetching pod container-emptydir-disk-limit-pod; phase= Running Mar 20 23:52:19.431: INFO: fetching pod emptydir-disk-below-sizelimit-pod; phase= Running Mar 20 23:52:19.431: INFO: fetching pod emptydir-disk-sizelimit-pod; phase= Running Mar 20 23:52:19.431: INFO: fetching pod emptydir-memory-sizelimit-pod; phase= Running STEP: checking eviction ordering and ensuring important pods don't fail - test/e2e_node/eviction_test.go:700 @ 03/20/23 23:52:19.431 Mar 20 23:52:21.445: INFO: Kubelet Metrics: [] Mar 20 23:52:21.459: INFO: imageFsInfo.CapacityBytes: 20869787648, imageFsInfo.AvailableBytes: 14652297216 Mar 20 23:52:21.459: INFO: rootFsInfo.CapacityBytes: 20869787648, rootFsInfo.AvailableBytes: 14652297216 Mar 20 23:52:21.459: INFO: Pod: emptydir-disk-below-sizelimit-pod Mar 20 23:52:21.459: INFO: --- summary Container: emptydir-disk-below-sizelimit-container UsedBytes: 0 Mar 20 23:52:21.459: INFO: --- summary Volume: test-volume UsedBytes: 28315648 Mar 20 23:52:21.459: INFO: Pod: emptydir-disk-sizelimit-pod Mar 20 23:52:21.460: INFO: --- summary Container: emptydir-disk-sizelimit-container UsedBytes: 0 Mar 20 
23:52:21.460: INFO: --- summary Volume: test-volume UsedBytes: 27267072 Mar 20 23:52:21.460: INFO: Pod: container-disk-below-sizelimit-pod Mar 20 23:52:21.460: INFO: --- summary Container: container-disk-below-sizelimit-container UsedBytes: 0 Mar 20 23:52:21.460: INFO: Pod: container-emptydir-disk-limit-pod Mar 20 23:52:21.460: INFO: --- summary Container: container-emptydir-disk-limit-container UsedBytes: 0 Mar 20 23:52:21.460: INFO: --- summary Volume: test-volume UsedBytes: 30412800 Mar 20 23:52:21.461: INFO: Pod: container-disk-limit-pod Mar 20 23:52:21.461: INFO: --- summary Container: container-disk-limit-container UsedBytes: 0 Mar 20 23:52:21.461: INFO: Pod: emptydir-memory-sizelimit-pod Mar 20 23:52:21.461: INFO: --- summary Container: emptydir-memory-sizelimit-container UsedBytes: 0 Mar 20 23:52:21.461: INFO: --- summary Volume: test-volume UsedBytes: 26214400 Mar 20 23:52:21.465: INFO: fetching pod container-disk-below-sizelimit-pod; phase= Running Mar 20 23:52:21.465: INFO: fetching pod container-disk-limit-pod; phase= Running Mar 20 23:52:21.465: INFO: fetching pod container-emptydir-disk-limit-pod; phase= Running Mar 20 23:52:21.466: INFO: fetching pod emptydir-disk-below-sizelimit-pod; phase= Running Mar 20 23:52:21.466: INFO: fetching pod emptydir-disk-sizelimit-pod; phase= Running Mar 20 23:52:21.466: INFO: fetching pod emptydir-memory-sizelimit-pod; phase= Running STEP: checking eviction ordering and ensuring important pods don't fail - test/e2e_node/eviction_test.go:700 @ 03/20/23 23:52:21.466 Mar 20 23:52:23.492: INFO: Kubelet Metrics: [] Mar 20 23:52:23.513: INFO: imageFsInfo.CapacityBytes: 20869787648, imageFsInfo.AvailableBytes: 14652297216 Mar 20 23:52:23.513: INFO: rootFsInfo.CapacityBytes: 20869787648, rootFsInfo.AvailableBytes: 14652297216 Mar 20 23:52:23.513: INFO: Pod: container-disk-limit-pod Mar 20 23:52:23.514: INFO: --- summary Container: container-disk-limit-container UsedBytes: 0 Mar 20 23:52:23.514: INFO: Pod: emptydir-memory-sizelimit-pod Mar 20 23:52:23.514: INFO: --- summary Container: emptydir-memory-sizelimit-container UsedBytes: 0 Mar 20 23:52:23.514: INFO: --- summary Volume: test-volume UsedBytes: 26214400 Mar 20 23:52:23.514: INFO: Pod: container-emptydir-disk-limit-pod Mar 20 23:52:23.514: INFO: --- summary Container: container-emptydir-disk-limit-container UsedBytes: 0 Mar 20 23:52:23.514: INFO: --- summary Volume: test-volume UsedBytes: 105910272 Mar 20 23:52:23.514: INFO: Pod: emptydir-disk-sizelimit-pod Mar 20 23:52:23.514: INFO: --- summary Container: emptydir-disk-sizelimit-container UsedBytes: 0 Mar 20 23:52:23.515: INFO: --- summary Volume: test-volume UsedBytes: 105910272 Mar 20 23:52:23.515: INFO: Pod: emptydir-disk-below-sizelimit-pod Mar 20 23:52:23.515: INFO: --- summary Container: emptydir-disk-below-sizelimit-container UsedBytes: 0 Mar 20 23:52:23.515: INFO: --- summary Volume: test-volume UsedBytes: 28315648 Mar 20 23:52:23.515: INFO: Pod: container-disk-below-sizelimit-pod Mar 20 23:52:23.515: INFO: --- summary Container: container-disk-below-sizelimit-container UsedBytes: 0 Mar 20 23:52:23.518: INFO: fetching pod container-disk-below-sizelimit-pod; phase= Running Mar 20 23:52:23.519: INFO: fetching pod container-disk-limit-pod; phase= Running Mar 20 23:52:23.519: INFO: fetching pod container-emptydir-disk-limit-pod; phase= Running Mar 20 23:52:23.519: INFO: fetching pod emptydir-disk-below-sizelimit-pod; phase= Running Mar 20 23:52:23.519: INFO: fetching pod emptydir-disk-sizelimit-pod; phase= Running Mar 20 23:52:23.519: INFO: 
fetching pod emptydir-memory-sizelimit-pod; phase= Running STEP: checking eviction ordering and ensuring important pods don't fail - test/e2e_node/eviction_test.go:700 @ 03/20/23 23:52:23.519 Mar 20 23:52:25.533: INFO: Kubelet Metrics: [] Mar 20 23:52:25.546: INFO: imageFsInfo.CapacityBytes: 20869787648, imageFsInfo.AvailableBytes: 14652297216 Mar 20 23:52:25.546: INFO: rootFsInfo.CapacityBytes: 20869787648, rootFsInfo.AvailableBytes: 14652297216 Mar 20 23:52:25.546: INFO: Pod: container-disk-limit-pod Mar 20 23:52:25.547: INFO: --- summary Container: container-disk-limit-container UsedBytes: 0 Mar 20 23:52:25.547: INFO: Pod: container-disk-below-sizelimit-pod Mar 20 23:52:25.547: INFO: --- summary Container: container-disk-below-sizelimit-container UsedBytes: 0 Mar 20 23:52:25.547: INFO: Pod: emptydir-disk-sizelimit-pod Mar 20 23:52:25.547: INFO: --- summary Container: emptydir-disk-sizelimit-container UsedBytes: 0 Mar 20 23:52:25.547: INFO: --- summary Volume: test-volume UsedBytes: 105910272 Mar 20 23:52:25.547: INFO: Pod: container-emptydir-disk-limit-pod Mar 20 23:52:25.547: INFO: --- summary Container: container-emptydir-disk-limit-container UsedBytes: 0 Mar 20 23:52:25.548: INFO: --- summary Volume: test-volume UsedBytes: 105910272 Mar 20 23:52:25.548: INFO: Pod: emptydir-memory-sizelimit-pod Mar 20 23:52:25.548: INFO: --- summary Container: emptydir-memory-sizelimit-container UsedBytes: 0 Mar 20 23:52:25.548: INFO: --- summary Volume: test-volume UsedBytes: 104857600 Mar 20 23:52:25.548: INFO: Pod: emptydir-disk-below-sizelimit-pod Mar 20 23:52:25.548: INFO: --- summary Container: emptydir-disk-below-sizelimit-container UsedBytes: 0 Mar 20 23:52:25.548: INFO: --- summary Volume: test-volume UsedBytes: 103813120 Mar 20 23:52:25.551: INFO: fetching pod container-disk-below-sizelimit-pod; phase= Running Mar 20 23:52:25.552: INFO: fetching pod container-disk-limit-pod; phase= Running Mar 20 23:52:25.552: INFO: fetching pod container-emptydir-disk-limit-pod; phase= Running Mar 20 23:52:25.552: INFO: fetching pod emptydir-disk-below-sizelimit-pod; phase= Running Mar 20 23:52:25.552: INFO: fetching pod emptydir-disk-sizelimit-pod; phase= Running Mar 20 23:52:25.552: INFO: fetching pod emptydir-memory-sizelimit-pod; phase= Running STEP: checking eviction ordering and ensuring important pods don't fail - test/e2e_node/eviction_test.go:700 @ 03/20/23 23:52:25.552 Mar 20 23:52:27.567: INFO: Kubelet Metrics: [] Mar 20 23:52:27.580: INFO: imageFsInfo.CapacityBytes: 20869787648, imageFsInfo.AvailableBytes: 14652297216 Mar 20 23:52:27.580: INFO: rootFsInfo.CapacityBytes: 20869787648, rootFsInfo.AvailableBytes: 14652297216 Mar 20 23:52:27.580: INFO: Pod: emptydir-memory-sizelimit-pod Mar 20 23:52:27.580: INFO: --- summary Container: emptydir-memory-sizelimit-container UsedBytes: 0 Mar 20 23:52:27.581: INFO: --- summary Volume: test-volume UsedBytes: 104857600 Mar 20 23:52:27.581: INFO: Pod: emptydir-disk-below-sizelimit-pod Mar 20 23:52:27.581: INFO: --- summary Container: emptydir-disk-below-sizelimit-container UsedBytes: 0 Mar 20 23:52:27.581: INFO: --- summary Volume: test-volume UsedBytes: 103813120 Mar 20 23:52:27.581: INFO: Pod: emptydir-disk-sizelimit-pod Mar 20 23:52:27.581: INFO: --- summary Container: emptydir-disk-sizelimit-container UsedBytes: 0 Mar 20 23:52:27.581: INFO: --- summary Volume: test-volume UsedBytes: 105910272 Mar 20 23:52:27.581: INFO: Pod: container-disk-below-sizelimit-pod Mar 20 23:52:27.581: INFO: --- summary Container: container-disk-below-sizelimit-container 
UsedBytes: 0 Mar 20 23:52:27.581: INFO: Pod: container-emptydir-disk-limit-pod Mar 20 23:52:27.582: INFO: --- summary Container: container-emptydir-disk-limit-container UsedBytes: 0 Mar 20 23:52:27.582: INFO: --- summary Volume: test-volume UsedBytes: 105910272 Mar 20 23:52:27.582: INFO: Pod: container-disk-limit-pod Mar 20 23:52:27.582: INFO: --- summary Container: container-disk-limit-container UsedBytes: 0 Mar 20 23:52:27.585: INFO: fetching pod container-disk-below-sizelimit-pod; phase= Running Mar 20 23:52:27.585: INFO: fetching pod container-disk-limit-pod; phase= Running Mar 20 23:52:27.586: INFO: fetching pod container-emptydir-disk-limit-pod; phase= Running Mar 20 23:52:27.586: INFO: fetching pod emptydir-disk-below-sizelimit-pod; phase= Running Mar 20 23:52:27.586: INFO: fetching pod emptydir-disk-sizelimit-pod; phase= Running Mar 20 23:52:27.586: INFO: fetching pod emptydir-memory-sizelimit-pod; phase= Running STEP: checking eviction ordering and ensuring important pods don't fail - test/e2e_node/eviction_test.go:700 @ 03/20/23 23:52:27.586 Mar 20 23:52:29.608: INFO: Kubelet Metrics: [] Mar 20 23:52:29.620: INFO: imageFsInfo.CapacityBytes: 20869787648, imageFsInfo.AvailableBytes: 14653763584 Mar 20 23:52:29.621: INFO: rootFsInfo.CapacityBytes: 20869787648, rootFsInfo.AvailableBytes: 14653763584 Mar 20 23:52:29.621: INFO: Pod: container-disk-below-sizelimit-pod Mar 20 23:52:29.621: INFO: --- summary Container: container-disk-below-sizelimit-container UsedBytes: 0 Mar 20 23:52:29.621: INFO: Pod: container-emptydir-disk-limit-pod Mar 20 23:52:29.621: INFO: --- summary Container: container-emptydir-disk-limit-container UsedBytes: 0 Mar 20 23:52:29.621: INFO: --- summary Volume: test-volume UsedBytes: 105910272 Mar 20 23:52:29.621: INFO: Pod: container-disk-limit-pod Mar 20 23:52:29.622: INFO: --- summary Container: container-disk-limit-container UsedBytes: 0 Mar 20 23:52:29.622: INFO: Pod: emptydir-disk-below-sizelimit-pod Mar 20 23:52:29.622: INFO: --- summary Container: emptydir-disk-below-sizelimit-container UsedBytes: 0 Mar 20 23:52:29.622: INFO: --- summary Volume: test-volume UsedBytes: 103813120 Mar 20 23:52:29.622: INFO: Pod: emptydir-disk-sizelimit-pod Mar 20 23:52:29.622: INFO: --- summary Container: emptydir-disk-sizelimit-container UsedBytes: 0 Mar 20 23:52:29.622: INFO: --- summary Volume: test-volume UsedBytes: 105910272 Mar 20 23:52:29.622: INFO: Pod: emptydir-memory-sizelimit-pod Mar 20 23:52:29.623: INFO: --- summary Container: emptydir-memory-sizelimit-container UsedBytes: 0 Mar 20 23:52:29.623: INFO: --- summary Volume: test-volume UsedBytes: 104857600 Mar 20 23:52:29.626: INFO: fetching pod container-disk-below-sizelimit-pod; phase= Running Mar 20 23:52:29.626: INFO: fetching pod container-disk-limit-pod; phase= Running Mar 20 23:52:29.626: INFO: fetching pod container-emptydir-disk-limit-pod; phase= Running Mar 20 23:52:29.626: INFO: fetching pod emptydir-disk-below-sizelimit-pod; phase= Running Mar 20 23:52:29.626: INFO: fetching pod emptydir-disk-sizelimit-pod; phase= Running Mar 20 23:52:29.626: INFO: fetching pod emptydir-memory-sizelimit-pod; phase= Running STEP: checking eviction ordering and ensuring important pods don't fail - test/e2e_node/eviction_test.go:700 @ 03/20/23 23:52:29.626 Mar 20 23:52:31.640: INFO: Kubelet Metrics: [] Mar 20 23:52:31.653: INFO: imageFsInfo.CapacityBytes: 20869787648, imageFsInfo.AvailableBytes: 14653763584 Mar 20 23:52:31.653: INFO: rootFsInfo.CapacityBytes: 20869787648, rootFsInfo.AvailableBytes: 14653763584 Mar 20 
23:52:31.653: INFO: Pod: emptydir-disk-sizelimit-pod Mar 20 23:52:31.653: INFO: --- summary Container: emptydir-disk-sizelimit-container UsedBytes: 0 Mar 20 23:52:31.653: INFO: --- summary Volume: test-volume UsedBytes: 105910272 Mar 20 23:52:31.653: INFO: Pod: container-emptydir-disk-limit-pod Mar 20 23:52:31.653: INFO: --- summary Container: container-emptydir-disk-limit-container UsedBytes: 0 Mar 20 23:52:31.653: INFO: --- summary Volume: test-volume UsedBytes: 105910272 Mar 20 23:52:31.653: INFO: Pod: emptydir-disk-below-sizelimit-pod Mar 20 23:52:31.653: INFO: --- summary Container: emptydir-disk-below-sizelimit-container UsedBytes: 0 Mar 20 23:52:31.653: INFO: --- summary Volume: test-volume UsedBytes: 103813120 Mar 20 23:52:31.653: INFO: Pod: emptydir-memory-sizelimit-pod Mar 20 23:52:31.653: INFO: --- summary Container: emptydir-memory-sizelimit-container UsedBytes: 0 Mar 20 23:52:31.653: INFO: --- summary Volume: test-volume UsedBytes: 104857600 Mar 20 23:52:31.653: INFO: Pod: container-disk-limit-pod Mar 20 23:52:31.653: INFO: --- summary Container: container-disk-limit-container UsedBytes: 0 Mar 20 23:52:31.653: INFO: Pod: container-disk-below-sizelimit-pod Mar 20 23:52:31.653: INFO: --- summary Container: container-disk-below-sizelimit-container UsedBytes: 0 Mar 20 23:52:31.656: INFO: fetching pod container-disk-below-sizelimit-pod; phase= Running Mar 20 23:52:31.656: INFO: fetching pod container-disk-limit-pod; phase= Running Mar 20 23:52:31.656: INFO: fetching pod container-emptydir-disk-limit-pod; phase= Running Mar 20 23:52:31.656: INFO: fetching pod emptydir-disk-below-sizelimit-pod; phase= Running Mar 20 23:52:31.656: INFO: fetching pod emptydir-disk-sizelimit-pod; phase= Running Mar 20 23:52:31.656: INFO: fetching pod emptydir-memory-sizelimit-pod; phase= Running STEP: checking eviction ordering and ensuring important pods don't fail - test/e2e_node/eviction_test.go:700 @ 03/20/23 23:52:31.656 Mar 20 23:52:33.685: INFO: Kubelet Metrics: [] Mar 20 23:52:33.699: INFO: imageFsInfo.CapacityBytes: 20869787648, imageFsInfo.AvailableBytes: 14653763584 Mar 20 23:52:33.700: INFO: rootFsInfo.CapacityBytes: 20869787648, rootFsInfo.AvailableBytes: 14653763584 Mar 20 23:52:33.700: INFO: Pod: container-emptydir-disk-limit-pod Mar 20 23:52:33.700: INFO: --- summary Container: container-emptydir-disk-limit-container UsedBytes: 0 Mar 20 23:52:33.700: INFO: --- summary Volume: test-volume UsedBytes: 105910272 Mar 20 23:52:33.700: INFO: Pod: container-disk-limit-pod Mar 20 23:52:33.700: INFO: --- summary Container: container-disk-limit-container UsedBytes: 0 Mar 20 23:52:33.700: INFO: Pod: emptydir-disk-below-sizelimit-pod Mar 20 23:52:33.700: INFO: --- summary Container: emptydir-disk-below-sizelimit-container UsedBytes: 0 Mar 20 23:52:33.700: INFO: --- summary Volume: test-volume UsedBytes: 103813120 Mar 20 23:52:33.701: INFO: Pod: container-disk-below-sizelimit-pod Mar 20 23:52:33.701: INFO: --- summary Container: container-disk-below-sizelimit-container UsedBytes: 0 Mar 20 23:52:33.701: INFO: Pod: emptydir-memory-sizelimit-pod Mar 20 23:52:33.701: INFO: --- summary Container: emptydir-memory-sizelimit-container UsedBytes: 0 Mar 20 23:52:33.701: INFO: --- summary Volume: test-volume UsedBytes: 104857600 Mar 20 23:52:33.701: INFO: Pod: emptydir-disk-sizelimit-pod Mar 20 23:52:33.701: INFO: --- summary Container: emptydir-disk-sizelimit-container UsedBytes: 0 Mar 20 23:52:33.701: INFO: --- summary Volume: test-volume UsedBytes: 105910272 Mar 20 23:52:33.705: INFO: fetching pod 
container-disk-below-sizelimit-pod; phase= Running Mar 20 23:52:33.705: INFO: fetching pod container-disk-limit-pod; phase= Running Mar 20 23:52:33.705: INFO: fetching pod container-emptydir-disk-limit-pod; phase= Running Mar 20 23:52:33.705: INFO: fetching pod emptydir-disk-below-sizelimit-pod; phase= Running Mar 20 23:52:33.705: INFO: fetching pod emptydir-disk-sizelimit-pod; phase= Running Mar 20 23:52:33.705: INFO: fetching pod emptydir-memory-sizelimit-pod; phase= Running STEP: checking eviction ordering and ensuring important pods don't fail - test/e2e_node/eviction_test.go:700 @ 03/20/23 23:52:33.705 [TIMEDOUT] A suite timeout occurred In [It] at: test/e2e_node/eviction_test.go:563 @ 03/20/23 23:52:34.211 This is the Progress Report generated when the suite timeout occurred: [sig-node] LocalStorageCapacityIsolationEviction [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolation][NodeFeature:Eviction] when we run containers that should cause evictions due to pod local storage violations should eventually evict all of the correct pods (Spec Runtime: 1m6.503s) test/e2e_node/eviction_test.go:563 In [It] (Node Runtime: 27.187s) test/e2e_node/eviction_test.go:563 At [By Step] checking eviction ordering and ensuring important pods don't fail (Step Runtime: 506ms) test/e2e_node/eviction_test.go:700 Spec Goroutine goroutine 8226 [select] k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc00094f7a0, {0x5b8fd00?, 0x8841898}, 0x1, {0x0, 0x0, 0x0}) vendor/github.com/onsi/gomega/internal/async_assertion.go:538 k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc00094f7a0, {0x5b8fd00, 0x8841898}, {0x0, 0x0, 0x0}) vendor/github.com/onsi/gomega/internal/async_assertion.go:145 > k8s.io/kubernetes/test/e2e_node.runEvictionTest.func1.2({0x7fa5f81c5d00?, 0xc0018b24e0}) test/e2e_node/eviction_test.go:585 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x5baef40?, 0xc0018b24e0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:456 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:863 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:850 Begin Additional Progress Reports >> Expected success, but got an error: <*errors.errorString | 0xc001d1e5f0>: pods that should be evicted are still running: []string{"emptydir-disk-sizelimit-pod", "container-disk-limit-pod", "container-emptydir-disk-limit-pod"} { s: "pods that should be evicted are still running: []string{\"emptydir-disk-sizelimit-pod\", \"container-disk-limit-pod\", \"container-emptydir-disk-limit-pod\"}", } << End Additional Progress Reports Goroutines of Interest goroutine 1 [chan receive, 60 minutes] testing.(*T).Run(0xc0000eb380, {0x52fc376?, 0x53e765?}, 0x558a8e8) /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/testing/testing.go:1630 testing.runTests.func1(0x8812380?) 
/go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/testing/testing.go:2036 testing.tRunner(0xc0000eb380, 0xc0009ffb78) /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/testing/testing.go:1576 testing.runTests(0xc0009590e0?, {0x864cd70, 0x1, 0x1}, {0x88438a0?, 0xc000654750?, 0x0?}) /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/testing/testing.go:2034 testing.(*M).Run(0xc0009590e0) /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/testing/testing.go:1906 > k8s.io/kubernetes/test/e2e_node.TestMain(0xffffffffffffffff?) test/e2e_node/e2e_node_suite_test.go:145 main.main() /tmp/go-build3463134305/b001/_testmain.go:49 goroutine 245 [syscall, 59 minutes] syscall.Syscall6(0x100?, 0xc000de8cd8?, 0x6fc80d?, 0x1?, 0x52152e0?, 0xc0006f49a0?, 0x47edfe?) /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/syscall/syscall_linux.go:91 os.(*Process).blockUntilWaitable(0xc001308e10) /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/os/wait_waitid.go:32 os.(*Process).wait(0xc001308e10) /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/os/exec_unix.go:22 os.(*Process).Wait(...) /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/os/exec.go:132 os/exec.(*Cmd).Wait(0xc0003cfb80) /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/os/exec/exec.go:890 > k8s.io/kubernetes/test/e2e_node/services.(*server).start.func1() test/e2e_node/services/server.go:166 > k8s.io/kubernetes/test/e2e_node/services.(*server).start test/e2e_node/services/server.go:123 [FAILED] A suite timeout occurred and then the following failure was recorded in the timedout node before it exited: Context was cancelled after 27.154s. 
Expected success, but got an error: <*errors.errorString | 0xc001d1e5f0>: pods that should be evicted are still running: []string{"emptydir-disk-sizelimit-pod", "container-disk-limit-pod", "container-emptydir-disk-limit-pod"} { s: "pods that should be evicted are still running: []string{\"emptydir-disk-sizelimit-pod\", \"container-disk-limit-pod\", \"container-emptydir-disk-limit-pod\"}", } In [It] at: test/e2e_node/eviction_test.go:585 @ 03/20/23 23:52:34.215 < Exit [It] should eventually evict all of the correct pods - test/e2e_node/eviction_test.go:563 @ 03/20/23 23:52:34.216 (27.191s) > Enter [AfterEach] TOP-LEVEL - test/e2e_node/eviction_test.go:620 @ 03/20/23 23:52:34.216 STEP: deleting pods - test/e2e_node/eviction_test.go:631 @ 03/20/23 23:52:34.216 STEP: deleting pod: emptydir-disk-sizelimit-pod - test/e2e_node/eviction_test.go:633 @ 03/20/23 23:52:34.216 STEP: deleting pod: container-disk-limit-pod - test/e2e_node/eviction_test.go:633 @ 03/20/23 23:53:00.254 [TIMEDOUT] A grace period timeout occurred In [AfterEach] at: test/e2e_node/eviction_test.go:620 @ 03/20/23 23:53:04.216 This is the Progress Report generated when the grace period timeout occurred: [sig-node] LocalStorageCapacityIsolationEviction [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolation][NodeFeature:Eviction] when we run containers that should cause evictions due to pod local storage violations should eventually evict all of the correct pods (Spec Runtime: 1m36.508s) test/e2e_node/eviction_test.go:563 In [AfterEach] (Node Runtime: 30s) test/e2e_node/eviction_test.go:620 At [By Step] deleting pod: container-disk-limit-pod (Step Runtime: 3.962s) test/e2e_node/eviction_test.go:633 Spec Goroutine goroutine 8313 [select] k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc000409180, {0x5b8fa90?, 0x8841898}, 0x1, {0x0, 0x0, 0x0}) vendor/github.com/onsi/gomega/internal/async_assertion.go:538 k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc000409180, {0x5b8fa90, 0x8841898}, {0x0, 0x0, 0x0}) vendor/github.com/onsi/gomega/internal/async_assertion.go:145 k8s.io/kubernetes/test/e2e/framework.asyncAssertion.Should({{0x7fa5f81c5d00, 0xc001d96db0}, {0xc001874dc0, 0x1, 0x1}, 0x8bb2c97000, 0x77359400, 0x0}, {0x5b8fa90, 0x8841898}) test/e2e/framework/expect.go:234 k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodNotFoundInNamespace({0x7fa5f81c5d00, 0xc001d96db0}, {0x5be7b50?, 0xc001901040}, {0xc0009552f0, 0x18}, {0xc001c7d3e0, 0x1f}, 0x0?) test/e2e/framework/pod/wait.go:538 k8s.io/kubernetes/test/e2e/framework/pod.(*PodClient).DeleteSync(0xc001b3e5a0, {0x7fa5f81c5d00, 0xc001d96db0}, {0xc0009552f0, 0x18}, {{{0x0, 0x0}, {0x0, 0x0}}, 0x0, ...}, ...) 
test/e2e/framework/pod/pod_client.go:184 > k8s.io/kubernetes/test/e2e_node.runEvictionTest.func1.3({0x7fa5f81c5d00?, 0xc001d96db0}) test/e2e_node/eviction_test.go:634 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x5baef40?, 0xc001d96db0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:456 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:863 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:850 Begin Additional Progress Reports >> Expected <*v1.Pod | 0xc00099b200>: metadata: creationTimestamp: "2023-03-20T23:52:02Z" deletionGracePeriodSeconds: 30 deletionTimestamp: "2023-03-20T23:53:30Z" managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:spec: f:containers: k:{"name":"container-disk-limit-container"}: .: {} f:command: {} f:image: {} f:imagePullPolicy: {} f:name: {} f:resources: .: {} f:limits: .: {} f:ephemeral-storage: {} f:requests: .: {} f:ephemeral-storage: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:nodeName: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: {} f:terminationGracePeriodSeconds: {} manager: e2e_node.test operation: Update time: "2023-03-20T23:52:02Z" - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: .: {} k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"PodScheduled"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:phase: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.85.0.25"}: .: {} f:ip: {} f:startTime: {} manager: kubelet operation: Update subresource: status time: "2023-03-20T23:52:05Z" name: container-disk-limit-pod namespace: localstorage-eviction-test-8283 resourceVersion: "1709" uid: 70bab6ab-22f4-4698-bdf5-bf8357c65be1 spec: containers: - command: - sh - -c - i=0; while [ $i -lt 101 ]; do dd if=/dev/urandom of=file${i} bs=1048576 count=1 2>/dev/null; sleep .1; i=$(($i+1)); done; while true; do sleep 5; done image: registry.k8s.io/e2e-test-images/busybox:1.29-4 imagePullPolicy: Never name: container-disk-limit-container resources: limits: ephemeral-storage: 100Mi requests: ephemeral-storage: 100Mi terminationMessagePath: /dev/termination-log terminationMessagePolicy: File dnsPolicy: Default enableServiceLinks: true nodeName: tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Never schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 status: conditions: - lastProbeTime: null lastTransitionTime: "2023-03-20T23:52:03Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2023-03-20T23:52:05Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2023-03-20T23:52:05Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2023-03-20T23:52:03Z" status: "True" 
type: PodScheduled containerStatuses: - containerID: cri-o://dc9b9414deff39f9d21a6b314ab672ff553023b0fb461988041ecb4f0406e4a1 image: registry.k8s.io/e2e-test-images/busybox:1.29-4 imageID: registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 lastState: {} name: container-disk-limit-container ready: true restartCount: 0 started: true state: running: startedAt: "2023-03-20T23:52:05Z" hostIP: 10.138.0.71 phase: Running podIP: 10.85.0.25 podIPs: - ip: 10.85.0.25 qosClass: BestEffort startTime: "2023-03-20T23:52:03Z" to be nil << End Additional Progress Reports Goroutines of Interest goroutine 1 [chan receive, 60 minutes] testing.(*T).Run(0xc0000eb380, {0x52fc376?, 0x53e765?}, 0x558a8e8) /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/testing/testing.go:1630 testing.runTests.func1(0x8812380?) /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/testing/testing.go:2036 testing.tRunner(0xc0000eb380, 0xc0009ffb78) /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/testing/testing.go:1576 testing.runTests(0xc0009590e0?, {0x864cd70, 0x1, 0x1}, {0x88438a0?, 0xc000654750?, 0x0?}) /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/testing/testing.go:2034 testing.(*M).Run(0xc0009590e0) /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/testing/testing.go:1906 > k8s.io/kubernetes/test/e2e_node.TestMain(0xffffffffffffffff?) test/e2e_node/e2e_node_suite_test.go:145 main.main() /tmp/go-build3463134305/b001/_testmain.go:49 goroutine 245 [syscall, 60 minutes] syscall.Syscall6(0x100?, 0xc000de8cd8?, 0x6fc80d?, 0x1?, 0x52152e0?, 0xc0006f49a0?, 0x47edfe?) /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/syscall/syscall_linux.go:91 os.(*Process).blockUntilWaitable(0xc001308e10) /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/os/wait_waitid.go:32 os.(*Process).wait(0xc001308e10) /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/os/exec_unix.go:22 os.(*Process).Wait(...) /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/os/exec.go:132 os/exec.(*Cmd).Wait(0xc0003cfb80) /go/src/k8s.io/kubernetes/_output/local/.gimme/versions/go1.20.2.linux.amd64/src/os/exec/exec.go:890 > k8s.io/kubernetes/test/e2e_node/services.(*server).start.func1() test/e2e_node/services/server.go:166 > k8s.io/kubernetes/test/e2e_node/services.(*server).start test/e2e_node/services/server.go:123 [FAILED] A grace period timeout occurred and then the following failure was recorded in the timedout node before it exited: wait for pod "container-disk-limit-pod" to disappear: expected pod to not be found: Context was cancelled after 3.960s. 
Expected <*v1.Pod | 0xc00099b200>: metadata: creationTimestamp: "2023-03-20T23:52:02Z" deletionGracePeriodSeconds: 30 deletionTimestamp: "2023-03-20T23:53:30Z" managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:spec: f:containers: k:{"name":"container-disk-limit-container"}: .: {} f:command: {} f:image: {} f:imagePullPolicy: {} f:name: {} f:resources: .: {} f:limits: .: {} f:ephemeral-storage: {} f:requests: .: {} f:ephemeral-storage: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:nodeName: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: {} f:terminationGracePeriodSeconds: {} manager: e2e_node.test operation: Update time: "2023-03-20T23:52:02Z" - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: .: {} k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"PodScheduled"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:phase: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.85.0.25"}: .: {} f:ip: {} f:startTime: {} manager: kubelet operation: Update subresource: status time: "2023-03-20T23:52:05Z" name: container-disk-limit-pod namespace: localstorage-eviction-test-8283 resourceVersion: "1709" uid: 70bab6ab-22f4-4698-bdf5-bf8357c65be1 spec: containers: - command: - sh - -c - i=0; while [ $i -lt 101 ]; do dd if=/dev/urandom of=file${i} bs=1048576 count=1 2>/dev/null; sleep .1; i=$(($i+1)); done; while true; do sleep 5; done image: registry.k8s.io/e2e-test-images/busybox:1.29-4 imagePullPolicy: Never name: container-disk-limit-container resources: limits: ephemeral-storage: 100Mi requests: ephemeral-storage: 100Mi terminationMessagePath: /dev/termination-log terminationMessagePolicy: File dnsPolicy: Default enableServiceLinks: true nodeName: tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Never schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 status: conditions: - lastProbeTime: null lastTransitionTime: "2023-03-20T23:52:03Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2023-03-20T23:52:05Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2023-03-20T23:52:05Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2023-03-20T23:52:03Z" status: "True" type: PodScheduled containerStatuses: - containerID: cri-o://dc9b9414deff39f9d21a6b314ab672ff553023b0fb461988041ecb4f0406e4a1 image: registry.k8s.io/e2e-test-images/busybox:1.29-4 imageID: registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 lastState: {} name: container-disk-limit-container ready: true restartCount: 0 started: true state: running: startedAt: "2023-03-20T23:52:05Z" hostIP: 10.138.0.71 phase: Running podIP: 10.85.0.25 podIPs: - ip: 10.85.0.25 qosClass: BestEffort startTime: "2023-03-20T23:52:03Z" to be nil In [AfterEach] at: test/e2e/framework/pod/pod_client.go:184 
@ 03/20/23 23:53:04.224 Mar 20 23:53:04.223: INFO: Failed inside E2E framework: k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodNotFoundInNamespace({0x7fa5f81c5d00, 0xc001d96db0}, {0x5be7b50?, 0xc001901040}, {0xc0009552f0, 0x18}, {0xc001c7d3e0, 0x1f}, 0x0?) test/e2e/framework/pod/wait.go:538 +0x190 k8s.io/kubernetes/test/e2e/framework/pod.(*PodClient).DeleteSync(0xc001b3e5a0, {0x7fa5f81c5d00, 0xc001d96db0}, {0xc0009552f0, 0x18}, {{{0x0, 0x0}, {0x0, 0x0}}, 0x0, ...}, ...) test/e2e/framework/pod/pod_client.go:184 +0x1db k8s.io/kubernetes/test/e2e_node.runEvictionTest.func1.3({0x7fa5f81c5d00?, 0xc001d96db0}) test/e2e_node/eviction_test.go:634 +0x1dd < Exit [AfterEach] TOP-LEVEL - test/e2e_node/eviction_test.go:620 @ 03/20/23 23:53:04.224 (30.008s) > Enter [AfterEach] when we run containers that should cause evictions due to pod local storage violations - test/e2e_node/util.go:190 @ 03/20/23 23:53:04.224 STEP: Stopping the kubelet - test/e2e_node/util.go:200 @ 03/20/23 23:53:04.224 Mar 20 23:53:04.261: INFO: Get running kubelet with systemctl: UNIT LOAD ACTIVE SUB DESCRIPTION kubelet-20230320T225220.service loaded active running /tmp/node-e2e-20230320T225220/kubelet --kubeconfig /tmp/node-e2e-20230320T225220/kubeconfig --root-dir /var/lib/kubelet --v 4 --hostname-override tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 --container-runtime-endpoint unix:///var/run/crio/crio.sock --config /tmp/node-e2e-20230320T225220/kubelet-config --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service LOAD = Reflects whether the unit definition was properly loaded. ACTIVE = The high-level unit activation state, i.e. generalization of SUB. SUB = The low-level unit activation state, values depend on unit type. 1 loaded units listed. 
, kubelet-20230320T225220 W0320 23:53:04.330414 2678 util.go:477] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:45584->127.0.0.1:10248: read: connection reset by peer STEP: Starting the kubelet - test/e2e_node/util.go:216 @ 03/20/23 23:53:04.34 W0320 23:53:04.416443 2678 util.go:477] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused < Exit [AfterEach] when we run containers that should cause evictions due to pod local storage violations - test/e2e_node/util.go:190 @ 03/20/23 23:53:09.423 (5.199s) > Enter [AfterEach] [sig-node] LocalStorageCapacityIsolationEviction [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolation][NodeFeature:Eviction] - test/e2e/framework/node/init/init.go:33 @ 03/20/23 23:53:09.423 Mar 20 23:53:09.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-node] LocalStorageCapacityIsolationEviction [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolation][NodeFeature:Eviction] - test/e2e/framework/node/init/init.go:33 @ 03/20/23 23:53:09.425 (3ms) > Enter [DeferCleanup (Each)] [sig-node] LocalStorageCapacityIsolationEviction [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolation][NodeFeature:Eviction] - test/e2e/framework/metrics/init/init.go:35 @ 03/20/23 23:53:09.425 < Exit [DeferCleanup (Each)] [sig-node] LocalStorageCapacityIsolationEviction [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolation][NodeFeature:Eviction] - test/e2e/framework/metrics/init/init.go:35 @ 03/20/23 23:53:09.425 (0s) > Enter [DeferCleanup (Each)] [sig-node] LocalStorageCapacityIsolationEviction [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolation][NodeFeature:Eviction] - dump namespaces | framework.go:209 @ 03/20/23 23:53:09.425 STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/20/23 23:53:09.425 STEP: Collecting events from namespace "localstorage-eviction-test-8283". - test/e2e/framework/debug/dump.go:42 @ 03/20/23 23:53:09.425 STEP: Found 25 events. 
- test/e2e/framework/debug/dump.go:46 @ 03/20/23 23:53:09.428 Mar 20 23:53:09.428: INFO: At 2023-03-20 23:52:03 +0000 UTC - event for container-disk-below-sizelimit-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Mar 20 23:53:09.428: INFO: At 2023-03-20 23:52:04 +0000 UTC - event for container-disk-below-sizelimit-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Created: Created container container-disk-below-sizelimit-container Mar 20 23:53:09.428: INFO: At 2023-03-20 23:52:04 +0000 UTC - event for container-disk-below-sizelimit-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Started: Started container container-disk-below-sizelimit-container Mar 20 23:53:09.428: INFO: At 2023-03-20 23:52:04 +0000 UTC - event for container-disk-limit-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Mar 20 23:53:09.428: INFO: At 2023-03-20 23:52:04 +0000 UTC - event for container-emptydir-disk-limit-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Mar 20 23:53:09.428: INFO: At 2023-03-20 23:52:04 +0000 UTC - event for container-emptydir-disk-limit-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Created: Created container container-emptydir-disk-limit-container Mar 20 23:53:09.428: INFO: At 2023-03-20 23:52:04 +0000 UTC - event for emptydir-disk-below-sizelimit-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Created: Created container emptydir-disk-below-sizelimit-container Mar 20 23:53:09.428: INFO: At 2023-03-20 23:52:04 +0000 UTC - event for emptydir-disk-below-sizelimit-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Mar 20 23:53:09.428: INFO: At 2023-03-20 23:52:04 +0000 UTC - event for emptydir-disk-sizelimit-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Mar 20 23:53:09.428: INFO: At 2023-03-20 23:52:05 +0000 UTC - event for container-disk-limit-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Created: Created container container-disk-limit-container Mar 20 23:53:09.428: INFO: At 2023-03-20 23:52:05 +0000 UTC - event for container-disk-limit-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Started: Started container container-disk-limit-container Mar 20 23:53:09.428: INFO: At 2023-03-20 23:52:05 +0000 UTC - event for container-emptydir-disk-limit-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Started: Started container container-emptydir-disk-limit-container Mar 20 23:53:09.428: INFO: At 2023-03-20 23:52:05 +0000 UTC - event for emptydir-disk-below-sizelimit-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Started: Started container emptydir-disk-below-sizelimit-container Mar 20 23:53:09.428: INFO: At 2023-03-20 23:52:05 +0000 UTC - event for emptydir-disk-sizelimit-pod: {kubelet 
tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Created: Created container emptydir-disk-sizelimit-container Mar 20 23:53:09.428: INFO: At 2023-03-20 23:52:05 +0000 UTC - event for emptydir-disk-sizelimit-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Started: Started container emptydir-disk-sizelimit-container Mar 20 23:53:09.428: INFO: At 2023-03-20 23:52:05 +0000 UTC - event for emptydir-memory-sizelimit-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Created: Created container emptydir-memory-sizelimit-container Mar 20 23:53:09.428: INFO: At 2023-03-20 23:52:05 +0000 UTC - event for emptydir-memory-sizelimit-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Mar 20 23:53:09.428: INFO: At 2023-03-20 23:52:05 +0000 UTC - event for emptydir-memory-sizelimit-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Started: Started container emptydir-memory-sizelimit-container Mar 20 23:53:09.428: INFO: At 2023-03-20 23:52:28 +0000 UTC - event for emptydir-disk-sizelimit-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Killing: Stopping container emptydir-disk-sizelimit-container Mar 20 23:53:09.428: INFO: At 2023-03-20 23:52:28 +0000 UTC - event for emptydir-disk-sizelimit-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Evicted: Usage of EmptyDir volume "test-volume" exceeds the limit "100Mi". Mar 20 23:53:09.428: INFO: At 2023-03-20 23:52:38 +0000 UTC - event for container-emptydir-disk-limit-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Killing: Stopping container container-emptydir-disk-limit-container Mar 20 23:53:09.428: INFO: At 2023-03-20 23:52:38 +0000 UTC - event for container-emptydir-disk-limit-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Evicted: Pod ephemeral local storage usage exceeds the total limit of containers 100Mi. Mar 20 23:53:09.428: INFO: At 2023-03-20 23:52:38 +0000 UTC - event for emptydir-disk-sizelimit-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} ExceededGracePeriod: Container runtime did not kill the pod within specified grace period. Mar 20 23:53:09.428: INFO: At 2023-03-20 23:52:48 +0000 UTC - event for container-emptydir-disk-limit-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} ExceededGracePeriod: Container runtime did not kill the pod within specified grace period. 
Mar 20 23:53:09.428: INFO: At 2023-03-20 23:53:00 +0000 UTC - event for container-disk-limit-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Killing: Stopping container container-disk-limit-container Mar 20 23:53:09.431: INFO: POD NODE PHASE GRACE CONDITIONS Mar 20 23:53:09.431: INFO: container-disk-below-sizelimit-pod tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 23:52:02 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 23:52:05 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 23:52:05 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 23:52:02 +0000 UTC }] Mar 20 23:53:09.431: INFO: container-emptydir-disk-limit-pod tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 Failed [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 23:52:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-20 23:53:05 +0000 UTC PodFailed } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-20 23:53:05 +0000 UTC PodFailed } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 23:52:03 +0000 UTC }] Mar 20 23:53:09.431: INFO: emptydir-disk-below-sizelimit-pod tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 23:52:03 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 23:52:05 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 23:52:05 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 23:52:03 +0000 UTC }] Mar 20 23:53:09.431: INFO: emptydir-memory-sizelimit-pod tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 23:52:02 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 23:52:05 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 23:52:05 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 23:52:02 +0000 UTC }] Mar 20 23:53:09.431: INFO: Mar 20 23:53:09.449: INFO: Logging node info for node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 Mar 20 23:53:09.450: INFO: Node Info: &Node{ObjectMeta:{tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 8a7dfb05-f333-4f40-9f20-a57ae3eaa3a4 1721 0 2023-03-20 22:54:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-20 22:54:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2023-03-20 23:53:04 +0000 UTC FieldsV1 
{"f:status":{"f:allocatable":{"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20869787648 0} {<nil>} 20380652Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3841228800 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{18782808853 0} {<nil>} 18782808853 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3579084800 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-20 23:53:04 +0000 UTC,LastTransitionTime:2023-03-20 23:44:09 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-20 23:53:04 +0000 UTC,LastTransitionTime:2023-03-20 23:34:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-20 23:53:04 +0000 UTC,LastTransitionTime:2023-03-20 22:54:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-20 23:53:04 +0000 UTC,LastTransitionTime:2023-03-20 23:53:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.71,},NodeAddress{Type:Hostname,Address:tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:42e8e67821f5970370e2e1f0c1acea4f,SystemUUID:42e8e678-21f5-9703-70e2-e1f0c1acea4f,BootID:f0916da1-6860-4791-9f27-ed232e503da7,KernelVersion:6.1.11-200.fc37.x86_64,OSImage:Fedora CoreOS 37.20230218.3.0,ContainerRuntimeVersion:cri-o://1.26.0,KubeletVersion:v1.27.0-beta.0.29+c9ff2866682432,KubeProxyVersion:v1.27.0-beta.0.29+c9ff2866682432,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/perl@sha256:c613344cdd31c5055961b078f831ef9d9199fc9111efe6e81bea3f00d78bd979 registry.k8s.io/e2e-test-images/perl@sha256:dd475f8a8c579cb78a13f54342e8569e7f925c8b0ba3a5599dbc55c97a4a76f1 registry.k8s.io/e2e-test-images/perl:5.26],SizeBytes:875791114,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/node-perf/tf-wide-deep@sha256:91ab3b5ee22441c99370944e2e2cb32670db62db433611b4e3780bdee6a8e5a1 registry.k8s.io/e2e-test-images/node-perf/tf-wide-deep@sha256:d7e74e6555abe4b001aadddc248447b472ae35ccbb2c21ca0febace6c4c6d7bb 
registry.k8s.io/e2e-test-images/node-perf/tf-wide-deep:1.3],SizeBytes:559664987,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/gluster@sha256:033a12fe65438751690b519cebd4135a3485771086bcf437212b7b886bb7956c registry.k8s.io/e2e-test-images/volume/gluster@sha256:c52e01956fec2cf5968b87be8f06ae740ea5d208a3b41fa2c7970b13cc515be5 registry.k8s.io/e2e-test-images/volume/gluster:1.3],SizeBytes:352430719,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9 registry.k8s.io/etcd:3.5.7-0],SizeBytes:297083935,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:034f77d52166fcacb81d6a6db10a4e24644c241896822e6525925859fec09f47 registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:272589700,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost@sha256:da18b4806cfa370df04f9c3faa7d654a22a80467dc4cab92bd1b22b4abe4d5aa registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:129622797,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd@sha256:21e720a020bf8d492b5dd2fe0f31a5205021176f505ecf35b10177f8bfd68980 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:128894228,},ContainerImage{Names:[registry.k8s.io/node-problem-detector/node-problem-detector@sha256:0ce71ef6d759425d22b10e65b439749fe5d13377a188e2fc060b731cdb4e6901 registry.k8s.io/node-problem-detector/node-problem-detector:v0.8.7],SizeBytes:115035523,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/node-perf/npb-is@sha256:8285539c79625b192a5e33fc3d21edc1a7776fb9afe15fae3b5037a7a8020839 registry.k8s.io/e2e-test-images/node-perf/npb-is@sha256:f941079315f73b182b0f416134253ee87ab51162cbd2e9fcd31bbe726999a977 registry.k8s.io/e2e-test-images/node-perf/npb-is:1.2],SizeBytes:99663088,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/node-perf/npb-ep@sha256:90b5cfc5451428aad4dd6af9960640f2506804d35aa05e83c11bf0a46ac318c8 registry.k8s.io/e2e-test-images/node-perf/npb-ep@sha256:a37ad3a2ccb2b8aa7ced0b7c884888d2cef953cfc9a158e3e8b914d52147091b registry.k8s.io/e2e-test-images/node-perf/npb-ep:1.2],SizeBytes:99661552,},ContainerImage{Names:[gcr.io/cadvisor/cadvisor@sha256:89e6137f068ded2e9a3a012ce71260b9afc57a19305842aa1074239841a539a7 gcr.io/cadvisor/cadvisor:v0.43.0],SizeBytes:87971088,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:dfa235c2d64c29405f40489cf631193b27bec6dcf13cfee9824e449f6ddac051 registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:43877486,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-device-plugin@sha256:3dd0413e5a78f1c2a6484f168ba3daf23ebb0b1141897237e9559db6c5f7101f registry.k8s.io/e2e-test-images/sample-device-plugin@sha256:e84f6ca27c51ddedf812637dd2bcf771ad69fdca1173e5690c372370d0f93c40 registry.k8s.io/e2e-test-images/sample-device-plugin:1.3],SizeBytes:41740418,},ContainerImage{Names:[docker.io/nfvpe/sriov-device-plugin@sha256:518499ed631ff84b43153b8f7624c1aaacb75a721038857509fe690abdf62ddb 
docker.io/nfvpe/sriov-device-plugin:v3.1],SizeBytes:25603453,},ContainerImage{Names:[registry.k8s.io/nvidia-gpu-device-plugin@sha256:4b036e8844920336fa48f36edeb7d4398f426d6a934ba022848deed2edbf09aa],SizeBytes:19251111,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx@sha256:c42b04e8cf71231fac5dbc833366f7ce2ae78ef8b9df4304fcb83edcd495f69f registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:17244936,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/ipc-utils@sha256:647d092bada3b46c449d875adf31d71c1dd29c244e9cca6a04fddf9d6bcac136 registry.k8s.io/e2e-test-images/ipc-utils@sha256:89fe5eb4c180571eb5c4e4179f1e9bc03497b1f50e45e4f720165617a603d737 registry.k8s.io/e2e-test-images/ipc-utils:1.3],SizeBytes:12251265,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs@sha256:f6b1c4aef11b116c2a065ea60ed071a8f205444f1897bed9aa2e98a5d78cbdae registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7373984,},ContainerImage{Names:[registry.k8s.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 registry.k8s.io/stress:v1],SizeBytes:5502584,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox@sha256:4cdffea536d503c58d7e087bab34a43e63a11dcfa4132b5a1b838885f08fb730 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:1374155,},ContainerImage{Names:[registry.k8s.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1319178,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10 registry.k8s.io/pause:3.9],SizeBytes:750414,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:c2280d2f5f56cf9c9a01bb64b2db4651e35efd6d62a54dcfc12049fe6449c5e4 registry.k8s.io/pause:3.6],SizeBytes:690326,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 20 23:53:09.451: INFO: Logging kubelet events for node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 Mar 20 23:53:09.452: INFO: Logging pods the kubelet thinks is on node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 Mar 20 23:53:09.455: INFO: container-disk-below-sizelimit-pod started at 2023-03-20 23:52:02 +0000 UTC (0+1 container statuses recorded) Mar 20 23:53:09.455: INFO: Container container-disk-below-sizelimit-container ready: true, restart count 0 Mar 20 23:53:09.455: INFO: container-emptydir-disk-limit-pod started at 2023-03-20 23:52:03 +0000 UTC (0+1 container statuses recorded) Mar 20 23:53:09.455: INFO: Container container-emptydir-disk-limit-container ready: false, restart count 0 Mar 20 23:53:09.455: INFO: emptydir-disk-below-sizelimit-pod started at 2023-03-20 23:52:03 +0000 UTC (0+1 container statuses recorded) Mar 20 23:53:09.455: INFO: Container emptydir-disk-below-sizelimit-container ready: true, restart count 0 Mar 20 23:53:09.455: INFO: emptydir-memory-sizelimit-pod started at 2023-03-20 23:52:02 +0000 UTC (0+1 container statuses recorded) Mar 20 23:53:09.455: INFO: Container emptydir-memory-sizelimit-container 
ready: true, restart count 0 W0320 23:53:09.456327 2678 metrics_grabber.go:111] Can't find any pods in namespace kube-system to grab metrics from Mar 20 23:53:09.473: INFO: Latency metrics for node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/20/23 23:53:09.473 (48ms) < Exit [DeferCleanup (Each)] [sig-node] LocalStorageCapacityIsolationEviction [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolation][NodeFeature:Eviction] - dump namespaces | framework.go:209 @ 03/20/23 23:53:09.473 (48ms) > Enter [DeferCleanup (Each)] [sig-node] LocalStorageCapacityIsolationEviction [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolation][NodeFeature:Eviction] - tear down framework | framework.go:206 @ 03/20/23 23:53:09.473 STEP: Destroying namespace "localstorage-eviction-test-8283" for this suite. - test/e2e/framework/framework.go:351 @ 03/20/23 23:53:09.473 < Exit [DeferCleanup (Each)] [sig-node] LocalStorageCapacityIsolationEviction [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolation][NodeFeature:Eviction] - tear down framework | framework.go:206 @ 03/20/23 23:53:09.476 (3ms)
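The events above show the kubelet evicting pods whose emptyDir usage or container ephemeral-storage usage crossed a 100Mi limit ("Usage of EmptyDir volume \"test-volume\" exceeds the limit \"100Mi\"", "Pod ephemeral local storage usage exceeds the total limit of containers 100Mi"). As a rough illustration of the emptyDir case, the following is a minimal sketch of a pod shaped like the "emptydir-disk-sizelimit-pod" named in those events; the image name and 100Mi sizeLimit are taken from the log, but the dd command, paths, and overall structure are assumptions for illustration, not the actual fixture in test/e2e_node/eviction_test.go.

```go
// Sketch only: a pod that writes past a 100Mi emptyDir sizeLimit so the
// kubelet's eviction manager evicts it, similar in shape to the
// "emptydir-disk-sizelimit-pod" reported in the events above.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func emptyDirSizeLimitPod() *v1.Pod {
	sizeLimit := resource.MustParse("100Mi")
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-disk-sizelimit-pod"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:  "emptydir-disk-sizelimit-container",
				Image: "registry.k8s.io/e2e-test-images/busybox:1.29-4",
				// Write more than the 100Mi volume limit, then idle so the
				// eviction manager (not the container exiting) ends the pod.
				Command: []string{"sh", "-c",
					"dd if=/dev/zero of=/test-volume/file bs=1M count=200; sleep 3600"},
				VolumeMounts: []v1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
			Volumes: []v1.Volume{{
				Name: "test-volume",
				VolumeSource: v1.VolumeSource{
					EmptyDir: &v1.EmptyDirVolumeSource{SizeLimit: &sizeLimit},
				},
			}},
		},
	}
}

func main() {
	fmt.Println(emptyDirSizeLimitPod().Name)
}
```

The suite timeout above fired while such pods were already evicted or being killed (note the ExceededGracePeriod events), so the remaining question is why the eviction of the last pods took longer than the suite allowed, not whether the limits themselves were enforced.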
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[It\]\s\[sig\-node\]\sPriorityPidEvictionOrdering\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sPIDPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
[FAILED] Timed out after 120.000s. Expected <*errors.errorString | 0xc001076700>: NodeCondition: PIDPressure not encountered { s: "NodeCondition: PIDPressure not encountered", } to be nil In [It] at: test/e2e_node/eviction_test.go:571 @ 03/20/23 23:50:43.116
from junit_fedora01.xml
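The failure text ("Expected ... to be nil") is the shape Gomega produces when an Eventually wait on an error-returning function never sees nil before its 120s deadline: here, the node never reported the PIDPressure condition even though the fork-bomb pods were running. The sketch below shows how such a wait is typically written; the hasNodeCondition helper, the clientset plumbing, and the 2s poll interval are assumptions for illustration, not the actual helpers used by eviction_test.go.

```go
// Sketch only: waiting for a node condition (PIDPressure) with Gomega's
// Eventually, mirroring the failure mode seen above where the condition is
// never encountered within the timeout.
package evictionwait

import (
	"context"
	"fmt"
	"time"

	"github.com/onsi/gomega"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// hasNodeCondition reports whether the first node in the cluster currently
// reports the given condition with status True.
func hasNodeCondition(ctx context.Context, cs kubernetes.Interface, cond v1.NodeConditionType) (bool, error) {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil || len(nodes.Items) == 0 {
		return false, err
	}
	for _, c := range nodes.Items[0].Status.Conditions {
		if c.Type == cond && c.Status == v1.ConditionTrue {
			return true, nil
		}
	}
	return false, nil
}

// waitForPIDPressure polls until the node reports PIDPressure or the timeout
// expires; on timeout Gomega reports the last returned error, which is what
// yields the "NodeCondition: PIDPressure not encountered" text above.
func waitForPIDPressure(ctx context.Context, g gomega.Gomega, cs kubernetes.Interface) {
	g.Eventually(func() error {
		ok, err := hasNodeCondition(ctx, cs, v1.NodePIDPressure)
		if err != nil {
			return err
		}
		if !ok {
			return fmt.Errorf("NodeCondition: %s not encountered", v1.NodePIDPressure)
		}
		return nil
	}, 2*time.Minute, 2*time.Second).Should(gomega.BeNil())
}
```

Consistent with that reading, the Node.Rlimit log lines that follow show RunningProcesses pinned at 164 against a MaxPID of 28637 for the whole two-minute window, i.e. the fork bombs never drove PID consumption anywhere near the pressure threshold on this node.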
> Enter [BeforeEach] [sig-node] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] - set up framework | framework.go:191 @ 03/20/23 23:48:03.886 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:211 @ 03/20/23 23:48:03.886 STEP: Building a namespace api object, basename pidpressure-eviction-test - test/e2e/framework/framework.go:250 @ 03/20/23 23:48:03.886 Mar 20 23:48:03.889: INFO: Skipping waiting for service account < Exit [BeforeEach] [sig-node] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] - set up framework | framework.go:191 @ 03/20/23 23:48:03.889 (3ms) > Enter [BeforeEach] [sig-node] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] - test/e2e/framework/metrics/init/init.go:33 @ 03/20/23 23:48:03.889 < Exit [BeforeEach] [sig-node] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] - test/e2e/framework/metrics/init/init.go:33 @ 03/20/23 23:48:03.889 (0s) > Enter [BeforeEach] when we run containers that should cause PIDPressure - test/e2e_node/util.go:176 @ 03/20/23 23:48:03.889 STEP: Stopping the kubelet - test/e2e_node/util.go:200 @ 03/20/23 23:48:03.907 Mar 20 23:48:03.941: INFO: Get running kubelet with systemctl: UNIT LOAD ACTIVE SUB DESCRIPTION kubelet-20230320T225220.service loaded active running /tmp/node-e2e-20230320T225220/kubelet --kubeconfig /tmp/node-e2e-20230320T225220/kubeconfig --root-dir /var/lib/kubelet --v 4 --hostname-override tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 --container-runtime-endpoint unix:///var/run/crio/crio.sock --config /tmp/node-e2e-20230320T225220/kubelet-config --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service LOAD = Reflects whether the unit definition was properly loaded. ACTIVE = The high-level unit activation state, i.e. generalization of SUB. SUB = The low-level unit activation state, values depend on unit type. 1 loaded units listed. 
, kubelet-20230320T225220 W0320 23:48:04.004837 2678 util.go:477] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:55882->127.0.0.1:10248: read: connection reset by peer STEP: Starting the kubelet - test/e2e_node/util.go:216 @ 03/20/23 23:48:04.014 W0320 23:48:04.052005 2678 util.go:477] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused < Exit [BeforeEach] when we run containers that should cause PIDPressure - test/e2e_node/util.go:176 @ 03/20/23 23:48:09.056 (5.167s) > Enter [BeforeEach] when we run containers that should cause PIDPressure - test/e2e_node/eviction_test.go:478 @ 03/20/23 23:48:09.056 < Exit [BeforeEach] when we run containers that should cause PIDPressure - test/e2e_node/eviction_test.go:478 @ 03/20/23 23:48:09.058 (2ms) > Enter [BeforeEach] TOP-LEVEL - test/e2e_node/eviction_test.go:549 @ 03/20/23 23:48:09.058 STEP: setting up pods to be used by tests - test/e2e_node/eviction_test.go:555 @ 03/20/23 23:48:39.086 < Exit [BeforeEach] TOP-LEVEL - test/e2e_node/eviction_test.go:549 @ 03/20/23 23:48:43.116 (34.058s) > Enter [It] should eventually evict all of the correct pods - test/e2e_node/eviction_test.go:563 @ 03/20/23 23:48:43.116 STEP: Waiting for node to have NodeCondition: PIDPressure - test/e2e_node/eviction_test.go:564 @ 03/20/23 23:48:43.116 Mar 20 23:48:43.172: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 8294 Mar 20 23:48:45.188: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 168 Mar 20 23:48:47.204: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 168 Mar 20 23:48:49.221: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:48:51.236: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:48:53.253: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:48:55.268: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:48:57.283: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:48:59.297: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:49:01.315: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:49:03.332: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:49:05.348: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:49:07.365: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:49:09.380: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:49:11.395: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:49:13.414: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:49:15.435: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:49:17.451: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:49:19.469: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:49:21.485: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:49:23.500: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:49:25.515: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:49:27.538: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:49:29.556: INFO: Node.Rlimit.MaxPID: 28637, 
Node.Rlimit.RunningProcesses: 164 Mar 20 23:49:31.570: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:49:33.586: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:49:35.604: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:49:37.618: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:49:39.634: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:49:41.652: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:49:43.666: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:49:45.681: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:49:47.715: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:49:49.730: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:49:51.745: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:49:53.763: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:49:55.781: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:49:57.796: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:49:59.813: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:50:01.828: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:50:03.842: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:50:05.860: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:50:07.874: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:50:09.890: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:50:11.908: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:50:13.927: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:50:15.941: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:50:17.958: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:50:19.972: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:50:21.987: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:50:24.008: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:50:26.023: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:50:28.037: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:50:30.054: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:50:32.069: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:50:34.086: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:50:36.104: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:50:38.119: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:50:40.136: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 Mar 20 23:50:42.155: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 164 [FAILED] Timed out after 120.000s. 
Expected <*errors.errorString | 0xc001076700>: NodeCondition: PIDPressure not encountered { s: "NodeCondition: PIDPressure not encountered", } to be nil In [It] at: test/e2e_node/eviction_test.go:571 @ 03/20/23 23:50:43.116 < Exit [It] should eventually evict all of the correct pods - test/e2e_node/eviction_test.go:563 @ 03/20/23 23:50:43.116 (2m0.001s) > Enter [AfterEach] TOP-LEVEL - test/e2e_node/eviction_test.go:620 @ 03/20/23 23:50:43.116 STEP: deleting pods - test/e2e_node/eviction_test.go:631 @ 03/20/23 23:50:43.116 STEP: deleting pod: fork-bomb-container-with-low-priority-pod - test/e2e_node/eviction_test.go:633 @ 03/20/23 23:50:43.116 STEP: deleting pod: innocent-pod - test/e2e_node/eviction_test.go:633 @ 03/20/23 23:50:43.123 STEP: deleting pod: fork-bomb-container-with-high-priority-pod - test/e2e_node/eviction_test.go:633 @ 03/20/23 23:51:15.174 STEP: making sure NodeCondition PIDPressure no longer exists on the node - test/e2e_node/eviction_test.go:639 @ 03/20/23 23:51:15.179 STEP: making sure we have all the required images for testing - test/e2e_node/eviction_test.go:648 @ 03/20/23 23:51:15.221 STEP: making sure NodeCondition PIDPressure doesn't exist again after pulling images - test/e2e_node/eviction_test.go:652 @ 03/20/23 23:51:15.221 STEP: making sure we can start a new pod after the test - test/e2e_node/eviction_test.go:660 @ 03/20/23 23:51:15.224 Mar 20 23:51:17.241: INFO: Summary of pod events during the test: I0320 23:51:17.244223 2678 util.go:244] Event(v1.ObjectReference{Kind:"Pod", Namespace:"pidpressure-eviction-test-4995", Name:"fork-bomb-container-with-high-priority-pod", UID:"d8926f4b-19f9-4ca4-9511-ab7f4cda668e", APIVersion:"v1", ResourceVersion:"1490", FieldPath:"spec.containers{fork-bomb-container-with-high-priority-container}"}): type: 'Normal' reason: 'Pulled' Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine I0320 23:51:17.244248 2678 util.go:244] Event(v1.ObjectReference{Kind:"Pod", Namespace:"pidpressure-eviction-test-4995", Name:"fork-bomb-container-with-high-priority-pod", UID:"d8926f4b-19f9-4ca4-9511-ab7f4cda668e", APIVersion:"v1", ResourceVersion:"1490", FieldPath:"spec.containers{fork-bomb-container-with-high-priority-container}"}): type: 'Normal' reason: 'Created' Created container fork-bomb-container-with-high-priority-container I0320 23:51:17.244261 2678 util.go:244] Event(v1.ObjectReference{Kind:"Pod", Namespace:"pidpressure-eviction-test-4995", Name:"fork-bomb-container-with-high-priority-pod", UID:"d8926f4b-19f9-4ca4-9511-ab7f4cda668e", APIVersion:"v1", ResourceVersion:"1490", FieldPath:"spec.containers{fork-bomb-container-with-high-priority-container}"}): type: 'Normal' reason: 'Started' Started container fork-bomb-container-with-high-priority-container I0320 23:51:17.244271 2678 util.go:244] Event(v1.ObjectReference{Kind:"Pod", Namespace:"pidpressure-eviction-test-4995", Name:"fork-bomb-container-with-low-priority-pod", UID:"3b48e379-3d49-481c-bd68-b677473a55fe", APIVersion:"v1", ResourceVersion:"1491", FieldPath:"spec.containers{fork-bomb-container-with-low-priority-container}"}): type: 'Normal' reason: 'Pulled' Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine I0320 23:51:17.244282 2678 util.go:244] Event(v1.ObjectReference{Kind:"Pod", Namespace:"pidpressure-eviction-test-4995", Name:"fork-bomb-container-with-low-priority-pod", UID:"3b48e379-3d49-481c-bd68-b677473a55fe", APIVersion:"v1", ResourceVersion:"1491", 
FieldPath:"spec.containers{fork-bomb-container-with-low-priority-container}"}): type: 'Normal' reason: 'Created' Created container fork-bomb-container-with-low-priority-container I0320 23:51:17.244292 2678 util.go:244] Event(v1.ObjectReference{Kind:"Pod", Namespace:"pidpressure-eviction-test-4995", Name:"fork-bomb-container-with-low-priority-pod", UID:"3b48e379-3d49-481c-bd68-b677473a55fe", APIVersion:"v1", ResourceVersion:"1491", FieldPath:"spec.containers{fork-bomb-container-with-low-priority-container}"}): type: 'Normal' reason: 'Started' Started container fork-bomb-container-with-low-priority-container I0320 23:51:17.244302 2678 util.go:244] Event(v1.ObjectReference{Kind:"Pod", Namespace:"pidpressure-eviction-test-4995", Name:"innocent-pod", UID:"892742da-57e1-42be-af41-1a76543cc5c9", APIVersion:"v1", ResourceVersion:"1489", FieldPath:"spec.containers{innocent-container}"}): type: 'Normal' reason: 'Pulled' Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine I0320 23:51:17.244313 2678 util.go:244] Event(v1.ObjectReference{Kind:"Pod", Namespace:"pidpressure-eviction-test-4995", Name:"innocent-pod", UID:"892742da-57e1-42be-af41-1a76543cc5c9", APIVersion:"v1", ResourceVersion:"1489", FieldPath:"spec.containers{innocent-container}"}): type: 'Normal' reason: 'Created' Created container innocent-container I0320 23:51:17.244324 2678 util.go:244] Event(v1.ObjectReference{Kind:"Pod", Namespace:"pidpressure-eviction-test-4995", Name:"innocent-pod", UID:"892742da-57e1-42be-af41-1a76543cc5c9", APIVersion:"v1", ResourceVersion:"1489", FieldPath:"spec.containers{innocent-container}"}): type: 'Normal' reason: 'Started' Started container innocent-container I0320 23:51:17.244334 2678 util.go:244] Event(v1.ObjectReference{Kind:"Pod", Namespace:"pidpressure-eviction-test-4995", Name:"innocent-pod", UID:"892742da-57e1-42be-af41-1a76543cc5c9", APIVersion:"v1", ResourceVersion:"1489", FieldPath:"spec.containers{innocent-container}"}): type: 'Normal' reason: 'Killing' Stopping container innocent-container I0320 23:51:17.244354 2678 util.go:244] Event(v1.ObjectReference{Kind:"Pod", Namespace:"pidpressure-eviction-test-4995", Name:"test-admit-pod", UID:"064b5f34-f472-4bfe-a77f-6112d6791684", APIVersion:"v1", ResourceVersion:"1569", FieldPath:"spec.containers{test-admit-pod}"}): type: 'Normal' reason: 'Pulled' Container image "registry.k8s.io/pause:3.9" already present on machine I0320 23:51:17.244364 2678 util.go:244] Event(v1.ObjectReference{Kind:"Pod", Namespace:"pidpressure-eviction-test-4995", Name:"test-admit-pod", UID:"064b5f34-f472-4bfe-a77f-6112d6791684", APIVersion:"v1", ResourceVersion:"1569", FieldPath:"spec.containers{test-admit-pod}"}): type: 'Normal' reason: 'Created' Created container test-admit-pod I0320 23:51:17.244374 2678 util.go:244] Event(v1.ObjectReference{Kind:"Pod", Namespace:"pidpressure-eviction-test-4995", Name:"test-admit-pod", UID:"064b5f34-f472-4bfe-a77f-6112d6791684", APIVersion:"v1", ResourceVersion:"1569", FieldPath:"spec.containers{test-admit-pod}"}): type: 'Normal' reason: 'Started' Started container test-admit-pod Mar 20 23:51:17.244: INFO: Summary of node events during the test: I0320 23:51:17.252229 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'Starting' Starting kubelet. 
I0320 23:51:17.252436 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientMemory I0320 23:51:17.252577 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasNoDiskPressure I0320 23:51:17.252711 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientPID' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientPID I0320 23:51:17.252857 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeAllocatableEnforced' Updated Node Allocatable limit across pods I0320 23:51:17.253023 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeReady I0320 23:51:17.253137 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'Starting' Starting kubelet. 
I0320 23:51:17.253260 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientMemory I0320 23:51:17.253371 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasNoDiskPressure I0320 23:51:17.253531 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientPID' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientPID I0320 23:51:17.253651 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeNotReady I0320 23:51:17.253784 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeAllocatableEnforced' Updated Node Allocatable limit across pods I0320 23:51:17.253910 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeReady I0320 23:51:17.254042 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EvictionThresholdMet' Attempting to reclaim ephemeral-storage I0320 23:51:17.254166 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasDiskPressure' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasDiskPressure I0320 23:51:17.254298 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", 
APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'Starting' Starting kubelet. I0320 23:51:17.254417 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientMemory I0320 23:51:17.254553 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasNoDiskPressure I0320 23:51:17.254671 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientPID' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientPID I0320 23:51:17.254812 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeNotReady I0320 23:51:17.254926 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeAllocatableEnforced' Updated Node Allocatable limit across pods I0320 23:51:17.255036 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeReady I0320 23:51:17.255159 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'Starting' Starting kubelet. 
I0320 23:51:17.255269 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientMemory I0320 23:51:17.255369 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasNoDiskPressure I0320 23:51:17.255482 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientPID' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientPID I0320 23:51:17.255587 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeNotReady I0320 23:51:17.255718 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeAllocatableEnforced' Updated Node Allocatable limit across pods I0320 23:51:17.255834 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeReady I0320 23:51:17.255945 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EvictionThresholdMet' Attempting to reclaim ephemeral-storage I0320 23:51:17.256056 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasDiskPressure' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasDiskPressure I0320 23:51:17.256196 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", 
APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'Starting' Starting kubelet. I0320 23:51:17.256309 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientMemory I0320 23:51:17.256424 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasNoDiskPressure I0320 23:51:17.256544 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientPID' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientPID I0320 23:51:17.256657 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeNotReady I0320 23:51:17.256781 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeAllocatableEnforced' Updated Node Allocatable limit across pods I0320 23:51:17.256903 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeReady I0320 23:51:17.257017 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'Starting' Starting kubelet. 
I0320 23:51:17.257135 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientMemory I0320 23:51:17.257249 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasNoDiskPressure I0320 23:51:17.257368 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientPID' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientPID I0320 23:51:17.257474 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeNotReady I0320 23:51:17.257585 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeAllocatableEnforced' Updated Node Allocatable limit across pods I0320 23:51:17.257691 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeReady I0320 23:51:17.257796 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EvictionThresholdMet' Attempting to reclaim inodes I0320 23:51:17.257922 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasDiskPressure' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasDiskPressure I0320 23:51:17.258029 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", 
APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'Starting' Starting kubelet. I0320 23:51:17.258145 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientMemory I0320 23:51:17.258255 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasNoDiskPressure I0320 23:51:17.258361 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientPID' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientPID I0320 23:51:17.258467 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeNotReady I0320 23:51:17.258602 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeAllocatableEnforced' Updated Node Allocatable limit across pods I0320 23:51:17.258717 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeReady I0320 23:51:17.258830 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'Starting' Starting kubelet. 
I0320 23:51:17.258957 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientMemory I0320 23:51:17.259088 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasNoDiskPressure I0320 23:51:17.259203 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientPID' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientPID I0320 23:51:17.259316 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeNotReady I0320 23:51:17.259433 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeAllocatableEnforced' Updated Node Allocatable limit across pods I0320 23:51:17.259554 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeReady I0320 23:51:17.259668 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EvictionThresholdMet' Attempting to reclaim memory I0320 23:51:17.259792 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasInsufficientMemory' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasInsufficientMemory I0320 23:51:17.259908 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", 
APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'Starting' Starting kubelet. I0320 23:51:17.260015 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientMemory I0320 23:51:17.260122 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasNoDiskPressure I0320 23:51:17.260253 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientPID' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientPID I0320 23:51:17.260360 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeNotReady I0320 23:51:17.260466 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeAllocatableEnforced' Updated Node Allocatable limit across pods I0320 23:51:17.260582 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeReady I0320 23:51:17.260713 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'Starting' Starting kubelet. 
I0320 23:51:17.260813 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientMemory I0320 23:51:17.260921 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasNoDiskPressure I0320 23:51:17.261020 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientPID' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientPID I0320 23:51:17.261149 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeNotReady I0320 23:51:17.261266 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeAllocatableEnforced' Updated Node Allocatable limit across pods I0320 23:51:17.261385 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeReady I0320 23:51:17.261502 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'Starting' Starting kubelet. 
I0320 23:51:17.261626 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientMemory I0320 23:51:17.261748 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasNoDiskPressure I0320 23:51:17.261866 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientPID' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientPID I0320 23:51:17.261995 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeNotReady I0320 23:51:17.262112 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeAllocatableEnforced' Updated Node Allocatable limit across pods I0320 23:51:17.262229 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeReady I0320 23:51:17.262357 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'Starting' Starting kubelet. 
I0320 23:51:17.262470 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientMemory I0320 23:51:17.262588 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasNoDiskPressure I0320 23:51:17.262725 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientPID' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientPID I0320 23:51:17.262841 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeNotReady I0320 23:51:17.262982 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeAllocatableEnforced' Updated Node Allocatable limit across pods I0320 23:51:17.263095 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeReady I0320 23:51:17.263208 2678 util.go:244] Event(v1.ObjectReference{Kind:"Pod", Namespace:"pidpressure-eviction-test-4995", Name:"fork-bomb-container-with-high-priority-pod", UID:"d8926f4b-19f9-4ca4-9511-ab7f4cda668e", APIVersion:"v1", ResourceVersion:"1490", FieldPath:"spec.containers{fork-bomb-container-with-high-priority-container}"}): type: 'Normal' reason: 'Pulled' Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine I0320 23:51:17.263322 2678 util.go:244] Event(v1.ObjectReference{Kind:"Pod", Namespace:"pidpressure-eviction-test-4995", Name:"fork-bomb-container-with-high-priority-pod", UID:"d8926f4b-19f9-4ca4-9511-ab7f4cda668e", APIVersion:"v1", ResourceVersion:"1490", FieldPath:"spec.containers{fork-bomb-container-with-high-priority-container}"}): type: 'Normal' reason: 'Created' Created container fork-bomb-container-with-high-priority-container I0320 23:51:17.263437 2678 util.go:244] Event(v1.ObjectReference{Kind:"Pod", Namespace:"pidpressure-eviction-test-4995", 
Name:"fork-bomb-container-with-high-priority-pod", UID:"d8926f4b-19f9-4ca4-9511-ab7f4cda668e", APIVersion:"v1", ResourceVersion:"1490", FieldPath:"spec.containers{fork-bomb-container-with-high-priority-container}"}): type: 'Normal' reason: 'Started' Started container fork-bomb-container-with-high-priority-container I0320 23:51:17.263506 2678 util.go:244] Event(v1.ObjectReference{Kind:"Pod", Namespace:"pidpressure-eviction-test-4995", Name:"fork-bomb-container-with-low-priority-pod", UID:"3b48e379-3d49-481c-bd68-b677473a55fe", APIVersion:"v1", ResourceVersion:"1491", FieldPath:"spec.containers{fork-bomb-container-with-low-priority-container}"}): type: 'Normal' reason: 'Pulled' Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine I0320 23:51:17.263640 2678 util.go:244] Event(v1.ObjectReference{Kind:"Pod", Namespace:"pidpressure-eviction-test-4995", Name:"fork-bomb-container-with-low-priority-pod", UID:"3b48e379-3d49-481c-bd68-b677473a55fe", APIVersion:"v1", ResourceVersion:"1491", FieldPath:"spec.containers{fork-bomb-container-with-low-priority-container}"}): type: 'Normal' reason: 'Created' Created container fork-bomb-container-with-low-priority-container I0320 23:51:17.263714 2678 util.go:244] Event(v1.ObjectReference{Kind:"Pod", Namespace:"pidpressure-eviction-test-4995", Name:"fork-bomb-container-with-low-priority-pod", UID:"3b48e379-3d49-481c-bd68-b677473a55fe", APIVersion:"v1", ResourceVersion:"1491", FieldPath:"spec.containers{fork-bomb-container-with-low-priority-container}"}): type: 'Normal' reason: 'Started' Started container fork-bomb-container-with-low-priority-container I0320 23:51:17.263781 2678 util.go:244] Event(v1.ObjectReference{Kind:"Pod", Namespace:"pidpressure-eviction-test-4995", Name:"innocent-pod", UID:"892742da-57e1-42be-af41-1a76543cc5c9", APIVersion:"v1", ResourceVersion:"1489", FieldPath:"spec.containers{innocent-container}"}): type: 'Normal' reason: 'Pulled' Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine I0320 23:51:17.263852 2678 util.go:244] Event(v1.ObjectReference{Kind:"Pod", Namespace:"pidpressure-eviction-test-4995", Name:"innocent-pod", UID:"892742da-57e1-42be-af41-1a76543cc5c9", APIVersion:"v1", ResourceVersion:"1489", FieldPath:"spec.containers{innocent-container}"}): type: 'Normal' reason: 'Created' Created container innocent-container I0320 23:51:17.263930 2678 util.go:244] Event(v1.ObjectReference{Kind:"Pod", Namespace:"pidpressure-eviction-test-4995", Name:"innocent-pod", UID:"892742da-57e1-42be-af41-1a76543cc5c9", APIVersion:"v1", ResourceVersion:"1489", FieldPath:"spec.containers{innocent-container}"}): type: 'Normal' reason: 'Started' Started container innocent-container I0320 23:51:17.263998 2678 util.go:244] Event(v1.ObjectReference{Kind:"Pod", Namespace:"pidpressure-eviction-test-4995", Name:"innocent-pod", UID:"892742da-57e1-42be-af41-1a76543cc5c9", APIVersion:"v1", ResourceVersion:"1489", FieldPath:"spec.containers{innocent-container}"}): type: 'Normal' reason: 'Killing' Stopping container innocent-container I0320 23:51:17.264065 2678 util.go:244] Event(v1.ObjectReference{Kind:"Pod", Namespace:"pidpressure-eviction-test-4995", Name:"test-admit-pod", UID:"064b5f34-f472-4bfe-a77f-6112d6791684", APIVersion:"v1", ResourceVersion:"1569", FieldPath:"spec.containers{test-admit-pod}"}): type: 'Normal' reason: 'Pulled' Container image "registry.k8s.io/pause:3.9" already present on machine I0320 23:51:17.264132 2678 util.go:244] Event(v1.ObjectReference{Kind:"Pod", 
Namespace:"pidpressure-eviction-test-4995", Name:"test-admit-pod", UID:"064b5f34-f472-4bfe-a77f-6112d6791684", APIVersion:"v1", ResourceVersion:"1569", FieldPath:"spec.containers{test-admit-pod}"}): type: 'Normal' reason: 'Created' Created container test-admit-pod I0320 23:51:17.264198 2678 util.go:244] Event(v1.ObjectReference{Kind:"Pod", Namespace:"pidpressure-eviction-test-4995", Name:"test-admit-pod", UID:"064b5f34-f472-4bfe-a77f-6112d6791684", APIVersion:"v1", ResourceVersion:"1569", FieldPath:"spec.containers{test-admit-pod}"}): type: 'Normal' reason: 'Started' Started container test-admit-pod < Exit [AfterEach] TOP-LEVEL - test/e2e_node/eviction_test.go:620 @ 03/20/23 23:51:17.264 (34.147s) > Enter [AfterEach] when we run containers that should cause PIDPressure - test/e2e_node/util.go:190 @ 03/20/23 23:51:17.264 STEP: Stopping the kubelet - test/e2e_node/util.go:200 @ 03/20/23 23:51:17.264 Mar 20 23:51:17.297: INFO: Get running kubelet with systemctl: UNIT LOAD ACTIVE SUB DESCRIPTION kubelet-20230320T225220.service loaded active running /tmp/node-e2e-20230320T225220/kubelet --kubeconfig /tmp/node-e2e-20230320T225220/kubeconfig --root-dir /var/lib/kubelet --v 4 --hostname-override tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 --container-runtime-endpoint unix:///var/run/crio/crio.sock --config /tmp/node-e2e-20230320T225220/kubelet-config --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service LOAD = Reflects whether the unit definition was properly loaded. ACTIVE = The high-level unit activation state, i.e. generalization of SUB. SUB = The low-level unit activation state, values depend on unit type. 1 loaded units listed. , kubelet-20230320T225220 W0320 23:51:17.365069 2678 util.go:477] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:43060->127.0.0.1:10248: read: connection reset by peer STEP: Starting the kubelet - test/e2e_node/util.go:216 @ 03/20/23 23:51:17.374 W0320 23:51:17.415223 2678 util.go:477] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused < Exit [AfterEach] when we run containers that should cause PIDPressure - test/e2e_node/util.go:190 @ 03/20/23 23:51:22.419 (5.155s) > Enter [AfterEach] when we run containers that should cause PIDPressure - test/e2e_node/eviction_test.go:482 @ 03/20/23 23:51:22.419 < Exit [AfterEach] when we run containers that should cause PIDPressure - test/e2e_node/eviction_test.go:482 @ 03/20/23 23:51:22.421 (2ms) > Enter [AfterEach] [sig-node] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] - test/e2e/framework/node/init/init.go:33 @ 03/20/23 23:51:22.421 Mar 20 23:51:22.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-node] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] - test/e2e/framework/node/init/init.go:33 @ 03/20/23 23:51:22.423 (2ms) > Enter [DeferCleanup (Each)] [sig-node] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] - test/e2e/framework/metrics/init/init.go:35 @ 03/20/23 23:51:22.423 < Exit [DeferCleanup (Each)] [sig-node] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] - test/e2e/framework/metrics/init/init.go:35 @ 03/20/23 23:51:22.423 (0s) > Enter [DeferCleanup (Each)] 
[sig-node] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] - dump namespaces | framework.go:209 @ 03/20/23 23:51:22.423 STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/20/23 23:51:22.423 STEP: Collecting events from namespace "pidpressure-eviction-test-4995". - test/e2e/framework/debug/dump.go:42 @ 03/20/23 23:51:22.423 STEP: Found 13 events. - test/e2e/framework/debug/dump.go:46 @ 03/20/23 23:51:22.425 Mar 20 23:51:22.425: INFO: At 2023-03-20 23:48:39 +0000 UTC - event for fork-bomb-container-with-high-priority-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Mar 20 23:51:22.425: INFO: At 2023-03-20 23:48:39 +0000 UTC - event for innocent-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Mar 20 23:51:22.425: INFO: At 2023-03-20 23:48:40 +0000 UTC - event for fork-bomb-container-with-high-priority-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Created: Created container fork-bomb-container-with-high-priority-container Mar 20 23:51:22.425: INFO: At 2023-03-20 23:48:40 +0000 UTC - event for fork-bomb-container-with-high-priority-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Started: Started container fork-bomb-container-with-high-priority-container Mar 20 23:51:22.425: INFO: At 2023-03-20 23:48:40 +0000 UTC - event for fork-bomb-container-with-low-priority-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Mar 20 23:51:22.425: INFO: At 2023-03-20 23:48:40 +0000 UTC - event for fork-bomb-container-with-low-priority-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Started: Started container fork-bomb-container-with-low-priority-container Mar 20 23:51:22.425: INFO: At 2023-03-20 23:48:40 +0000 UTC - event for fork-bomb-container-with-low-priority-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Created: Created container fork-bomb-container-with-low-priority-container Mar 20 23:51:22.425: INFO: At 2023-03-20 23:48:40 +0000 UTC - event for innocent-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Created: Created container innocent-container Mar 20 23:51:22.425: INFO: At 2023-03-20 23:48:40 +0000 UTC - event for innocent-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Started: Started container innocent-container Mar 20 23:51:22.425: INFO: At 2023-03-20 23:50:43 +0000 UTC - event for innocent-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Killing: Stopping container innocent-container Mar 20 23:51:22.425: INFO: At 2023-03-20 23:51:15 +0000 UTC - event for test-admit-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Mar 20 23:51:22.425: INFO: At 2023-03-20 23:51:15 +0000 UTC - event for test-admit-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Created: Created container test-admit-pod Mar 20 23:51:22.425: INFO: At 2023-03-20 23:51:15 +0000 UTC - event for test-admit-pod: {kubelet 
tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Started: Started container test-admit-pod Mar 20 23:51:22.426: INFO: POD NODE PHASE GRACE CONDITIONS Mar 20 23:51:22.426: INFO: test-admit-pod tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 23:51:15 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 23:51:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 23:51:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 23:51:15 +0000 UTC }] Mar 20 23:51:22.426: INFO: Mar 20 23:51:22.437: INFO: Logging node info for node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 Mar 20 23:51:22.438: INFO: Node Info: &Node{ObjectMeta:{tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 8a7dfb05-f333-4f40-9f20-a57ae3eaa3a4 1584 0 2023-03-20 22:54:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-20 22:54:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2023-03-20 23:51:17 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20869787648 0} {<nil>} 20380652Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3841228800 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{18782808853 0} {<nil>} 18782808853 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3579084800 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-20 23:51:17 +0000 UTC,LastTransitionTime:2023-03-20 23:44:09 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-20 23:51:17 +0000 UTC,LastTransitionTime:2023-03-20 23:34:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-20 23:51:17 +0000 UTC,LastTransitionTime:2023-03-20 22:54:28 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-20 23:51:17 +0000 UTC,LastTransitionTime:2023-03-20 23:51:17 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.71,},NodeAddress{Type:Hostname,Address:tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:42e8e67821f5970370e2e1f0c1acea4f,SystemUUID:42e8e678-21f5-9703-70e2-e1f0c1acea4f,BootID:f0916da1-6860-4791-9f27-ed232e503da7,KernelVersion:6.1.11-200.fc37.x86_64,OSImage:Fedora CoreOS 37.20230218.3.0,ContainerRuntimeVersion:cri-o://1.26.0,KubeletVersion:v1.27.0-beta.0.29+c9ff2866682432,KubeProxyVersion:v1.27.0-beta.0.29+c9ff2866682432,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/perl@sha256:c613344cdd31c5055961b078f831ef9d9199fc9111efe6e81bea3f00d78bd979 registry.k8s.io/e2e-test-images/perl@sha256:dd475f8a8c579cb78a13f54342e8569e7f925c8b0ba3a5599dbc55c97a4a76f1 registry.k8s.io/e2e-test-images/perl:5.26],SizeBytes:875791114,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/node-perf/tf-wide-deep@sha256:91ab3b5ee22441c99370944e2e2cb32670db62db433611b4e3780bdee6a8e5a1 registry.k8s.io/e2e-test-images/node-perf/tf-wide-deep@sha256:d7e74e6555abe4b001aadddc248447b472ae35ccbb2c21ca0febace6c4c6d7bb registry.k8s.io/e2e-test-images/node-perf/tf-wide-deep:1.3],SizeBytes:559664987,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/gluster@sha256:033a12fe65438751690b519cebd4135a3485771086bcf437212b7b886bb7956c registry.k8s.io/e2e-test-images/volume/gluster@sha256:c52e01956fec2cf5968b87be8f06ae740ea5d208a3b41fa2c7970b13cc515be5 registry.k8s.io/e2e-test-images/volume/gluster:1.3],SizeBytes:352430719,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9 registry.k8s.io/etcd:3.5.7-0],SizeBytes:297083935,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:034f77d52166fcacb81d6a6db10a4e24644c241896822e6525925859fec09f47 registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:272589700,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost@sha256:da18b4806cfa370df04f9c3faa7d654a22a80467dc4cab92bd1b22b4abe4d5aa registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:129622797,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd@sha256:21e720a020bf8d492b5dd2fe0f31a5205021176f505ecf35b10177f8bfd68980 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:128894228,},ContainerImage{Names:[registry.k8s.io/node-problem-detector/node-problem-detector@sha256:0ce71ef6d759425d22b10e65b439749fe5d13377a188e2fc060b731cdb4e6901 
registry.k8s.io/node-problem-detector/node-problem-detector:v0.8.7],SizeBytes:115035523,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/node-perf/npb-is@sha256:8285539c79625b192a5e33fc3d21edc1a7776fb9afe15fae3b5037a7a8020839 registry.k8s.io/e2e-test-images/node-perf/npb-is@sha256:f941079315f73b182b0f416134253ee87ab51162cbd2e9fcd31bbe726999a977 registry.k8s.io/e2e-test-images/node-perf/npb-is:1.2],SizeBytes:99663088,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/node-perf/npb-ep@sha256:90b5cfc5451428aad4dd6af9960640f2506804d35aa05e83c11bf0a46ac318c8 registry.k8s.io/e2e-test-images/node-perf/npb-ep@sha256:a37ad3a2ccb2b8aa7ced0b7c884888d2cef953cfc9a158e3e8b914d52147091b registry.k8s.io/e2e-test-images/node-perf/npb-ep:1.2],SizeBytes:99661552,},ContainerImage{Names:[gcr.io/cadvisor/cadvisor@sha256:89e6137f068ded2e9a3a012ce71260b9afc57a19305842aa1074239841a539a7 gcr.io/cadvisor/cadvisor:v0.43.0],SizeBytes:87971088,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:dfa235c2d64c29405f40489cf631193b27bec6dcf13cfee9824e449f6ddac051 registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:43877486,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-device-plugin@sha256:3dd0413e5a78f1c2a6484f168ba3daf23ebb0b1141897237e9559db6c5f7101f registry.k8s.io/e2e-test-images/sample-device-plugin@sha256:e84f6ca27c51ddedf812637dd2bcf771ad69fdca1173e5690c372370d0f93c40 registry.k8s.io/e2e-test-images/sample-device-plugin:1.3],SizeBytes:41740418,},ContainerImage{Names:[docker.io/nfvpe/sriov-device-plugin@sha256:518499ed631ff84b43153b8f7624c1aaacb75a721038857509fe690abdf62ddb docker.io/nfvpe/sriov-device-plugin:v3.1],SizeBytes:25603453,},ContainerImage{Names:[registry.k8s.io/nvidia-gpu-device-plugin@sha256:4b036e8844920336fa48f36edeb7d4398f426d6a934ba022848deed2edbf09aa],SizeBytes:19251111,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx@sha256:c42b04e8cf71231fac5dbc833366f7ce2ae78ef8b9df4304fcb83edcd495f69f registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:17244936,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/ipc-utils@sha256:647d092bada3b46c449d875adf31d71c1dd29c244e9cca6a04fddf9d6bcac136 registry.k8s.io/e2e-test-images/ipc-utils@sha256:89fe5eb4c180571eb5c4e4179f1e9bc03497b1f50e45e4f720165617a603d737 registry.k8s.io/e2e-test-images/ipc-utils:1.3],SizeBytes:12251265,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs@sha256:f6b1c4aef11b116c2a065ea60ed071a8f205444f1897bed9aa2e98a5d78cbdae registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7373984,},ContainerImage{Names:[registry.k8s.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 registry.k8s.io/stress:v1],SizeBytes:5502584,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox@sha256:4cdffea536d503c58d7e087bab34a43e63a11dcfa4132b5a1b838885f08fb730 
registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:1374155,},ContainerImage{Names:[registry.k8s.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1319178,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10 registry.k8s.io/pause:3.9],SizeBytes:750414,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:c2280d2f5f56cf9c9a01bb64b2db4651e35efd6d62a54dcfc12049fe6449c5e4 registry.k8s.io/pause:3.6],SizeBytes:690326,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 20 23:51:22.439: INFO: Logging kubelet events for node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 Mar 20 23:51:22.441: INFO: Logging pods the kubelet thinks is on node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 Mar 20 23:51:22.443: INFO: test-admit-pod started at 2023-03-20 23:51:15 +0000 UTC (0+1 container statuses recorded) Mar 20 23:51:22.443: INFO: Container test-admit-pod ready: true, restart count 0 W0320 23:51:22.444482 2678 metrics_grabber.go:111] Can't find any pods in namespace kube-system to grab metrics from Mar 20 23:51:22.460: INFO: Latency metrics for node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/20/23 23:51:22.461 (38ms) < Exit [DeferCleanup (Each)] [sig-node] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] - dump namespaces | framework.go:209 @ 03/20/23 23:51:22.461 (38ms) > Enter [DeferCleanup (Each)] [sig-node] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] - tear down framework | framework.go:206 @ 03/20/23 23:51:22.461 STEP: Destroying namespace "pidpressure-eviction-test-4995" for this suite. - test/e2e/framework/framework.go:351 @ 03/20/23 23:51:22.461 < Exit [DeferCleanup (Each)] [sig-node] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] - tear down framework | framework.go:206 @ 03/20/23 23:51:22.463 (3ms)
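For context on the "Collecting events from namespace" / "dump namespace information after failure" step in the teardown above, here is a minimal client-go sketch of that kind of namespace event dump. It is not the e2e framework's own debug helper; the kubeconfig path and namespace are copied from the log above purely for illustration and should be treated as assumptions.

```go
// Illustrative sketch only: list and print the events recorded in a test
// namespace, roughly what the framework's "Found N events" dump does.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from a kubeconfig file (path taken from the log above; assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/node-e2e-20230320T225220/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// List every event in the test namespace and print type, object, reason, message.
	events, err := cs.CoreV1().Events("pidpressure-eviction-test-4995").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		fmt.Printf("%s %s/%s: %s - %s\n", e.Type, e.InvolvedObject.Kind, e.InvolvedObject.Name, e.Reason, e.Message)
	}
}
```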
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[It\]\s\[sig\-node\]\sPriorityPidEvictionOrdering\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sPIDPressure\;\sPodDisruptionConditions\senabled\s\[NodeFeature\:PodDisruptionConditions\]\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
[FAILED] Timed out after 120.000s.
Expected
    <*errors.errorString | 0xc001cc5410>: 
    NodeCondition: PIDPressure not encountered
    {
        s: "NodeCondition: PIDPressure not encountered",
    }
to be nil
In [It] at: test/e2e_node/eviction_test.go:571 @ 03/20/23 23:47:56.616
from junit_fedora01.xml
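The failure above means the node never reported the PIDPressure condition within the 120s window, despite the fork-bomb pods running. Below is a minimal sketch of the kind of wait this describes: polling the node until the PIDPressure condition turns True or a deadline passes. This is not the suite's own helper from eviction_test.go; the function name, poll interval, and timeout are assumptions for illustration only.

```go
// Illustrative sketch only: poll a node's conditions for PIDPressure=True.
package nodewait

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForPIDPressure polls the named node every two seconds until the
// PIDPressure condition is reported as True or the timeout expires.
func waitForPIDPressure(cs kubernetes.Interface, nodeName string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == v1.NodePIDPressure && cond.Status == v1.ConditionTrue {
				return nil // pressure observed; the eviction test would proceed from here
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("NodeCondition: PIDPressure not encountered")
}
```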
> Enter [BeforeEach] [sig-node] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] - set up framework | framework.go:191 @ 03/20/23 23:45:19.325 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:211 @ 03/20/23 23:45:19.325 STEP: Building a namespace api object, basename pidpressure-eviction-test - test/e2e/framework/framework.go:250 @ 03/20/23 23:45:19.325 Mar 20 23:45:19.328: INFO: Skipping waiting for service account < Exit [BeforeEach] [sig-node] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] - set up framework | framework.go:191 @ 03/20/23 23:45:19.328 (3ms) > Enter [BeforeEach] [sig-node] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] - test/e2e/framework/metrics/init/init.go:33 @ 03/20/23 23:45:19.328 < Exit [BeforeEach] [sig-node] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] - test/e2e/framework/metrics/init/init.go:33 @ 03/20/23 23:45:19.328 (0s) > Enter [BeforeEach] when we run containers that should cause PIDPressure; PodDisruptionConditions enabled [NodeFeature:PodDisruptionConditions] - test/e2e_node/util.go:176 @ 03/20/23 23:45:19.328 STEP: Stopping the kubelet - test/e2e_node/util.go:200 @ 03/20/23 23:45:19.35 Mar 20 23:45:19.382: INFO: Get running kubelet with systemctl: UNIT LOAD ACTIVE SUB DESCRIPTION kubelet-20230320T225220.service loaded active running /tmp/node-e2e-20230320T225220/kubelet --kubeconfig /tmp/node-e2e-20230320T225220/kubeconfig --root-dir /var/lib/kubelet --v 4 --hostname-override tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 --container-runtime-endpoint unix:///var/run/crio/crio.sock --config /tmp/node-e2e-20230320T225220/kubelet-config --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service LOAD = Reflects whether the unit definition was properly loaded. ACTIVE = The high-level unit activation state, i.e. generalization of SUB. SUB = The low-level unit activation state, values depend on unit type. 1 loaded units listed. 
, kubelet-20230320T225220 W0320 23:45:19.442844 2678 util.go:477] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:40366->127.0.0.1:10248: read: connection reset by peer STEP: Starting the kubelet - test/e2e_node/util.go:216 @ 03/20/23 23:45:19.452 W0320 23:45:19.492130 2678 util.go:477] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused < Exit [BeforeEach] when we run containers that should cause PIDPressure; PodDisruptionConditions enabled [NodeFeature:PodDisruptionConditions] - test/e2e_node/util.go:176 @ 03/20/23 23:45:24.506 (5.178s) > Enter [BeforeEach] TOP-LEVEL - test/e2e_node/eviction_test.go:549 @ 03/20/23 23:45:24.506 STEP: setting up pods to be used by tests - test/e2e_node/eviction_test.go:555 @ 03/20/23 23:45:54.595 < Exit [BeforeEach] TOP-LEVEL - test/e2e_node/eviction_test.go:549 @ 03/20/23 23:45:56.616 (32.11s) > Enter [It] should eventually evict all of the correct pods - test/e2e_node/eviction_test.go:563 @ 03/20/23 23:45:56.616 STEP: Waiting for node to have NodeCondition: PIDPressure - test/e2e_node/eviction_test.go:564 @ 03/20/23 23:45:56.616 Mar 20 23:45:56.658: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 3595 Mar 20 23:45:58.674: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 157 Mar 20 23:46:00.688: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 159 Mar 20 23:46:02.709: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 157 Mar 20 23:46:04.725: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 157 Mar 20 23:46:06.739: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 157 Mar 20 23:46:08.757: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 157 Mar 20 23:46:10.772: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 157 Mar 20 23:46:12.788: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 157 Mar 20 23:46:14.804: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 157 Mar 20 23:46:16.825: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 157 Mar 20 23:46:18.841: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 157 Mar 20 23:46:20.858: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 157 Mar 20 23:46:22.872: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 157 Mar 20 23:46:24.887: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:46:26.903: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:46:28.919: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:46:30.934: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:46:32.950: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:46:34.966: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:46:36.980: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:46:39.005: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:46:41.022: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:46:43.035: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:46:45.050: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:46:47.064: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 
23:46:49.079: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:46:51.095: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:46:53.111: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:46:55.126: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:46:57.142: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:46:59.158: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:47:01.174: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:47:03.190: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:47:05.205: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:47:07.225: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:47:09.241: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:47:11.256: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:47:13.271: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:47:15.287: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:47:17.300: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:47:19.315: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:47:21.331: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:47:23.344: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:47:25.369: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:47:27.385: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:47:29.399: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:47:31.413: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:47:33.427: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:47:35.440: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:47:37.453: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:47:39.468: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:47:41.482: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:47:43.496: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:47:45.512: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:47:47.526: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:47:49.539: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:47:51.555: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:47:53.569: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 Mar 20 23:47:55.583: INFO: Node.Rlimit.MaxPID: 28637, Node.Rlimit.RunningProcesses: 156 [FAILED] Timed out after 120.000s. 
Expected <*errors.errorString | 0xc001cc5410>: NodeCondition: PIDPressure not encountered { s: "NodeCondition: PIDPressure not encountered", } to be nil In [It] at: test/e2e_node/eviction_test.go:571 @ 03/20/23 23:47:56.616 < Exit [It] should eventually evict all of the correct pods - test/e2e_node/eviction_test.go:563 @ 03/20/23 23:47:56.616 (2m0.001s) > Enter [AfterEach] TOP-LEVEL - test/e2e_node/eviction_test.go:620 @ 03/20/23 23:47:56.616 STEP: deleting pods - test/e2e_node/eviction_test.go:631 @ 03/20/23 23:47:56.617 STEP: deleting pod: fork-bomb-container-pod - test/e2e_node/eviction_test.go:633 @ 03/20/23 23:47:56.617 STEP: making sure NodeCondition PIDPressure no longer exists on the node - test/e2e_node/eviction_test.go:639 @ 03/20/23 23:47:56.623 STEP: making sure we have all the required images for testing - test/e2e_node/eviction_test.go:648 @ 03/20/23 23:47:56.654 STEP: making sure NodeCondition PIDPressure doesn't exist again after pulling images - test/e2e_node/eviction_test.go:652 @ 03/20/23 23:47:56.654 STEP: making sure we can start a new pod after the test - test/e2e_node/eviction_test.go:660 @ 03/20/23 23:47:56.656 Mar 20 23:47:58.677: INFO: Summary of pod events during the test: I0320 23:47:58.679705 2678 util.go:244] Event(v1.ObjectReference{Kind:"Pod", Namespace:"pidpressure-eviction-test-4657", Name:"fork-bomb-container-pod", UID:"d803fdd0-2913-4d5d-915f-2f9c283e3cb4", APIVersion:"v1", ResourceVersion:"1387", FieldPath:"spec.containers{fork-bomb-container-container}"}): type: 'Normal' reason: 'Pulled' Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine I0320 23:47:58.679731 2678 util.go:244] Event(v1.ObjectReference{Kind:"Pod", Namespace:"pidpressure-eviction-test-4657", Name:"fork-bomb-container-pod", UID:"d803fdd0-2913-4d5d-915f-2f9c283e3cb4", APIVersion:"v1", ResourceVersion:"1387", FieldPath:"spec.containers{fork-bomb-container-container}"}): type: 'Normal' reason: 'Created' Created container fork-bomb-container-container I0320 23:47:58.679743 2678 util.go:244] Event(v1.ObjectReference{Kind:"Pod", Namespace:"pidpressure-eviction-test-4657", Name:"fork-bomb-container-pod", UID:"d803fdd0-2913-4d5d-915f-2f9c283e3cb4", APIVersion:"v1", ResourceVersion:"1387", FieldPath:"spec.containers{fork-bomb-container-container}"}): type: 'Normal' reason: 'Started' Started container fork-bomb-container-container I0320 23:47:58.679754 2678 util.go:244] Event(v1.ObjectReference{Kind:"Pod", Namespace:"pidpressure-eviction-test-4657", Name:"test-admit-pod", UID:"d792efd9-36dd-494a-b248-793b855c2f4b", APIVersion:"v1", ResourceVersion:"1435", FieldPath:"spec.containers{test-admit-pod}"}): type: 'Normal' reason: 'Pulled' Container image "registry.k8s.io/pause:3.9" already present on machine I0320 23:47:58.679763 2678 util.go:244] Event(v1.ObjectReference{Kind:"Pod", Namespace:"pidpressure-eviction-test-4657", Name:"test-admit-pod", UID:"d792efd9-36dd-494a-b248-793b855c2f4b", APIVersion:"v1", ResourceVersion:"1435", FieldPath:"spec.containers{test-admit-pod}"}): type: 'Normal' reason: 'Created' Created container test-admit-pod I0320 23:47:58.679774 2678 util.go:244] Event(v1.ObjectReference{Kind:"Pod", Namespace:"pidpressure-eviction-test-4657", Name:"test-admit-pod", UID:"d792efd9-36dd-494a-b248-793b855c2f4b", APIVersion:"v1", ResourceVersion:"1435", FieldPath:"spec.containers{test-admit-pod}"}): type: 'Normal' reason: 'Started' Started container test-admit-pod Mar 20 23:47:58.679: INFO: Summary of node events during the test: I0320 
23:47:58.686511 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'Starting' Starting kubelet. I0320 23:47:58.686713 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientMemory I0320 23:47:58.686840 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasNoDiskPressure I0320 23:47:58.687001 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientPID' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientPID I0320 23:47:58.687123 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeAllocatableEnforced' Updated Node Allocatable limit across pods I0320 23:47:58.687261 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeReady I0320 23:47:58.687381 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'Starting' Starting kubelet. 
I0320 23:47:58.687516 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientMemory I0320 23:47:58.687664 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasNoDiskPressure I0320 23:47:58.687810 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientPID' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientPID I0320 23:47:58.687941 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeNotReady I0320 23:47:58.688081 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeAllocatableEnforced' Updated Node Allocatable limit across pods I0320 23:47:58.688220 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeReady I0320 23:47:58.688356 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EvictionThresholdMet' Attempting to reclaim ephemeral-storage I0320 23:47:58.688475 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasDiskPressure' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasDiskPressure I0320 23:47:58.688611 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", 
APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'Starting' Starting kubelet. I0320 23:47:58.688736 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientMemory I0320 23:47:58.688865 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasNoDiskPressure I0320 23:47:58.688997 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientPID' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientPID I0320 23:47:58.689128 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeNotReady I0320 23:47:58.689246 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeAllocatableEnforced' Updated Node Allocatable limit across pods I0320 23:47:58.689376 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeReady I0320 23:47:58.689487 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'Starting' Starting kubelet. 
I0320 23:47:58.689595 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientMemory I0320 23:47:58.689727 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasNoDiskPressure I0320 23:47:58.689832 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientPID' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientPID I0320 23:47:58.689954 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeNotReady I0320 23:47:58.690071 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeAllocatableEnforced' Updated Node Allocatable limit across pods I0320 23:47:58.690182 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeReady I0320 23:47:58.690281 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EvictionThresholdMet' Attempting to reclaim ephemeral-storage I0320 23:47:58.690426 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasDiskPressure' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasDiskPressure I0320 23:47:58.690534 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", 
APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'Starting' Starting kubelet. I0320 23:47:58.690646 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientMemory I0320 23:47:58.690755 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasNoDiskPressure I0320 23:47:58.690883 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientPID' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientPID I0320 23:47:58.690991 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeNotReady I0320 23:47:58.691097 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeAllocatableEnforced' Updated Node Allocatable limit across pods I0320 23:47:58.691202 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeReady I0320 23:47:58.691307 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'Starting' Starting kubelet. 
I0320 23:47:58.691413 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientMemory I0320 23:47:58.691518 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasNoDiskPressure I0320 23:47:58.691630 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientPID' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientPID I0320 23:47:58.691736 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeNotReady I0320 23:47:58.691868 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeAllocatableEnforced' Updated Node Allocatable limit across pods I0320 23:47:58.692016 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeReady I0320 23:47:58.692134 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EvictionThresholdMet' Attempting to reclaim inodes I0320 23:47:58.692267 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasDiskPressure' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasDiskPressure I0320 23:47:58.692401 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", 
APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'Starting' Starting kubelet. I0320 23:47:58.692519 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientMemory I0320 23:47:58.692645 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasNoDiskPressure I0320 23:47:58.692765 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientPID' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientPID I0320 23:47:58.692892 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeNotReady I0320 23:47:58.693012 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeAllocatableEnforced' Updated Node Allocatable limit across pods I0320 23:47:58.693146 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeReady I0320 23:47:58.693299 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'Starting' Starting kubelet. 
I0320 23:47:58.693456 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientMemory I0320 23:47:58.693583 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasNoDiskPressure I0320 23:47:58.693689 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientPID' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientPID I0320 23:47:58.693795 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeNotReady I0320 23:47:58.693911 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeAllocatableEnforced' Updated Node Allocatable limit across pods I0320 23:47:58.694018 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeReady I0320 23:47:58.694123 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EvictionThresholdMet' Attempting to reclaim memory I0320 23:47:58.694229 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasInsufficientMemory' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasInsufficientMemory I0320 23:47:58.694341 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", 
APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'Starting' Starting kubelet. I0320 23:47:58.694447 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientMemory I0320 23:47:58.694558 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasNoDiskPressure I0320 23:47:58.694665 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientPID' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientPID I0320 23:47:58.694770 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeNotReady I0320 23:47:58.694898 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeAllocatableEnforced' Updated Node Allocatable limit across pods I0320 23:47:58.695004 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeReady I0320 23:47:58.695111 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'Starting' Starting kubelet. 
I0320 23:47:58.695216 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientMemory I0320 23:47:58.695322 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasNoDiskPressure I0320 23:47:58.695431 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientPID' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeHasSufficientPID I0320 23:47:58.695538 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeNotReady I0320 23:47:58.695658 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeAllocatableEnforced' Updated Node Allocatable limit across pods I0320 23:47:58.695763 2678 util.go:244] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", UID:"tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeReady' Node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 status is now: NodeReady I0320 23:47:58.695895 2678 util.go:244] Event(v1.ObjectReference{Kind:"Pod", Namespace:"pidpressure-eviction-test-4657", Name:"fork-bomb-container-pod", UID:"d803fdd0-2913-4d5d-915f-2f9c283e3cb4", APIVersion:"v1", ResourceVersion:"1387", FieldPath:"spec.containers{fork-bomb-container-container}"}): type: 'Normal' reason: 'Pulled' Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine I0320 23:47:58.696011 2678 util.go:244] Event(v1.ObjectReference{Kind:"Pod", Namespace:"pidpressure-eviction-test-4657", Name:"fork-bomb-container-pod", UID:"d803fdd0-2913-4d5d-915f-2f9c283e3cb4", APIVersion:"v1", ResourceVersion:"1387", FieldPath:"spec.containers{fork-bomb-container-container}"}): type: 'Normal' reason: 'Created' Created container fork-bomb-container-container I0320 23:47:58.696117 2678 util.go:244] Event(v1.ObjectReference{Kind:"Pod", Namespace:"pidpressure-eviction-test-4657", Name:"fork-bomb-container-pod", UID:"d803fdd0-2913-4d5d-915f-2f9c283e3cb4", APIVersion:"v1", 
ResourceVersion:"1387", FieldPath:"spec.containers{fork-bomb-container-container}"}): type: 'Normal' reason: 'Started' Started container fork-bomb-container-container I0320 23:47:58.696222 2678 util.go:244] Event(v1.ObjectReference{Kind:"Pod", Namespace:"pidpressure-eviction-test-4657", Name:"test-admit-pod", UID:"d792efd9-36dd-494a-b248-793b855c2f4b", APIVersion:"v1", ResourceVersion:"1435", FieldPath:"spec.containers{test-admit-pod}"}): type: 'Normal' reason: 'Pulled' Container image "registry.k8s.io/pause:3.9" already present on machine I0320 23:47:58.696333 2678 util.go:244] Event(v1.ObjectReference{Kind:"Pod", Namespace:"pidpressure-eviction-test-4657", Name:"test-admit-pod", UID:"d792efd9-36dd-494a-b248-793b855c2f4b", APIVersion:"v1", ResourceVersion:"1435", FieldPath:"spec.containers{test-admit-pod}"}): type: 'Normal' reason: 'Created' Created container test-admit-pod I0320 23:47:58.696418 2678 util.go:244] Event(v1.ObjectReference{Kind:"Pod", Namespace:"pidpressure-eviction-test-4657", Name:"test-admit-pod", UID:"d792efd9-36dd-494a-b248-793b855c2f4b", APIVersion:"v1", ResourceVersion:"1435", FieldPath:"spec.containers{test-admit-pod}"}): type: 'Normal' reason: 'Started' Started container test-admit-pod < Exit [AfterEach] TOP-LEVEL - test/e2e_node/eviction_test.go:620 @ 03/20/23 23:47:58.696 (2.08s) > Enter [AfterEach] when we run containers that should cause PIDPressure; PodDisruptionConditions enabled [NodeFeature:PodDisruptionConditions] - test/e2e_node/util.go:190 @ 03/20/23 23:47:58.696 STEP: Stopping the kubelet - test/e2e_node/util.go:200 @ 03/20/23 23:47:58.696 Mar 20 23:47:58.729: INFO: Get running kubelet with systemctl: UNIT LOAD ACTIVE SUB DESCRIPTION kubelet-20230320T225220.service loaded active running /tmp/node-e2e-20230320T225220/kubelet --kubeconfig /tmp/node-e2e-20230320T225220/kubeconfig --root-dir /var/lib/kubelet --v 4 --hostname-override tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 --container-runtime-endpoint unix:///var/run/crio/crio.sock --config /tmp/node-e2e-20230320T225220/kubelet-config --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service LOAD = Reflects whether the unit definition was properly loaded. ACTIVE = The high-level unit activation state, i.e. generalization of SUB. SUB = The low-level unit activation state, values depend on unit type. 1 loaded units listed. 
, kubelet-20230320T225220 W0320 23:47:58.793232 2678 util.go:477] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:55838->127.0.0.1:10248: read: connection reset by peer STEP: Starting the kubelet - test/e2e_node/util.go:216 @ 03/20/23 23:47:58.802 W0320 23:47:58.840710 2678 util.go:477] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused < Exit [AfterEach] when we run containers that should cause PIDPressure; PodDisruptionConditions enabled [NodeFeature:PodDisruptionConditions] - test/e2e_node/util.go:190 @ 03/20/23 23:48:03.844 (5.148s) > Enter [AfterEach] [sig-node] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] - test/e2e/framework/node/init/init.go:33 @ 03/20/23 23:48:03.844 Mar 20 23:48:03.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-node] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] - test/e2e/framework/node/init/init.go:33 @ 03/20/23 23:48:03.845 (2ms) > Enter [DeferCleanup (Each)] [sig-node] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] - test/e2e/framework/metrics/init/init.go:35 @ 03/20/23 23:48:03.845 < Exit [DeferCleanup (Each)] [sig-node] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] - test/e2e/framework/metrics/init/init.go:35 @ 03/20/23 23:48:03.845 (0s) > Enter [DeferCleanup (Each)] [sig-node] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] - dump namespaces | framework.go:209 @ 03/20/23 23:48:03.845 STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/20/23 23:48:03.845 STEP: Collecting events from namespace "pidpressure-eviction-test-4657". - test/e2e/framework/debug/dump.go:42 @ 03/20/23 23:48:03.846 STEP: Found 6 events. 
- test/e2e/framework/debug/dump.go:46 @ 03/20/23 23:48:03.847 Mar 20 23:48:03.847: INFO: At 2023-03-20 23:45:54 +0000 UTC - event for fork-bomb-container-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Mar 20 23:48:03.847: INFO: At 2023-03-20 23:45:55 +0000 UTC - event for fork-bomb-container-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Created: Created container fork-bomb-container-container Mar 20 23:48:03.847: INFO: At 2023-03-20 23:45:55 +0000 UTC - event for fork-bomb-container-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Started: Started container fork-bomb-container-container Mar 20 23:48:03.847: INFO: At 2023-03-20 23:47:57 +0000 UTC - event for test-admit-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Mar 20 23:48:03.847: INFO: At 2023-03-20 23:47:57 +0000 UTC - event for test-admit-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Created: Created container test-admit-pod Mar 20 23:48:03.847: INFO: At 2023-03-20 23:47:57 +0000 UTC - event for test-admit-pod: {kubelet tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64} Started: Started container test-admit-pod Mar 20 23:48:03.849: INFO: POD NODE PHASE GRACE CONDITIONS Mar 20 23:48:03.849: INFO: test-admit-pod tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 23:47:56 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 23:47:58 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 23:47:58 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 23:47:56 +0000 UTC }] Mar 20 23:48:03.849: INFO: Mar 20 23:48:03.859: INFO: Logging node info for node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 Mar 20 23:48:03.861: INFO: Node Info: &Node{ObjectMeta:{tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 8a7dfb05-f333-4f40-9f20-a57ae3eaa3a4 1450 0 2023-03-20 22:54:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-20 22:54:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2023-03-20 23:47:59 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20869787648 0} {<nil>} 20380652Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3841228800 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{18782808853 0} {<nil>} 18782808853 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3579084800 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-20 23:47:59 +0000 UTC,LastTransitionTime:2023-03-20 23:44:09 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-20 23:47:59 +0000 UTC,LastTransitionTime:2023-03-20 23:34:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-20 23:47:59 +0000 UTC,LastTransitionTime:2023-03-20 22:54:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-20 23:47:59 +0000 UTC,LastTransitionTime:2023-03-20 23:47:59 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.71,},NodeAddress{Type:Hostname,Address:tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:42e8e67821f5970370e2e1f0c1acea4f,SystemUUID:42e8e678-21f5-9703-70e2-e1f0c1acea4f,BootID:f0916da1-6860-4791-9f27-ed232e503da7,KernelVersion:6.1.11-200.fc37.x86_64,OSImage:Fedora CoreOS 37.20230218.3.0,ContainerRuntimeVersion:cri-o://1.26.0,KubeletVersion:v1.27.0-beta.0.29+c9ff2866682432,KubeProxyVersion:v1.27.0-beta.0.29+c9ff2866682432,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/perl@sha256:c613344cdd31c5055961b078f831ef9d9199fc9111efe6e81bea3f00d78bd979 registry.k8s.io/e2e-test-images/perl@sha256:dd475f8a8c579cb78a13f54342e8569e7f925c8b0ba3a5599dbc55c97a4a76f1 registry.k8s.io/e2e-test-images/perl:5.26],SizeBytes:875791114,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/node-perf/tf-wide-deep@sha256:91ab3b5ee22441c99370944e2e2cb32670db62db433611b4e3780bdee6a8e5a1 registry.k8s.io/e2e-test-images/node-perf/tf-wide-deep@sha256:d7e74e6555abe4b001aadddc248447b472ae35ccbb2c21ca0febace6c4c6d7bb registry.k8s.io/e2e-test-images/node-perf/tf-wide-deep:1.3],SizeBytes:559664987,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/gluster@sha256:033a12fe65438751690b519cebd4135a3485771086bcf437212b7b886bb7956c registry.k8s.io/e2e-test-images/volume/gluster@sha256:c52e01956fec2cf5968b87be8f06ae740ea5d208a3b41fa2c7970b13cc515be5 registry.k8s.io/e2e-test-images/volume/gluster:1.3],SizeBytes:352430719,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9 
registry.k8s.io/etcd:3.5.7-0],SizeBytes:297083935,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:034f77d52166fcacb81d6a6db10a4e24644c241896822e6525925859fec09f47 registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:272589700,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost@sha256:da18b4806cfa370df04f9c3faa7d654a22a80467dc4cab92bd1b22b4abe4d5aa registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:129622797,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd@sha256:21e720a020bf8d492b5dd2fe0f31a5205021176f505ecf35b10177f8bfd68980 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:128894228,},ContainerImage{Names:[registry.k8s.io/node-problem-detector/node-problem-detector@sha256:0ce71ef6d759425d22b10e65b439749fe5d13377a188e2fc060b731cdb4e6901 registry.k8s.io/node-problem-detector/node-problem-detector:v0.8.7],SizeBytes:115035523,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/node-perf/npb-is@sha256:8285539c79625b192a5e33fc3d21edc1a7776fb9afe15fae3b5037a7a8020839 registry.k8s.io/e2e-test-images/node-perf/npb-is@sha256:f941079315f73b182b0f416134253ee87ab51162cbd2e9fcd31bbe726999a977 registry.k8s.io/e2e-test-images/node-perf/npb-is:1.2],SizeBytes:99663088,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/node-perf/npb-ep@sha256:90b5cfc5451428aad4dd6af9960640f2506804d35aa05e83c11bf0a46ac318c8 registry.k8s.io/e2e-test-images/node-perf/npb-ep@sha256:a37ad3a2ccb2b8aa7ced0b7c884888d2cef953cfc9a158e3e8b914d52147091b registry.k8s.io/e2e-test-images/node-perf/npb-ep:1.2],SizeBytes:99661552,},ContainerImage{Names:[gcr.io/cadvisor/cadvisor@sha256:89e6137f068ded2e9a3a012ce71260b9afc57a19305842aa1074239841a539a7 gcr.io/cadvisor/cadvisor:v0.43.0],SizeBytes:87971088,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:dfa235c2d64c29405f40489cf631193b27bec6dcf13cfee9824e449f6ddac051 registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:43877486,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-device-plugin@sha256:3dd0413e5a78f1c2a6484f168ba3daf23ebb0b1141897237e9559db6c5f7101f registry.k8s.io/e2e-test-images/sample-device-plugin@sha256:e84f6ca27c51ddedf812637dd2bcf771ad69fdca1173e5690c372370d0f93c40 registry.k8s.io/e2e-test-images/sample-device-plugin:1.3],SizeBytes:41740418,},ContainerImage{Names:[docker.io/nfvpe/sriov-device-plugin@sha256:518499ed631ff84b43153b8f7624c1aaacb75a721038857509fe690abdf62ddb docker.io/nfvpe/sriov-device-plugin:v3.1],SizeBytes:25603453,},ContainerImage{Names:[registry.k8s.io/nvidia-gpu-device-plugin@sha256:4b036e8844920336fa48f36edeb7d4398f426d6a934ba022848deed2edbf09aa],SizeBytes:19251111,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx@sha256:c42b04e8cf71231fac5dbc833366f7ce2ae78ef8b9df4304fcb83edcd495f69f registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:17244936,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/ipc-utils@sha256:647d092bada3b46c449d875adf31d71c1dd29c244e9cca6a04fddf9d6bcac136 
registry.k8s.io/e2e-test-images/ipc-utils@sha256:89fe5eb4c180571eb5c4e4179f1e9bc03497b1f50e45e4f720165617a603d737 registry.k8s.io/e2e-test-images/ipc-utils:1.3],SizeBytes:12251265,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs@sha256:f6b1c4aef11b116c2a065ea60ed071a8f205444f1897bed9aa2e98a5d78cbdae registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7373984,},ContainerImage{Names:[registry.k8s.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 registry.k8s.io/stress:v1],SizeBytes:5502584,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox@sha256:4cdffea536d503c58d7e087bab34a43e63a11dcfa4132b5a1b838885f08fb730 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:1374155,},ContainerImage{Names:[registry.k8s.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1319178,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10 registry.k8s.io/pause:3.9],SizeBytes:750414,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:c2280d2f5f56cf9c9a01bb64b2db4651e35efd6d62a54dcfc12049fe6449c5e4 registry.k8s.io/pause:3.6],SizeBytes:690326,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 20 23:48:03.861: INFO: Logging kubelet events for node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 Mar 20 23:48:03.862: INFO: Logging pods the kubelet thinks is on node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 Mar 20 23:48:03.865: INFO: test-admit-pod started at 2023-03-20 23:47:56 +0000 UTC (0+1 container statuses recorded) Mar 20 23:48:03.865: INFO: Container test-admit-pod ready: true, restart count 0 W0320 23:48:03.866166 2678 metrics_grabber.go:111] Can't find any pods in namespace kube-system to grab metrics from Mar 20 23:48:03.881: INFO: Latency metrics for node tmp-node-e2e-ccf4f8b3-fedora-coreos-37-20230218-3-0-gcp-x86-64 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/20/23 23:48:03.882 (36ms) < Exit [DeferCleanup (Each)] [sig-node] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] - dump namespaces | framework.go:209 @ 03/20/23 23:48:03.882 (36ms) > Enter [DeferCleanup (Each)] [sig-node] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] - tear down framework | framework.go:206 @ 03/20/23 23:48:03.882 STEP: Destroying namespace "pidpressure-eviction-test-4657" for this suite. - test/e2e/framework/framework.go:351 @ 03/20/23 23:48:03.882 < Exit [DeferCleanup (Each)] [sig-node] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] - tear down framework | framework.go:206 @ 03/20/23 23:48:03.884 (3ms)
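The two health-check warnings above come from the harness polling the kubelet's healthz endpoint on 127.0.0.1:10248 while it stops and restarts the transient kubelet unit; the probe first sees a connection reset and then a connection refused until the unit comes back. When debugging a restart that never recovers, a rough manual equivalent of that probe is the shell sketch below (the 60-second budget, the curl invocation, and the journalctl fallback are illustrative assumptions, not what the harness's util.go actually runs):

# Poll the kubelet healthz endpoint the harness checks above until it answers,
# or give up after ~60s and dump the tail of the transient kubelet unit's journal.
for i in $(seq 1 60); do
  if curl -sf http://127.0.0.1:10248/healthz >/dev/null; then
    echo "kubelet healthy after ${i}s"
    exit 0
  fi
  sleep 1
done
echo "kubelet still unhealthy; recent journal follows:" >&2
journalctl -u 'kubelet-*' --no-pager | tail -n 50 >&2
exit 1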
error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup -vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-jkns-gke-ubuntu --zone=us-west1-b --ssh-user=core --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=1 --focus="\[NodeFeature:Eviction\]" --test_args=--container-runtime-endpoint=unix:///var/run/crio/crio.sock --container-runtime-process-name=/usr/local/bin/crio --container-runtime-pid-file= --kubelet-flags="--cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service" --extra-log="{\"name\": \"crio.log\", \"journalctl\": [\"-u\", \"crio\"]}" --test-timeout=5h0m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/crio/latest/image-config-cgrpv1.yaml: exit status 1
from junit_runner.xml
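The runner invocation above is what the CI job executes against a GCE VM. For iterating on just the eviction specs, a rough local equivalent is the sketch below; it assumes the repo's make test-e2e-node wrapper with FOCUS, SKIP, CONTAINER_RUNTIME_ENDPOINT and PARALLELISM variables behaving as described in the sig-node e2e docs, a CRI-O socket at the same path the job uses, and an illustrative SKIP pattern that is not part of the job configuration:

# Run only the [NodeFeature:Eviction] node-e2e specs against the local node,
# pointing the suite at the same CRI-O endpoint as the CI job, one spec at a time.
make test-e2e-node \
  FOCUS="\[NodeFeature:Eviction\]" \
  SKIP="\[Flaky\]" \
  CONTAINER_RUNTIME_ENDPOINT="unix:///var/run/crio/crio.sock" \
  PARALLELISM=1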
E2eNode Suite [It] [sig-node] ImageGCNoEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
E2eNode Suite [It] [sig-node] LocalStorageCapacityIsolationMemoryBackedVolumeEviction [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolation][NodeFeature:Eviction] when we run containers that should cause evictions due to pod local storage violations should eventually evict all of the correct pods
E2eNode Suite [It] [sig-node] LocalStorageEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
E2eNode Suite [It] [sig-node] PriorityLocalStorageEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
E2eNode Suite [It] [sig-node] PriorityMemoryEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause MemoryPressure should eventually evict all of the correct pods
E2eNode Suite [SynchronizedAfterSuite]
E2eNode Suite [SynchronizedBeforeSuite]
kubetest Deferred TearDown
kubetest DumpClusterLogs
kubetest GetDeployer
kubetest Prepare
kubetest TearDown
kubetest TearDown Previous
kubetest Timeout
kubetest Up
kubetest test setup
E2eNode Suite [It] [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: sctp [LinuxOnly][Feature:SCTPConnectivity]
E2eNode Suite [It] [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-network] Networking Granular Checks: Pods should function for node-pod communication: sctp [LinuxOnly][Feature:SCTPConnectivity]
E2eNode Suite [It] [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] AppArmor [Feature:AppArmor][NodeFeature:AppArmor] when running without AppArmor should reject a pod with an AppArmor profile
E2eNode Suite [It] [sig-node] CPU Manager Metrics [Serial][Feature:CPUManager] when querying /metrics should not report any pinning failures when the cpumanager allocation is expected to succeed
E2eNode Suite [It] [sig-node] CPU Manager Metrics [Serial][Feature:CPUManager] when querying /metrics should report pinning failures when the cpumanager allocation is known to fail
E2eNode Suite [It] [sig-node] CPU Manager Metrics [Serial][Feature:CPUManager] when querying /metrics should report zero pinning counters after a fresh restart
E2eNode Suite [It] [sig-node] CPU Manager [Serial] [Feature:CPUManager] With kubeconfig updated with static CPU Manager policy run the CPU Manager tests should assign CPUs as expected based on the Pod spec
E2eNode Suite [It] [sig-node] CPU Manager [Serial] [Feature:CPUManager] With kubeconfig updated with static CPU Manager policy run the CPU Manager tests should assign CPUs as expected with enhanced policy based on strict SMT alignment
E2eNode Suite [It] [sig-node] Checkpoint Container [NodeFeature:CheckpointContainer] will checkpoint a container out of a pod
E2eNode Suite [It] [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
E2eNode Suite [It] [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]
E2eNode Suite [It] [sig-node] ConfigMap should update ConfigMap successfully
E2eNode Suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart https hook properly [MinimumKubeletVersion:1.23] [NodeConformance]
E2eNode Suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop https hook properly [MinimumKubeletVersion:1.23] [NodeConformance]
E2eNode Suite [It] [sig-node] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup pod infra containers oom-score-adj should be -998 and best effort container's should be 1000
E2eNode Suite [It] [sig-node] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup Kubelet's oom-score-adj should be -999
E2eNode Suite [It] [sig-node] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup burstable container's oom-score-adj should be between [2, 1000)
E2eNode Suite [It] [sig-node] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup container runtime's oom-score-adj should be -999
E2eNode Suite [It] [sig-node] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup guaranteed container's oom-score-adj should be -998
E2eNode Suite [It] [sig-node] Container Runtime Conformance Test container runtime conformance blackbox test when running a container with a new image should be able to pull from private registry with credential provider [NodeConformance]
E2eNode Suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set [NodeConformance]
E2eNode Suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]
E2eNode Suite [It] [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]
E2eNode Suite [It] [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]
E2eNode Suite [It] [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]
E2eNode Suite [It] [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] ContainerLogPath [NodeConformance] Pod with a container printed log to stdout should print log to correct cri log path
E2eNode Suite [It] [sig-node] ContainerLogPath [NodeConformance] Pod with a container printed log to stdout should print log to correct log path
E2eNode Suite [It] [sig-node] ContainerLogRotation [Slow] [Serial] [Disruptive] when a container generates a lot of log should be rotated and limited to a fixed amount of files
E2eNode Suite [It] [sig-node] Containers should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Containers should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] CriticalPod [Serial] [Disruptive] [NodeFeature:CriticalPod] when we need to admit a critical pod [Flaky] should be able to create and delete a critical pod
E2eNode Suite [It] [sig-node] Deleted pods handling [NodeConformance] Should transition to Failed phase a deleted pod if non-zero exit codes Restart policy Always
E2eNode Suite [It] [sig-node] Deleted pods handling [NodeConformance] Should transition to Failed phase a deleted pod if non-zero exit codes Restart policy Never
E2eNode Suite [It] [sig-node] Deleted pods handling [NodeConformance] Should transition to Failed phase a deleted pod if non-zero exit codes Restart policy OnFailure
E2eNode Suite [It] [sig-node] Deleted pods handling [NodeConformance] Should transition to Failed phase a pod which is deleted while pending
E2eNode Suite [It] [sig-node] Deleted pods handling [NodeConformance] Should transition to Succeeded phase a deleted pod when containers complete with 0 exit code Restart policy Always
E2eNode Suite [It] [sig-node] Deleted pods handling [NodeConformance] Should transition to Succeeded phase a deleted pod when containers complete with 0 exit code Restart policy Never
E2eNode Suite [It] [sig-node] Deleted pods handling [NodeConformance] Should transition to Succeeded phase a deleted pod when containers complete with 0 exit code Restart policy OnFailure
E2eNode Suite [It] [sig-node] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval
E2eNode Suite [It] [sig-node] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [It] [sig-node] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 100ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [It] [sig-node] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 300ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [It] [sig-node] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 35 pods with 0s interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [It] [sig-node] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 35 pods with 100ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [It] [sig-node] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 35 pods with 300ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [It] [sig-node] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 90 pods with 0s interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [It] [sig-node] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 90 pods with 100ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [It] [sig-node] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 90 pods with 300ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [It] [sig-node] Density [Serial] [Slow] create a batch of pods with higher API QPS latency/resource should be within limit when create 90 pods with 0s interval (QPS 60) [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [It] [sig-node] Density [Serial] [Slow] create a batch of pods with higher API QPS latency/resource should be within limit when create 90 pods with 100ms interval (QPS 60) [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [It] [sig-node] Density [Serial] [Slow] create a batch of pods with higher API QPS latency/resource should be within limit when create 90 pods with 300ms interval (QPS 60) [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [It] [sig-node] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods
E2eNode Suite [It] [sig-node] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods [Benchmark][NodeSpeicalFeature:Benchmark]
E2eNode Suite [It] [sig-node] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 30 pods with 50 background pods [Benchmark][NodeSpeicalFeature:Benchmark]
E2eNode Suite [It] [sig-node] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 50 pods with 50 background pods [Benchmark][NodeSpeicalFeature:Benchmark]
E2eNode Suite [It] [sig-node] Device Manager [Serial] [Feature:DeviceManager][NodeFeature:DeviceManager] With SRIOV devices in the system should be able to recover V1 (aka pre-1.20) checkpoint data and reject pods before device re-registration
E2eNode Suite [It] [sig-node] Device Manager [Serial] [Feature:DeviceManager][NodeFeature:DeviceManager] With SRIOV devices in the system should be able to recover V1 (aka pre-1.20) checkpoint data and update topology info on device re-registration
E2eNode Suite [It] [sig-node] Device Plugin [Feature:DevicePluginProbe][NodeFeature:DevicePluginProbe][Serial] DevicePlugin [Serial] [Disruptive] Can schedule a pod that requires a device
E2eNode Suite [It] [sig-node] Device Plugin [Feature:DevicePluginProbe][NodeFeature:DevicePluginProbe][Serial] DevicePlugin [Serial] [Disruptive] Keeps device plugin assignments across pod and kubelet restarts
E2eNode Suite [It] [sig-node] Device Plugin [Feature:DevicePluginProbe][NodeFeature:DevicePluginProbe][Serial] DevicePlugin [Serial] [Disruptive] Keeps device plugin assignments after the device plugin has been re-registered
E2eNode Suite [It] [sig-node] Downward API [Serial] [Disruptive] [NodeFeature:DownwardAPIHugePages] Downward API tests for hugepages should provide container's limits.hugepages-<pagesize> and requests.hugepages-<pagesize> as env vars
E2eNode Suite [It] [sig-node] Downward API [Serial] [Disruptive] [NodeFeature:DownwardAPIHugePages] Downward API tests for hugepages should provide default limits.hugepages-<pagesize> from node allocatable
E2eNode Suite [It] [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
E2eNode Suite [It] [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Ephemeral Containers [NodeConformance] will start an ephemeral container in an existing pod [Conformance]
E2eNode Suite [It] [sig-node] GarbageCollect [Serial][NodeFeature:GarbageCollect] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container
E2eNode Suite [It] [sig-node] GarbageCollect [Serial][NodeFeature:GarbageCollect] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container
E2eNode Suite [It] [sig-node] GarbageCollect [Serial][NodeFeature:GarbageCollect] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container
E2eNode Suite [It] [sig-node] GracefulNodeShutdown [Serial] [NodeFeature:GracefulNodeShutdown] [NodeFeature:GracefulNodeShutdownBasedOnPodPriority] graceful node shutdown when PodDisruptionConditions are enabled [NodeFeature:PodDisruptionConditions] should add the DisruptionTarget pod failure condition to the evicted pods
E2eNode Suite [It] [sig-node] GracefulNodeShutdown [Serial] [NodeFeature:GracefulNodeShutdown] [NodeFeature:GracefulNodeShutdownBasedOnPodPriority] when gracefully shutting down after restart dbus, should be able to gracefully shutdown
E2eNode Suite [It] [sig-node] GracefulNodeShutdown [Serial] [NodeFeature:GracefulNodeShutdown] [NodeFeature:GracefulNodeShutdownBasedOnPodPriority] when gracefully shutting down should be able to gracefully shutdown pods with various grace periods
E2eNode Suite [It] [sig-node] GracefulNodeShutdown [Serial] [NodeFeature:GracefulNodeShutdown] [NodeFeature:GracefulNodeShutdownBasedOnPodPriority] when gracefully shutting down should be able to handle a cancelled shutdown
E2eNode Suite [It] [sig-node] GracefulNodeShutdown [Serial] [NodeFeature:GracefulNodeShutdown] [NodeFeature:GracefulNodeShutdownBasedOnPodPriority] when gracefully shutting down with Pod priority should be able to gracefully shutdown pods with various grace periods
E2eNode Suite [It] [sig-node] Hostname of Pod [NodeConformance] a pod configured to set FQDN as hostname will remain in Pending state generating FailedCreatePodSandBox events when the FQDN is longer than 64 bytes
E2eNode Suite [It] [sig-node] Hostname of Pod [NodeConformance] a pod with subdomain field has FQDN, hostname is shortname
E2eNode Suite [It] [sig-node] Hostname of Pod [NodeConformance] a pod with subdomain field has FQDN, when setHostnameAsFQDN is set to true, the FQDN is set as hostname
E2eNode Suite [It] [sig-node] Hostname of Pod [NodeConformance] a pod without FQDN is not affected by SetHostnameAsFQDN field
E2eNode Suite [It] [sig-node] Hostname of Pod [NodeConformance] a pod without subdomain field does not have FQDN
E2eNode Suite [It] [sig-node] HugePages [Serial] [Feature:HugePages][NodeSpecialFeature:HugePages] should add resources for new huge page sizes on kubelet restart
E2eNode Suite [It] [sig-node] HugePages [Serial] [Feature:HugePages][NodeSpecialFeature:HugePages] should remove resources for huge page sizes no longer supported
E2eNode Suite [It] [sig-node] HugePages [Serial] [Feature:HugePages][NodeSpecialFeature:HugePages] when start the pod with the resources requests that contain multiple hugepages resources should set correct hugetlb mount and limit under the container cgroup
E2eNode Suite [It] [sig-node] HugePages [Serial] [Feature:HugePages][NodeSpecialFeature:HugePages] when start the pod with the resources requests that contain only one hugepages resource with the backward compatible API should set correct hugetlb mount and limit under the container cgroup
E2eNode Suite [It] [sig-node] HugePages [Serial] [Feature:HugePages][NodeSpecialFeature:HugePages] when start the pod with the resources requests that contain only one hugepages resource with the new API should set correct hugetlb mount and limit under the container cgroup
E2eNode Suite [It] [sig-node] ImageCredentialProvider [Feature:KubeletCredentialProviders] should be able to create pod with image credentials fetched from external credential provider
E2eNode Suite [It] [sig-node] ImageID [NodeFeature: ImageID] should be set to the manifest digest (from RepoDigests) when available
E2eNode Suite [It] [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
E2eNode Suite [It] [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
E2eNode Suite [It] [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
E2eNode Suite [It] [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
E2eNode Suite [It] [sig-node] InodeEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
E2eNode Suite [It] [sig-node] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a BestEffort Pod Pod containers should have been created under the BestEffort cgroup
E2eNode Suite [It] [sig-node] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a Burstable Pod Pod containers should have been created under the Burstable cgroup
E2eNode Suite [It] [sig-node] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a Guaranteed Pod Pod containers should have been created under the cgroup-root
E2eNode Suite [It] [sig-node] Kubelet Cgroup Manager QOS containers On enabling QOS cgroup hierarchy Top level QoS containers should have been created [NodeConformance]
E2eNode Suite [It] [sig-node] Kubelet PodOverhead handling [LinuxOnly] PodOverhead cgroup accounting On running pod with PodOverhead defined Pod cgroup should be sum of overhead and resource limits
E2eNode Suite [It] [sig-node] Kubelet Volume Manager Volume Manager On termination of pod with memory backed volume should remove the volume from the node [NodeConformance]
E2eNode Suite [It] [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Kubelet when scheduling an agnhost Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Lease lease API should be available [Conformance]
E2eNode Suite [It] [sig-node] LocalStorageCapacityIsolationFSQuotaMonitoring [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolationQuota][NodeFeature:LSCIQuotaMonitoring] when we run containers that should cause use quotas for LSCI monitoring (quotas enabled: false) should eventually evict all of the correct pods
E2eNode Suite [It] [sig-node] LocalStorageCapacityIsolationFSQuotaMonitoring [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolationQuota][NodeFeature:LSCIQuotaMonitoring] when we run containers that should cause use quotas for LSCI monitoring (quotas enabled: true) should eventually evict all of the correct pods
E2eNode Suite [It] [sig-node] LocalStorageSoftEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
E2eNode Suite [It] [sig-node] Lock contention [Slow] [Disruptive] [NodeSpecialFeature:LockContention] Kubelet should stop when the test acquires the lock on lock file and restart once the lock is released
E2eNode Suite [It] [sig-node] Memory Manager [Disruptive] [Serial] [Feature:MemoryManager] with none policy should not report any memory data during request to pod resources GetAllocatableResources
E2eNode Suite [It] [sig-node] Memory Manager [Disruptive] [Serial] [Feature:MemoryManager] with none policy should not report any memory data during request to pod resources List
E2eNode Suite [It] [sig-node] Memory Manager [Disruptive] [Serial] [Feature:MemoryManager] with none policy should succeed to start the pod
E2eNode Suite [It] [sig-node] Memory Manager [Disruptive] [Serial] [Feature:MemoryManager] with static policy should report memory data during request to pod resources GetAllocatableResources
E2eNode Suite [It] [sig-node] Memory Manager [Disruptive] [Serial] [Feature:MemoryManager] with static policy when guaranteed pod has init and app containers should succeed to start the pod
E2eNode Suite [It] [sig-node] Memory Manager [Disruptive] [Serial] [Feature:MemoryManager] with static policy when guaranteed pod has only app containers should succeed to start the pod
E2eNode Suite [It] [sig-node] Memory Manager [Disruptive] [Serial] [Feature:MemoryManager] with static policy when guaranteed pod memory request is bigger than free memory on each NUMA node should be rejected
E2eNode Suite [It] [sig-node] Memory Manager [Disruptive] [Serial] [Feature:MemoryManager] with static policy when multiple guaranteed pods started should report memory data for each guaranteed pod and container during request to pod resources List
E2eNode Suite [It] [sig-node] Memory Manager [Disruptive] [Serial] [Feature:MemoryManager] with static policy when multiple guaranteed pods started should succeed to start all pods
E2eNode Suite [It] [sig-node] MemoryAllocatableEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause MemoryPressure should eventually evict all of the correct pods
E2eNode Suite [It] [sig-node] MirrorPod when create a mirror pod should be recreated when mirror pod forcibly deleted [NodeConformance]
E2eNode Suite [It] [sig-node] MirrorPod when create a mirror pod should be recreated when mirror pod gracefully deleted [NodeConformance]
E2eNode Suite [It] [sig-node] MirrorPod when create a mirror pod should be updated when static pod updated [NodeConformance]
E2eNode Suite [It] [sig-node] MirrorPod when create a mirror pod without changes should successfully recreate when file is removed and recreated [NodeConformance]
E2eNode Suite [It] [sig-node] MirrorPod when recreating a static pod it should launch successfully even if it temporarily failed termination due to volume failing to unmount [NodeConformance] [Serial]
E2eNode Suite [It] [sig-node] MirrorPodWithGracePeriod when create a mirror pod and the container runtime is temporarily down during pod termination [NodeConformance] [Serial] [Disruptive] the mirror pod should terminate successfully
E2eNode Suite [It] [sig-node] MirrorPodWithGracePeriod when create a mirror pod mirror pod termination should satisfy grace period when static pod is deleted [NodeConformance]
E2eNode Suite [It] [sig-node] MirrorPodWithGracePeriod when create a mirror pod mirror pod termination should satisfy grace period when static pod is updated [NodeConformance]
E2eNode Suite [It] [sig-node] MirrorPodWithGracePeriod when create a mirror pod should update a static pod when the static pod is updated multiple times during the graceful termination period [NodeConformance]
E2eNode Suite [It] [sig-node] Node Container Manager [Serial] Validate Node Allocatable [NodeFeature:NodeAllocatable] sets up the node and runs the test
E2eNode Suite [It] [sig-node] Node Performance Testing [Serial] [Slow] Run node performance testing with pre-defined workloads NAS parallel benchmark (NPB) suite - Embarrassingly Parallel (EP) workload
E2eNode Suite [It] [sig-node] Node Performance Testing [Serial] [Slow] Run node performance testing with pre-defined workloads NAS parallel benchmark (NPB) suite - Integer Sort (IS) workload
E2eNode Suite [It] [sig-node] Node Performance Testing [Serial] [Slow] Run node performance testing with pre-defined workloads TensorFlow workload
E2eNode Suite [It] [sig-node] NodeLease NodeLease should have OwnerReferences set
E2eNode Suite [It] [sig-node] NodeLease NodeLease the kubelet should create and update a lease in the kube-node-lease namespace
E2eNode Suite [It] [sig-node] NodeLease NodeLease the kubelet should report node status infrequently
E2eNode Suite [It] [sig-node] NodeProblemDetector [NodeFeature:NodeProblemDetector] [Serial] SystemLogMonitor should generate node condition and events for corresponding errors
E2eNode Suite [It] [sig-node] OOMKiller [LinuxOnly] [NodeConformance] The containers terminated by OOM killer should have the reason set to OOMKilled
E2eNode Suite [It] [sig-node] OSArchLabelReconciliation [Serial] [Slow] [Disruptive] Kubelet should reconcile the OS and Arch labels when restarted
E2eNode Suite [It] [sig-node] OSArchLabelReconciliation [Serial] [Slow] [Disruptive] Kubelet should reconcile the OS and Arch labels when running
E2eNode Suite [It] [sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources] when querying /metrics [NodeConformance] should report the values for the podresources metrics
E2eNode Suite [It] [sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources] with SRIOV devices in the system with CPU manager None policy should return the expected responses
E2eNode Suite [It] [sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources] with SRIOV devices in the system with CPU manager Static policy should return the expected responses
E2eNode Suite [It] [sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources] with a topology-unaware device plugin, which reports resources w/o hardware topology with CPU manager Static policy should return proper podresources the same as before the restart of kubelet
E2eNode Suite [It] [sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources] with the builtin rate limit values should hit throttling when calling podresources List in a tight loop
E2eNode Suite [It] [sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources] without SRIOV devices in the system with CPU manager None policy should return the expected responses
E2eNode Suite [It] [sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources] without SRIOV devices in the system with CPU manager Static policy should return the expected responses
E2eNode Suite [It] [sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources] without SRIOV devices in the system with disabled KubeletPodResourcesGetAllocatable feature gate should return the expected error with the feature gate disabled
E2eNode Suite [It] [sig-node] Pod conditions managed by Kubelet including PodHasNetwork condition [Serial] [Feature:PodHasNetwork] a pod failing to mount volumes and with init containers should report just the scheduled condition set
E2eNode Suite [It] [sig-node] Pod conditions managed by Kubelet including PodHasNetwork condition [Serial] [Feature:PodHasNetwork] a pod failing to mount volumes and without init containers should report scheduled and initialized conditions set
E2eNode Suite [It] [sig-node] Pod conditions managed by Kubelet including PodHasNetwork condition [Serial] [Feature:PodHasNetwork] a pod with init containers should report all conditions set in expected order after the pod is up
E2eNode Suite [It] [sig-node] Pod conditions managed by Kubelet including PodHasNetwork condition [Serial] [Feature:PodHasNetwork] a pod without init containers should report all conditions set in expected order after the pod is up
E2eNode Suite [It] [sig-node] Pod conditions managed by Kubelet without PodHasNetwork condition a pod failing to mount volumes and with init containers should report just the scheduled condition set
E2eNode Suite [It] [sig-node] Pod conditions managed by Kubelet without PodHasNetwork condition a pod failing to mount volumes and without init containers should report scheduled and initialized conditions set
E2eNode Suite [It] [sig-node] Pod conditions managed by Kubelet without PodHasNetwork condition a pod with init containers should report all conditions set in expected order after the pod is up
E2eNode Suite [It] [sig-node] Pod conditions managed by Kubelet without PodHasNetwork condition a pod without init containers should report all conditions set in expected order after the pod is up
E2eNode Suite [It] [sig-node] PodOSRejection [NodeConformance] Kubelet should reject pod when the node OS doesn't match pod's OS
E2eNode Suite [It] [sig-node] PodPidsLimit [Serial] With config updated with pids limits should set pids.max for Pod
E2eNode Suite [It] [sig-node] PodTemplates should delete a collection of pod templates [Conformance]
E2eNode Suite [It] [sig-node] PodTemplates should replace a pod template [Conformance]
E2eNode Suite [It] [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]
E2eNode Suite [It] [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Pods should be updated [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
E2eNode Suite [It] [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Pods should delete a collection of pods [Conformance]
E2eNode Suite [It] [sig-node] Pods should get a host IP [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
E2eNode Suite [It] [sig-node] Pods should patch a pod status [Conformance]
E2eNode Suite [It] [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]
E2eNode Suite [It] [sig-node] Pods should support pod readiness gates [NodeConformance]
E2eNode Suite [It] [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]
E2eNode Suite [It] [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it
E2eNode Suite [It] [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Probing container should *not* be restarted with a GRPC liveness probe [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe
E2eNode Suite [It] [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Probing container should be ready immediately after startupProbe succeeds
E2eNode Suite [It] [sig-node] Probing container should be restarted by liveness probe after startup probe enables it
E2eNode Suite [It] [sig-node] Probing container should be restarted startup probe fails
E2eNode Suite [It] [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Probing container should be restarted with a GRPC liveness probe [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout
E2eNode Suite [It] [sig-node] Probing container should be restarted with a local redirect http liveness probe
E2eNode Suite [It] [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]
E2eNode Suite [It] [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Probing container should mark readiness on pods to false and disable liveness probes while pod is in progress of terminating
E2eNode Suite [It] [sig-node] Probing container should mark readiness on pods to false while pod is in progress of terminating when a pod has a readiness probe
E2eNode Suite [It] [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]
E2eNode Suite [It] [sig-node] Probing container should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod]
E2eNode Suite [It] [sig-node] Probing container should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]
E2eNode Suite [It] [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 0 pods per node [Benchmark]
E2eNode Suite [It] [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 10 pods per node
E2eNode Suite [It] [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 10 pods per node [Benchmark]
E2eNode Suite [It] [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 35 pods per node [Benchmark]
E2eNode Suite [It] [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 90 pods per node [Benchmark]
E2eNode Suite [It] [sig-node] ResourceMetricsAPI [NodeFeature:ResourceMetrics] when querying /resource/metrics should report resource usage through the resource metrics api
E2eNode Suite [It] [sig-node] Restart [Serial] [Slow] [Disruptive] Container Runtime Network should recover from ip leak
E2eNode Suite [It] [sig-node] Restart [Serial] [Slow] [Disruptive] Dbus should continue to run pods after a restart
E2eNode Suite [It] [sig-node] Restart [Serial] [Slow] [Disruptive] Kubelet should correctly account for terminated pods after restart
E2eNode Suite [It] [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]
E2eNode Suite [It] [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with an unconfigured handler [NodeFeature:RuntimeHandler]
E2eNode Suite [It] [sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] RuntimeClass should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with a configured handler [NodeFeature:RuntimeHandler]
E2eNode Suite [It] [sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] SeccompDefault [Serial] [Feature:SeccompDefault] [LinuxOnly] with SeccompDefault enabled should use the default seccomp profile when unspecified
E2eNode Suite [It] [sig-node] SeccompDefault [Serial] [Feature:SeccompDefault] [LinuxOnly] with SeccompDefault enabled should use unconfined when specified
E2eNode Suite [It] [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]
E2eNode Suite [It] [sig-node] Secrets should patch a secret [Conformance]
E2eNode Suite [It] [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]
E2eNode Suite [It] [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID
E2eNode Suite [It] [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]
E2eNode Suite [It] [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID
E2eNode Suite [It] [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]
E2eNode Suite [It] [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Security Context When creating a pod with HostUsers must create the user namespace if set to false [LinuxOnly] [Feature:UserNamespacesStatelessPodsSupport]
E2eNode Suite [It] [sig-node] Security Context When creating a pod with HostUsers must not create the user namespace if set to true [LinuxOnly] [Feature:UserNamespacesStatelessPodsSupport]
E2eNode Suite [It] [sig-node] Security Context When creating a pod with HostUsers should mount all volumes with proper permissions with hostUsers=false [LinuxOnly] [Feature:UserNamespacesStatelessPodsSupport]
E2eNode Suite [It] [sig-node] Security Context When creating a pod with HostUsers should set FSGroup to user inside the container with hostUsers=false [LinuxOnly] [Feature:UserNamespacesStatelessPodsSupport]
E2eNode Suite [It] [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
E2eNode Suite [It] [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
E2eNode Suite [It] [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Security Context [NodeConformance][LinuxOnly] Container PID namespace sharing containers in pods using isolated PID namespaces should all receive PID 1
E2eNode Suite [It] [sig-node] Security Context [NodeConformance][LinuxOnly] Container PID namespace sharing processes in containers sharing a pod namespace should be able to see each other
E2eNode Suite [It] [sig-node] Security Context when creating a pod in the host IPC namespace should not show the shared memory ID in the non-hostIPC containers [NodeFeature:HostAccess]
E2eNode Suite [It] [sig-node] Security Context when creating a pod in the host IPC namespace should show the shared memory ID in the host IPC containers [NodeFeature:HostAccess]
E2eNode Suite [It] [sig-node] Security Context when creating a pod in the host PID namespace should not show its pid in the non-hostpid containers [NodeFeature:HostAccess]
E2eNode Suite [It] [sig-node] Security Context when creating a pod in the host PID namespace should show its pid in the host PID namespace [NodeFeature:HostAccess]
E2eNode Suite [It] [sig-node] Security Context when creating a pod in the host network namespace should listen on same port in the host network containers [NodeFeature:HostAccess]
E2eNode Suite [It] [sig-node] Security Context when creating a pod in the host network namespace shouldn't show the same port in the non-hostnetwork containers [NodeFeature:HostAccess]
E2eNode Suite [It] [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
E2eNode Suite [It] [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]
E2eNode Suite [It] [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Summary API [NodeConformance] when querying /stats/summary should report resource usage through the stats api
E2eNode Suite [It] [sig-node] Sysctls [LinuxOnly] [NodeConformance] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
E2eNode Suite [It] [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]
E2eNode Suite [It] [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
E2eNode Suite [It] [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls with slashes as separator [MinimumKubeletVersion:1.23]
E2eNode Suite [It] [sig-node] SystemNodeCriticalPod [Slow] [Serial] [Disruptive] [NodeFeature:SystemNodeCriticalPod] when create a system-node-critical pod should not be evicted upon DiskPressure
E2eNode Suite [It] [sig-node] Topology Manager Metrics [Serial][NodeFeature:TopologyManager] when querying /metrics should not report any admission failures when the topology manager alignment is expected to succeed
E2eNode Suite [It] [sig-node] Topology Manager Metrics [Serial][NodeFeature:TopologyManager] when querying /metrics should report admission failures when the topology manager alignment is known to fail
E2eNode Suite [It] [sig-node] Topology Manager Metrics [Serial][NodeFeature:TopologyManager] when querying /metrics should report zero admission counters after a fresh restart
E2eNode Suite [It] [sig-node] Topology Manager [Serial] [NodeFeature:TopologyManager] With kubeconfig updated to static CPU Manager policy run the Topology Manager tests run Topology Manager node alignment test suite
E2eNode Suite [It] [sig-node] Topology Manager [Serial] [NodeFeature:TopologyManager] With kubeconfig updated to static CPU Manager policy run the Topology Manager tests run Topology Manager policy test suite
E2eNode Suite [It] [sig-node] Topology Manager [Serial] [NodeFeature:TopologyManager] With kubeconfig updated to static CPU Manager policy run the Topology Manager tests run the Topology Manager pod scope alignment test suite
E2eNode Suite [It] [sig-node] Unknown Pods [Serial] [Disruptive] when creating a API pod the api pod should be terminated and cleaned up due to becoming a unknown pod due to being force deleted while kubelet is not running
E2eNode Suite [It] [sig-node] Unknown Pods [Serial] [Disruptive] when creating a mirror pod the static pod should be terminated and cleaned up due to becoming a unknown pod due to being force deleted while kubelet is not running
E2eNode Suite [It] [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]
E2eNode Suite [It] [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
E2eNode Suite [It] [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
E2eNode Suite [It] [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]
E2eNode Suite [It] [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
E2eNode Suite [It] [sig-node] [Feature:StandaloneMode] when creating a static pod the pod should be running
E2eNode Suite [It] [sig-node] [NodeConformance] Containers Lifecycle should launch init container serially before a regular container
E2eNode Suite [It] [sig-node] [NodeConformance] Containers Lifecycle should not launch regular containers if an init container fails
E2eNode Suite [It] [sig-node] [NodeConformance] Containers Lifecycle should not launch second container before PostStart of the first container completed
E2eNode Suite [It] [sig-node] [NodeConformance] Containers Lifecycle should restart failing container when pod restartPolicy is Always
E2eNode Suite [It] [sig-node] [NodeConformance] Containers Lifecycle should run Init container to completion before call to PostStart of regular container
E2eNode Suite [It] [sig-storage] ConfigMap Should fail non-optional pod creation due to configMap object does not exist [Slow]
E2eNode Suite [It] [sig-storage] ConfigMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
E2eNode Suite [It] [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [It] [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [It] [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]
E2eNode Suite [It] [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Downward API [Serial] [Disruptive] [Feature:EphemeralStorage] Downward API tests for local ephemeral storage should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars
E2eNode Suite [It] [sig-storage] Downward API [Serial] [Disruptive] [Feature:EphemeralStorage] Downward API tests for local ephemeral storage should provide default limits.ephemeral-storage from node allocatable
E2eNode Suite [It] [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [It] [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [It] [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size
E2eNode Suite [It] [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
E2eNode Suite [It] [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)
E2eNode Suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root
E2eNode Suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root
E2eNode Suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup
E2eNode Suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup
E2eNode Suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup
E2eNode Suite [It] [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]
E2eNode Suite [It] [sig-storage] HostPath should support r/w [NodeConformance]
E2eNode Suite [It] [sig-storage] HostPath should support subPath [NodeConformance]
E2eNode Suite [It] [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Projected configMap Should fail non-optional pod creation due to configMap object does not exist [Slow]
E2eNode Suite [It] [sig-storage] Projected configMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
E2eNode Suite [It] [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [It] [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [It] [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [It] [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [It] [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [It] [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Projected secret Should fail non-optional pod creation due to secret object does not exist [Slow]
E2eNode Suite [It] [sig-storage] Projected secret Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
E2eNode Suite [It] [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
E2eNode Suite [It] [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Secrets Should fail non-optional pod creation due to secret object does not exist [Slow]
E2eNode Suite [It] [sig-storage] Secrets Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
E2eNode Suite [It] [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
E2eNode Suite [It] [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]
E2eNode Suite [It] [sig-storage] Volumes NFSv3 should be mountable for NFSv3
E2eNode Suite [It] [sig-storage] Volumes NFSv4 should be mountable for NFSv4