Result: FAILURE
Tests: 7 failed / 79 succeeded
Started: 2021-09-24 06:48
Elapsed: 3h6m
Revision:
Builder: 62567660-1d03-11ec-b447-c6820a71c367
infra-commit: 4699cdce8
job-version: v1.23.0-alpha.2.204+7bff8adaf683dc
kubetest-version:
repo: k8s.io/kubernetes
repo-commit: 7bff8adaf683dc7e25b5548e2c16e7393ff8a036
repos: {u'k8s.io/kubernetes': u'master'}
revision: v1.23.0-alpha.2.204+7bff8adaf683dc

Test Failures


E2eNode Suite [sig-node] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval 1m43s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-node\]\sDensity\s\[Serial\]\s\[Slow\]\screate\sa\sbatch\sof\spods\slatency\/resource\sshould\sbe\swithin\slimit\swhen\screate\s10\spods\swith\s0s\sinterval$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:104
Sep 24 08:38:59.624: CPU usage exceeding limits:
 node n1-standard-2-ubuntu-gke-2004-1-20-v20210923-3f3d3c3b:
 container "runtime": expected 95th% usage < 0.600; got 0.605
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:214
				
stdout/stderr from junit_ubuntu_01.xml

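For context on the check that failed here: the density test samples per-container CPU usage over the run and asserts a percentile against a fixed limit. Below is a minimal Go sketch of that kind of percentile check; the sample values and the percentile helper are illustrative, not the test's actual data or helpers, and only the 0.600 limit and the "runtime" container name come from the failure above.

package main

import (
	"fmt"
	"sort"
)

// percentile returns the p-th percentile (0-100) of samples using a simple
// nearest-rank rule; the e2e resource check may interpolate differently.
func percentile(samples []float64, p float64) float64 {
	sorted := append([]float64(nil), samples...)
	sort.Float64s(sorted)
	idx := int(float64(len(sorted)-1) * p / 100.0)
	return sorted[idx]
}

func main() {
	// Hypothetical CPU-usage samples (in cores) for the "runtime" container.
	samples := []float64{0.41, 0.52, 0.48, 0.61, 0.55, 0.59, 0.63, 0.50}
	limit := 0.600 // the 95th-percentile limit from the failure above
	if got := percentile(samples, 95); got >= limit {
		fmt.Printf("container %q: expected 95th%% usage < %.3f; got %.3f\n", "runtime", limit, got)
	}
}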


E2eNode Suite [sig-node] GarbageCollect [Serial][NodeFeature:GarbageCollect] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container 12m30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-node\]\sGarbageCollect\s\[Serial\]\[NodeFeature\:GarbageCollect\]\sGarbage\sCollection\sTest\:\sMany\sPods\swith\sMany\sRestarting\sContainers\sShould\seventually\sgarbage\scollect\scontainers\swhen\swe\sexceed\sthe\snumber\sof\sdead\scontainers\sper\scontainer$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:171
Timed out after 600.000s.
Expected
    <*errors.errorString | 0xc000b6e880>: {
        s: "pod gc-test-pod-many-containers-many-restarts-two had container with restartcount 3.  Should have been at least 2",
    }
to be nil
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:183
				
stdout/stderr from junit_ubuntu_01.xml

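Both garbage-collection failures (this ubuntu run and the cos-stable1 run below) have the same shape: Gomega's Eventually polled a condition for ten minutes and the condition kept returning a non-nil error, which produces the "Timed out after 600.000s. Expected <*errors.errorString> ... to be nil" output. A minimal sketch of that polling pattern, assuming a Ginkgo/Gomega test context; checkContainersGCed is a hypothetical stand-in for the test's real GC condition:

import (
	"errors"
	"time"

	. "github.com/onsi/gomega"
)

// Called from inside a Ginkgo It(...) block; a registered Gomega
// fail handler is assumed.
func waitForContainerGC() {
	checkContainersGCed := func() error {
		// Return nil once enough dead containers have been collected;
		// otherwise describe what was observed, as in the failure above.
		return errors.New("container restart count still too high")
	}
	// Polls every 10s for up to 10 minutes. On timeout, Gomega reports
	// "Timed out after 600.000s." followed by the last non-nil error.
	Eventually(checkContainersGCed, 10*time.Minute, 10*time.Second).Should(BeNil())
}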


E2eNode Suite [sig-node] GarbageCollect [Serial][NodeFeature:GarbageCollect] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container 12m38s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-node\]\sGarbageCollect\s\[Serial\]\[NodeFeature\:GarbageCollect\]\sGarbage\sCollection\sTest\:\sMany\sPods\swith\sMany\sRestarting\sContainers\sShould\seventually\sgarbage\scollect\scontainers\swhen\swe\sexceed\sthe\snumber\sof\sdead\scontainers\sper\scontainer$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:171
Timed out after 600.001s.
Expected
    <*errors.errorString | 0xc0014fc2d0>: {
        s: "pod gc-test-pod-many-containers-many-restarts-one had container with restartcount 4.  Should have been at least 3",
    }
to be nil
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:183
				
stdout/stderr from junit_cos-stable1_01.xml



E2eNode Suite [sig-node] Memory Manager [Serial] [Feature:MemoryManager] with none policy should not report any memory data during request to pod resources GetAllocatableResources 1m5s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-node\]\sMemory\sManager\s\[Serial\]\s\[Feature\:MemoryManager\]\swith\snone\spolicy\sshould\snot\sreport\sany\smemory\sdata\sduring\srequest\sto\spod\sresources\sGetAllocatableResources$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/memory_manager_test.go:327
Found more than one kubelet service running: "0 loaded units listed. Pass --all to see loaded but inactive units, too.\nTo show all installed unit files use 'systemctl list-unit-files'.\n"
Expected
    <int>: 0
not to equal
    <int>: 0
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/util.go:411
				
stdout/stderr from junit_cos-stable1_01.xml

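This failure happens before the memory-manager assertion proper: the suite looks up the kubelet's systemd unit so it can restart the kubelet, and the quoted systemctl output ("0 loaded units listed") shows the lookup matched nothing. A rough sketch of that kind of unit lookup; the command and filtering below are illustrative, not the suite's exact helper:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List service units whose names match kubelet*, similar in spirit
	// to what the e2e_node helper does before restarting the kubelet.
	out, err := exec.Command("systemctl", "list-units", "--type=service", "kubelet*").CombinedOutput()
	if err != nil {
		fmt.Println("systemctl failed:", err)
		return
	}
	var units []string
	for _, line := range strings.Split(string(out), "\n") {
		if strings.Contains(line, "kubelet") {
			units = append(units, strings.Fields(line)[0])
		}
	}
	// The test expects to find exactly one kubelet service; the
	// "0 loaded units listed" output above means this came back empty.
	if len(units) != 1 {
		fmt.Printf("expected exactly one kubelet service, found %d: %v\n", len(units), units)
	}
}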


E2eNode Suite [sig-node] Memory Manager [Serial] [Feature:MemoryManager] with static policy when multiple guaranteed pods started should succeed to start all pods 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-node\]\sMemory\sManager\s\[Serial\]\s\[Feature\:MemoryManager\]\swith\sstatic\spolicy\swhen\smultiple\sguaranteed\spods\sstarted\sshould\ssucceed\sto\sstart\sall\spods$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/memory_manager_test.go:327
Timed out after 30.000s.
Expected
    <*errors.errorString | 0xc00096e330>: {
        s: "expected hugepages 256, but found 234",
    }
to be nil
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/memory_manager_test.go:339
				
stdout/stderr from junit_cos-stable1_01.xml

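Here the static-policy test pre-allocates hugepages on the node and waits for the kernel to report the full count; the allocation stalled at 234 of 256. A minimal sketch of reading the allocated count from /proc/meminfo; the 256 target comes from the failure above, everything else is illustrative:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// hugePagesTotal reads the HugePages_Total field from /proc/meminfo.
func hugePagesTotal() (int, error) {
	f, err := os.Open("/proc/meminfo")
	if err != nil {
		return 0, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) == 2 && fields[0] == "HugePages_Total:" {
			return strconv.Atoi(fields[1])
		}
	}
	return 0, fmt.Errorf("HugePages_Total not found")
}

func main() {
	const want = 256 // the count the failing test waited for
	got, err := hugePagesTotal()
	if err != nil {
		fmt.Println(err)
		return
	}
	if got < want {
		fmt.Printf("expected hugepages %d, but found %d\n", want, got)
	}
}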


E2eNode Suite [sig-node] SystemNodeCriticalPod [Slow] [Serial] [Disruptive] [NodeFeature:SystemNodeCriticalPod] when create a system-node-critical pod should not be evicted upon DiskPressure 2m33s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-node\]\sSystemNodeCriticalPod\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\s\[NodeFeature\:SystemNodeCriticalPod\]\swhen\screate\sa\ssystem\-node\-critical\spod\s\sshould\snot\sbe\sevicted\supon\sDiskPressure$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/system_node_critical_test.go:82
Unexpected error:
    <*errors.errorString | 0xc0013d84d0>: {
        s: "there are currently no ready, schedulable nodes in the cluster",
    }
    there are currently no ready, schedulable nodes in the cluster
occurred
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/util.go:328
				
stdout/stderr from junit_ubuntu_01.xml

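This one is environmental rather than a product failure: the suite's setup check found no node that is both Ready and schedulable, so the eviction scenario never ran. A hedged client-go sketch of that kind of readiness check, assuming a reachable cluster via the default kubeconfig:

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	ready := 0
	for _, n := range nodes.Items {
		if n.Spec.Unschedulable {
			continue // skip cordoned nodes
		}
		for _, c := range n.Status.Conditions {
			if c.Type == v1.NodeReady && c.Status == v1.ConditionTrue {
				ready++
				break
			}
		}
	}
	if ready == 0 {
		fmt.Println("there are currently no ready, schedulable nodes in the cluster")
	}
}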


kubetest Node Tests 3h4m

error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-jkns-gke-ubuntu-1-6-flaky --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=1 --focus="\[Serial\]" --skip="\[Flaky\]|\[Benchmark\]|\[NodeSpecialFeature:.+\]|\[NodeSpecialFeature\]|\[NodeAlphaFeature:.+\]|\[NodeAlphaFeature\]|\[NodeFeature:Eviction\]" --test_args=--feature-gates=DynamicKubeletConfig=true,LocalStorageCapacityIsolation=true --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --container-runtime-process-name=/usr/bin/containerd --container-runtime-pid-file= --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service" --extra-log="{\"name\": \"containerd.log\", \"journalctl\": [\"-u\", \"containerd*\"]}" --test-timeout=4h0m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/containerd/image-config-serial.yaml: exit status 1
from junit_runner.xml



79 Passed Tests

632 Skipped Tests