Result: FAILURE
Tests: 4 failed / 64 succeeded
Started: 2023-03-18 17:01
Elapsed: 1h38m
Revision
Builder: 912a8d0d-c5ae-11ed-93d6-92b4ce3fddda
infra-commit: ade17619a
job-version: v1.27.0-beta.0.22+fe91bc257b505e
kubetest-version: v20230222-b5208facd4
repo: k8s.io/kubernetes
repo-commit: fe91bc257b505eb6057eb50b9c550a7c63e9fb91
repos: {'k8s.io/kubernetes': 'master'}
revision: v1.27.0-beta.0.22+fe91bc257b505e

Test Failures


E2eNode Suite [It] [sig-node] Device Plugin [Feature:DevicePluginProbe][NodeFeature:DevicePluginProbe][Serial] DevicePlugin [Serial] [Disruptive] Keeps device plugin assignments across pod and kubelet restarts (5m0s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[It\]\s\[sig\-node\]\sDevice\sPlugin\s\[Feature\:DevicePluginProbe\]\[NodeFeature\:DevicePluginProbe\]\[Serial\]\sDevicePlugin\s\[Serial\]\s\[Disruptive\]\sKeeps\sdevice\splugin\sassignments\sacross\spod\sand\skubelet\srestarts$'
[FAILED] Timed out after 300.000s.
Expected success, but got an error:
    <*errors.errorString | 0xc0018573d0>: 
    expected v1alpha pod resources to be empty, but got non-empty resources: [&PodResources{Name:guaranteedef4da252-b7b0-4977-980b-1d0e5ce963f1,Namespace:kubelet-container-manager-7731,Containers:[]*ContainerResources{&ContainerResources{Name:guaranteedef4da252-b7b0-4977-980b-1d0e5ce963f1,Devices:[]*ContainerDevices{},},},}]
    {
        s: "expected v1alpha pod resources to be empty, but got non-empty resources: [&PodResources{Name:guaranteedef4da252-b7b0-4977-980b-1d0e5ce963f1,Namespace:kubelet-container-manager-7731,Containers:[]*ContainerResources{&ContainerResources{Name:guaranteedef4da252-b7b0-4977-980b-1d0e5ce963f1,Devices:[]*ContainerDevices{},},},}]",
    }
In [BeforeEach] at: test/e2e_node/device_plugin_test.go:110 @ 03/18/23 18:10:10.836
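Both device plugin failures come from the same precondition in test/e2e_node/device_plugin_test.go:110: before each test, the suite queries the kubelet's v1alpha PodResources endpoint and requires it to report no pods, and here a pod left over from an earlier test (guaranteedef4da252-… in namespace kubelet-container-manager-7731) kept the list non-empty for the full 300s timeout. The actual check is Go against the gRPC PodResources API; the following is only a minimal Python sketch of the emptiness precondition, with plain dicts as hypothetical stand-ins for the PodResources messages:

```python
# Sketch of the precondition: the v1alpha PodResources list must be empty.
# The dicts below are hypothetical stand-ins for the gRPC PodResources messages.

def check_pod_resources_empty(pod_resources):
    """Return None when the list is empty, else an error string
    shaped like the one in the failure above."""
    if pod_resources:
        return ("expected v1alpha pod resources to be empty, "
                "but got non-empty resources: %r" % pod_resources)
    return None

# A stale pod left over from an earlier test trips the check:
leftover = [{
    "name": "guaranteedef4da252-b7b0-4977-980b-1d0e5ce963f1",
    "namespace": "kubelet-container-manager-7731",
    "containers": [{
        "name": "guaranteedef4da252-b7b0-4977-980b-1d0e5ce963f1",
        "devices": [],
    }],
}]

assert check_pod_resources_empty([]) is None
assert check_pod_resources_empty(leftover) is not None
```

Because the check runs in [BeforeEach], the failure points at incomplete cleanup by a previous test rather than at the device plugin logic under test itself.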

				
(stdout/stderr captured in junit_fedora01.xml)



E2eNode Suite [It] [sig-node] Device Plugin [Feature:DevicePluginProbe][NodeFeature:DevicePluginProbe][Serial] DevicePlugin [Serial] [Disruptive] Keeps device plugin assignments after the device plugin has been re-registered (5m0s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[It\]\s\[sig\-node\]\sDevice\sPlugin\s\[Feature\:DevicePluginProbe\]\[NodeFeature\:DevicePluginProbe\]\[Serial\]\sDevicePlugin\s\[Serial\]\s\[Disruptive\]\sKeeps\sdevice\splugin\sassignments\safter\sthe\sdevice\splugin\shas\sbeen\sre\-registered$'
[FAILED] Timed out after 300.001s.
Expected success, but got an error:
    <*errors.errorString | 0xc00201a390>: 
    expected v1alpha pod resources to be empty, but got non-empty resources: [&PodResources{Name:guaranteedef4da252-b7b0-4977-980b-1d0e5ce963f1,Namespace:kubelet-container-manager-7731,Containers:[]*ContainerResources{&ContainerResources{Name:guaranteedef4da252-b7b0-4977-980b-1d0e5ce963f1,Devices:[]*ContainerDevices{},},},}]
    {
        s: "expected v1alpha pod resources to be empty, but got non-empty resources: [&PodResources{Name:guaranteedef4da252-b7b0-4977-980b-1d0e5ce963f1,Namespace:kubelet-container-manager-7731,Containers:[]*ContainerResources{&ContainerResources{Name:guaranteedef4da252-b7b0-4977-980b-1d0e5ce963f1,Devices:[]*ContainerDevices{},},},}]",
    }
In [BeforeEach] at: test/e2e_node/device_plugin_test.go:110 @ 03/18/23 18:02:44.456

				
(stdout/stderr captured in junit_fedora01.xml)



E2eNode Suite [It] [sig-node] MirrorPodWithGracePeriod when create a mirror pod and the container runtime is temporarily down during pod termination [NodeConformance] [Serial] [Disruptive] the mirror pod should terminate successfully (33s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[It\]\s\[sig\-node\]\sMirrorPodWithGracePeriod\swhen\screate\sa\smirror\spod\s\sand\sthe\scontainer\sruntime\sis\stemporarily\sdown\sduring\spod\stermination\s\[NodeConformance\]\s\[Serial\]\s\[Disruptive\]\sthe\smirror\spod\sshould\sterminate\ssuccessfully$'
[FAILED] Timed out after 5.000s.
Expected
    <string>: KubeletMetrics
to match keys: {
."kubelet_desired_pods"[kubelet_desired_pods{static=""}]:
	Expected
	    <string>: Sample
	to match fields: {
	.Value:
		Expected
		    <model.SampleValue>: 1
		to be ==
		    <int>: 0
	}
	
}
In [It] at: test/e2e_node/mirror_pod_grace_period_test.go:139 @ 03/18/23 18:14:30.229
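This failure is a metrics assertion: the test expects the kubelet gauge kubelet_desired_pods{static=""} to drop to 0 within the 5s timeout once the mirror pod terminates, but it stayed at 1. The real matcher is Go/gomega over scraped kubelet metrics; as an illustrative sketch only, the same sample lookup can be written against the standard Prometheus text exposition format (the sample scrape below is constructed to mirror the failure, not taken from the job logs):

```python
import re

def desired_pods(metrics_text, static=""):
    """Extract kubelet_desired_pods{static="..."} from a Prometheus
    text-format scrape; returns None if the sample is absent."""
    pattern = (r'kubelet_desired_pods\{static="%s"\}\s+([0-9.eE+-]+)'
               % re.escape(static))
    m = re.search(pattern, metrics_text)
    return float(m.group(1)) if m else None

# Constructed scrape fragment mirroring the failure: the non-static
# desired-pod count is stuck at 1 where the test expects 0.
sample = '''
# TYPE kubelet_desired_pods gauge
kubelet_desired_pods{static=""} 1
kubelet_desired_pods{static="true"} 0
'''

assert desired_pods(sample, "") == 1.0      # the test wanted 0 here
assert desired_pods(sample, "true") == 0.0
```

A stuck value of 1 here means the kubelet still considered the mirror pod desired after termination should have completed, consistent with the runtime outage the test injects.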

				
(stdout/stderr captured in junit_fedora01.xml)



kubetest Node Tests (1h37m)

error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup -vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-infra-e2e-boskos-015 --zone=us-west1-b --ssh-user=core --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=1 --timeout=4h --focus="\[Serial\]" --skip="\[Flaky\]|\[Benchmark\]|\[NodeSpecialFeature:.+\]|\[NodeSpecialFeature\]|\[NodeAlphaFeature:.+\]|\[NodeAlphaFeature\]|\[NodeFeature:Eviction\]" --test_args=--container-runtime-endpoint=unix:///var/run/crio/crio.sock --container-runtime-process-name=/usr/local/bin/crio --container-runtime-pid-file= --kubelet-flags="--cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service" --extra-log="{\"name\": \"crio.log\", \"journalctl\": [\"-u\", \"crio\"]}" --test-timeout=4h0m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/crio/latest/image-config-cgrpv1-serial.yaml: exit status 1
(from junit_runner.xml)



Passed tests: 64 (not shown)

Skipped tests: 350 (not shown)