Result: FAILURE
Tests: 5 failed / 63 succeeded
Started: 2023-03-22 13:25
Elapsed: 1h42m
Builder: 0c8c10b5-c8b5-11ed-abe9-f26dd6892cca
infra-commit: 3c1dc7176
job-version: v1.27.0-beta.0.64+3cf9f66e90d560
kubetest-version: v20230321-850d5bc856
repo: k8s.io/kubernetes
repo-commit: 3cf9f66e90d560ac080687610933c712bcf37b39
repos: {'k8s.io/kubernetes': 'master'}
revision: v1.27.0-beta.0.64+3cf9f66e90d560

Test Failures


E2eNode Suite [It] [sig-node] Device Plugin [Feature:DevicePluginProbe][NodeFeature:DevicePluginProbe][Serial] DevicePlugin [Serial] [Disruptive] Can schedule a pod that requires a device 5m0s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[It\]\s\[sig\-node\]\sDevice\sPlugin\s\[Feature\:DevicePluginProbe\]\[NodeFeature\:DevicePluginProbe\]\[Serial\]\sDevicePlugin\s\[Serial\]\s\[Disruptive\]\sCan\sschedule\sa\spod\sthat\srequires\sa\sdevice$'
[FAILED] Timed out after 300.000s.
Expected success, but got an error:
    <*errors.errorString | 0xc0013f7240>: 
    expected v1alpha pod resources to be empty, but got non-empty resources: [&PodResources{Name:test-admit-pod,Namespace:localstorage-quota-monitoring-test-1669,Containers:[]*ContainerResources{&ContainerResources{Name:test-admit-pod,Devices:[]*ContainerDevices{},},},}]
    {
        s: "expected v1alpha pod resources to be empty, but got non-empty resources: [&PodResources{Name:test-admit-pod,Namespace:localstorage-quota-monitoring-test-1669,Containers:[]*ContainerResources{&ContainerResources{Name:test-admit-pod,Devices:[]*ContainerDevices{},},},}]",
    }
In [BeforeEach] at: test/e2e_node/device_plugin_test.go:110 @ 03/22/23 15:03:03.369

There were additional failures detected after the initial failure; these are visible in the timeline.

Full stdout/stderr in junit_fedora01.xml.

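All three Device Plugin failures in this run (this one and the two below) time out in the same BeforeEach precondition at test/e2e_node/device_plugin_test.go:110: the kubelet's v1alpha1 PodResources API still reports a test-admit-pod in the localstorage-quota-monitoring-test-1669 namespace, which suggests state left over from an earlier test on the node. The following is a minimal sketch of that precondition's shape, assuming the v1alpha1 podresources client from k8s.io/kubelet; the function name and wiring are illustrative, not the suite's actual code.

package sketch

import (
	"context"
	"fmt"

	podresourcesv1alpha1 "k8s.io/kubelet/pkg/apis/podresources/v1alpha1"
)

// expectEmptyPodResources (hypothetical helper) queries the kubelet's
// v1alpha1 PodResources endpoint and returns an error shaped like the one
// in the failure above if any pod is still visible through that API.
func expectEmptyPodResources(ctx context.Context, client podresourcesv1alpha1.PodResourcesListerClient) error {
	resp, err := client.List(ctx, &podresourcesv1alpha1.ListPodResourcesRequest{})
	if err != nil {
		return fmt.Errorf("listing v1alpha pod resources: %w", err)
	}
	if len(resp.GetPodResources()) > 0 {
		return fmt.Errorf("expected v1alpha pod resources to be empty, but got non-empty resources: %v", resp.GetPodResources())
	}
	return nil
}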


E2eNode Suite [It] [sig-node] Device Plugin [Feature:DevicePluginProbe][NodeFeature:DevicePluginProbe][Serial] DevicePlugin [Serial] [Disruptive] Keeps device plugin assignments across pod and kubelet restarts 5m0s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[It\]\s\[sig\-node\]\sDevice\sPlugin\s\[Feature\:DevicePluginProbe\]\[NodeFeature\:DevicePluginProbe\]\[Serial\]\sDevicePlugin\s\[Serial\]\s\[Disruptive\]\sKeeps\sdevice\splugin\sassignments\sacross\spod\sand\skubelet\srestarts$'
[FAILED] Timed out after 300.000s.
Expected success, but got an error:
    <*errors.errorString | 0xc000be0f50>: 
    expected v1alpha pod resources to be empty, but got non-empty resources: [&PodResources{Name:test-admit-pod,Namespace:localstorage-quota-monitoring-test-1669,Containers:[]*ContainerResources{&ContainerResources{Name:test-admit-pod,Devices:[]*ContainerDevices{},},},}]
    {
        s: "expected v1alpha pod resources to be empty, but got non-empty resources: [&PodResources{Name:test-admit-pod,Namespace:localstorage-quota-monitoring-test-1669,Containers:[]*ContainerResources{&ContainerResources{Name:test-admit-pod,Devices:[]*ContainerDevices{},},},}]",
    }
In [BeforeEach] at: test/e2e_node/device_plugin_test.go:110 @ 03/22/23 13:58:26.7

There were additional failures detected after the initial failure; these are visible in the timeline.

Full stdout/stderr in junit_fedora01.xml.



E2eNode Suite [It] [sig-node] Device Plugin [Feature:DevicePluginProbe][NodeFeature:DevicePluginProbe][Serial] DevicePlugin [Serial] [Disruptive] Keeps device plugin assignments after the device plugin has been re-registered 5m0s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[It\]\s\[sig\-node\]\sDevice\sPlugin\s\[Feature\:DevicePluginProbe\]\[NodeFeature\:DevicePluginProbe\]\[Serial\]\sDevicePlugin\s\[Serial\]\s\[Disruptive\]\sKeeps\sdevice\splugin\sassignments\safter\sthe\sdevice\splugin\shas\sbeen\sre\-registered$'
[FAILED] Timed out after 300.001s.
Expected success, but got an error:
    <*errors.errorString | 0xc000c6dda0>: 
    expected v1alpha pod resources to be empty, but got non-empty resources: [&PodResources{Name:test-admit-pod,Namespace:localstorage-quota-monitoring-test-1669,Containers:[]*ContainerResources{&ContainerResources{Name:test-admit-pod,Devices:[]*ContainerDevices{},},},}]
    {
        s: "expected v1alpha pod resources to be empty, but got non-empty resources: [&PodResources{Name:test-admit-pod,Namespace:localstorage-quota-monitoring-test-1669,Containers:[]*ContainerResources{&ContainerResources{Name:test-admit-pod,Devices:[]*ContainerDevices{},},},}]",
    }
In [BeforeEach] at: test/e2e_node/device_plugin_test.go:110 @ 03/22/23 14:03:26.748

There were additional failures detected after the initial failure; these are visible in the timeline.

Full stdout/stderr in junit_fedora01.xml.



E2eNode Suite [It] [sig-node] MirrorPodWithGracePeriod when create a mirror pod and the container runtime is temporarily down during pod termination [NodeConformance] [Serial] [Disruptive] the mirror pod should terminate successfully 33s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[It\]\s\[sig\-node\]\sMirrorPodWithGracePeriod\swhen\screate\sa\smirror\spod\s\sand\sthe\scontainer\sruntime\sis\stemporarily\sdown\sduring\spod\stermination\s\[NodeConformance\]\s\[Serial\]\s\[Disruptive\]\sthe\smirror\spod\sshould\sterminate\ssuccessfully$'
[FAILED] Timed out after 5.000s.
Expected
    <string>: KubeletMetrics
to match keys: {
."kubelet_desired_pods"[kubelet_desired_pods{static=""}]:
	Expected
	    <string>: Sample
	to match fields: {
	.Value:
		Expected
		    <model.SampleValue>: 1
		to be ==
		    <int>: 0
	}
	
}
In [It] at: test/e2e_node/mirror_pod_grace_period_test.go:139 @ 03/22/23 14:05:50.357

Full stdout/stderr in junit_fedora01.xml.

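The nested "to match keys"/"to match fields" output above is Gomega's gstruct matcher format: the scraped KubeletMetrics (a map of metric name to Prometheus samples) is matched by key, and the sample's Value field is then compared numerically. Here kubelet_desired_pods{static=""} reported 1 where the test expected 0, i.e. the mirror pod was still in the kubelet's desired set. A hedged sketch of an assertion in this style follows; the helper name and exact matcher composition are assumptions, not the test's source.

package sketch

import (
	"github.com/onsi/gomega"
	"github.com/onsi/gomega/gstruct"
	"github.com/prometheus/common/model"
)

// assertNoDesiredStaticPods (hypothetical helper): metrics would come from
// scraping the kubelet /metrics endpoint into Prometheus model samples.
func assertNoDesiredStaticPods(g gomega.Gomega, metrics map[string]model.Samples) {
	g.Expect(metrics).To(gstruct.MatchKeys(gstruct.IgnoreExtras, gstruct.Keys{
		"kubelet_desired_pods": gomega.ContainElement(gstruct.PointTo(gstruct.MatchFields(gstruct.IgnoreExtras, gstruct.Fields{
			// This run observed 1: the mirror pod was still counted as
			// desired when the check fired.
			"Value": gomega.BeNumerically("==", 0),
		}))),
	}))
}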


kubetest Node Tests 1h41m

error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup -vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-infra-e2e-boskos-059 --zone=us-west1-b --ssh-user=core --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=1 --timeout=4h --focus="\[Serial\]" --skip="\[Flaky\]|\[Benchmark\]|\[NodeSpecialFeature:.+\]|\[NodeSpecialFeature\]|\[NodeAlphaFeature:.+\]|\[NodeAlphaFeature\]|\[NodeFeature:Eviction\]" --test_args=--container-runtime-endpoint=unix:///var/run/crio/crio.sock --container-runtime-process-name=/usr/local/bin/crio --container-runtime-pid-file= --kubelet-flags="--cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service" --extra-log="{\"name\": \"crio.log\", \"journalctl\": [\"-u\", \"crio\"]}" --test-timeout=4h0m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/crio/latest/image-config-cgrpv1-serial.yaml: exit status 1
From junit_runner.xml.



Passed tests: 63

Skipped tests: 350