Result: FAILURE
Tests: 9 failed / 87 succeeded
Started: 2020-02-11 12:14
Elapsed: 4h26m
Builder: gke-prow-default-pool-cf4891d4-5bng
Pod: f10f7c67-4cc7-11ea-8c6e-1aa579f21cc7
ResultStore: https://source.cloud.google.com/results/invocations/624f5970-47be-4223-88f4-bad90592182e/targets/test
infra-commit: 5af14def3
job-version: v1.18.0-alpha.2.606+38acec9bbc955a
repo: k8s.io/kubernetes
repo-commit: 38acec9bbc955a33c3366dc6082df90d18229b6f
repos: k8s.io/kubernetes @ master, github.com/containerd/cri @ master
revision: v1.18.0-alpha.2.606+38acec9bbc955a

Test Failures


E2eNode Suite [k8s.io] CriticalPod [Serial] [Disruptive] [NodeFeature:CriticalPod] when we need to admit a critical pod should be able to create and delete a critical pod (16s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sCriticalPod\s\[Serial\]\s\[Disruptive\]\s\[NodeFeature\:CriticalPod\]\swhen\swe\sneed\sto\sadmit\sa\scritical\spod\sshould\sbe\sable\sto\screate\sand\sdelete\sa\scritical\spod$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/critical_pod_test.go:53
Unexpected error:
    <*errors.StatusError | 0xc000958b40>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "pods \"static-critical-pod\" not found",
            Reason: "NotFound",
            Details: {
                Name: "static-critical-pod",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 404,
        },
    }
    pods "static-critical-pod" not found
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:91
				
from junit_cos-stable_01.xml
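The --ginkgo.focus value in each repro command above is just the failing test's full name with regex metacharacters backslash-escaped and spaces replaced by \s, anchored with $. A minimal Python sketch of that transformation (an illustration, not the job's actual tooling; exact escaping of characters such as ':' varies between Python versions and the test-infra code):

```python
import re

def ginkgo_focus(test_name: str) -> str:
    """Turn a full Ginkgo test name into a --ginkgo.focus regex:
    escape regex metacharacters, replace spaces with \\s, anchor with $."""
    escaped = re.escape(test_name)
    # Older Pythons escape spaces as "\ "; normalize both forms to \s.
    escaped = escaped.replace("\\ ", " ").replace(" ", "\\s")
    return escaped + "$"

name = ("E2eNode Suite [k8s.io] CriticalPod [Serial] [Disruptive] "
        "[NodeFeature:CriticalPod] when we need to admit a critical pod "
        "should be able to create and delete a critical pod")
pattern = ginkgo_focus(name)
# The generated pattern matches the original test name.
assert re.search(pattern, name)
```

Because \s matches the literal spaces and every bracket is escaped, the pattern selects exactly this one spec when passed to --ginkgo.focus.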


E2eNode Suite [k8s.io] CriticalPod [Serial] [Disruptive] [NodeFeature:CriticalPod] when we need to admit a critical pod should be able to create and delete a critical pod (22s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sCriticalPod\s\[Serial\]\s\[Disruptive\]\s\[NodeFeature\:CriticalPod\]\swhen\swe\sneed\sto\sadmit\sa\scritical\spod\sshould\sbe\sable\sto\screate\sand\sdelete\sa\scritical\spod$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/critical_pod_test.go:53
Unexpected error:
    <*errors.StatusError | 0xc001184e60>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "pods \"static-critical-pod\" not found",
            Reason: "NotFound",
            Details: {
                Name: "static-critical-pod",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 404,
        },
    }
    pods "static-critical-pod" not found
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:91
				
from junit_ubuntu_01.xml


E2eNode Suite [k8s.io] LocalStorageEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods (14m53s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sLocalStorageEviction\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sDiskPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:470
Timed out after 600.000s.
Expected
    <*errors.errorString | 0xc0017efa80>: {
        s: "pods that should be evicted are still running",
    }
to be nil
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:492
				
from junit_cos-stable_01.xml


E2eNode Suite [k8s.io] LocalStorageSoftEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods (14m24s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sLocalStorageSoftEviction\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sDiskPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:470
Timed out after 600.000s.
Expected
    <*errors.errorString | 0xc001385dd0>: {
        s: "pods that should be evicted are still running",
    }
to be nil
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:492
				
from junit_cos-stable_01.xml


E2eNode Suite [k8s.io] PriorityLocalStorageEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods (14m32s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sPriorityLocalStorageEvictionOrdering\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sDiskPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:470
Timed out after 600.000s.
Expected
    <*errors.errorString | 0xc0011612d0>: {
        s: "pods that should be evicted are still running",
    }
to be nil
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:492
				
from junit_cos-stable_01.xml


E2eNode Suite [k8s.io] SystemNodeCriticalPod [Slow] [Serial] [Disruptive] [NodeFeature:SystemNodeCriticalPod] when create a system-node-critical pod should not be evicted upon DiskPressure (37s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sSystemNodeCriticalPod\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\s\[NodeFeature\:SystemNodeCriticalPod\]\swhen\screate\sa\ssystem\-node\-critical\spod\s\sshould\snot\sbe\sevicted\supon\sDiskPressure$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/system_node_critical_test.go:74
Unexpected error:
    <*errors.errorString | 0xc000e63940>: {
        s: "there are currently no ready, schedulable nodes in the cluster",
    }
    there are currently no ready, schedulable nodes in the cluster
occurred
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/util.go:366
				
from junit_ubuntu_01.xml


E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update ConfigMap in-place: recover to last-known-good version: status and events should match expectations (13m19s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\s\[Feature\:DynamicKubeletConfig\]\[NodeFeature\:DynamicKubeletConfig\]\[Serial\]\[Disruptive\]\s\supdate\sConfigMap\sin\-place\:\srecover\sto\slast\-known\-good\sversion\:\sstatus\sand\sevents\sshould\smatch\sexpectations$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:625
Timed out after 60.000s.
Expected
    <*errors.errorString | 0xc0007e1830>: {
        s: "checkConfigStatus: case intended last-known-good: expected LastKnownGood (*v1.NodeConfigSource)&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:dynamic-kubelet-config-test-in-place-lkg-bsdlq,UID:5101761c-30d1-4309-a156-a230a827fbd8,ResourceVersion:9667,KubeletConfigKey:kubelet,},} but got (*v1.NodeConfigSource)&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:dynamic-kubelet-config-test-in-place-lkg-bsdlq,UID:5101761c-30d1-4309-a156-a230a827fbd8,ResourceVersion:9707,KubeletConfigKey:kubelet,},}",
    }
to be nil
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:985
				
from junit_ubuntu_01.xml


E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update ConfigMap in-place: recover to last-known-good version: status and events should match expectations (13m37s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\s\[Feature\:DynamicKubeletConfig\]\[NodeFeature\:DynamicKubeletConfig\]\[Serial\]\[Disruptive\]\s\supdate\sConfigMap\sin\-place\:\srecover\sto\slast\-known\-good\sversion\:\sstatus\sand\sevents\sshould\smatch\sexpectations$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:625
Timed out after 60.000s.
Expected
    <*errors.errorString | 0xc000878580>: {
        s: "checkConfigStatus: case intended last-known-good: expected LastKnownGood (*v1.NodeConfigSource)&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:dynamic-kubelet-config-test-in-place-lkg-lqfnl,UID:8d6a4506-add7-4585-8292-25e48ed8f1d2,ResourceVersion:4396,KubeletConfigKey:kubelet,},} but got (*v1.NodeConfigSource)&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:dynamic-kubelet-config-test-in-place-lkg-lqfnl,UID:8d6a4506-add7-4585-8292-25e48ed8f1d2,ResourceVersion:4414,KubeletConfigKey:kubelet,},}",
    }
to be nil
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:985
				
from junit_cos-stable_01.xml
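In both DynamicKubeletConfig failures the expected and observed LastKnownGood sources are identical except for ResourceVersion (9667 vs 9707 in one run, 4396 vs 4414 in the other). A throwaway sketch that confirms this by diffing the rendered ConfigMapNodeConfigSource fields from the failure message (a rough string parser for eyeballing these logs, not a real API client):

```python
def config_source_fields(rendered: str) -> dict:
    """Parse the 'Key:value,' pairs inside a rendered
    ConfigMapNodeConfigSource struct taken from a failure message."""
    start = rendered.index("{", rendered.index("ConfigMapNodeConfigSource")) + 1
    inner = rendered[start:rendered.index("}", start)]
    pairs = [p for p in inner.split(",") if ":" in p]
    return dict(p.split(":", 1) for p in pairs)

# Expected LastKnownGood from the first failure above; the observed
# value differs only in the ResourceVersion digits.
expected = ("&ConfigMapNodeConfigSource{Namespace:kube-system,"
            "Name:dynamic-kubelet-config-test-in-place-lkg-bsdlq,"
            "UID:5101761c-30d1-4309-a156-a230a827fbd8,"
            "ResourceVersion:9667,KubeletConfigKey:kubelet,}")
got = expected.replace("9667", "9707")
a, b = config_source_fields(expected), config_source_fields(got)
diff = {k for k in a if a[k] != b[k]}
# Only ResourceVersion differs between expected and observed.
```

This points at the ConfigMap having been re-written (its ResourceVersion bumped) between when the test captured the expected source and when the kubelet reported status, rather than at a wrong ConfigMap being used.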


Node Tests (4h23m)

error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=cri-containerd-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=1 --focus="\[Serial\]" --skip="\[Flaky\]|\[Benchmark\]|\[NodeSpecialFeature:.+\]|\[NodeAlphaFeature:.+\]" --test_args=--feature-gates=DynamicKubeletConfig=true,LocalStorageCapacityIsolation=true --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --container-runtime-process-name=/home/containerd/usr/local/bin/containerd --container-runtime-pid-file= --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service" --extra-log="{\"name\": \"containerd.log\", \"journalctl\": [\"-u\", \"containerd\"]}" --test-timeout=5h0m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/containerd/cri-master/image-config.yaml: exit status 1
from junit_runner.xml
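Each failure above is recorded in a JUnit XML artifact (junit_cos-stable_01.xml, junit_ubuntu_01.xml, junit_runner.xml). Assuming those files follow the common JUnit schema, the failed cases can be listed with a short sketch like this (illustrative; the sample XML below is made up to mirror the first failure):

```python
import xml.etree.ElementTree as ET

def failed_testcases(junit_xml: str):
    """Return (test name, failure message) pairs for every <testcase>
    that carries a <failure> child, per the common JUnit XML schema."""
    root = ET.fromstring(junit_xml)
    out = []
    for case in root.iter("testcase"):
        failure = case.find("failure")
        if failure is not None:
            out.append((case.get("name"), failure.get("message")))
    return out

sample = """<testsuite tests="2" failures="1">
  <testcase name="CriticalPod create/delete" time="16">
    <failure message="pods &quot;static-critical-pod&quot; not found"/>
  </testcase>
  <testcase name="SomePassingTest" time="2"/>
</testsuite>"""
# -> [('CriticalPod create/delete', 'pods "static-critical-pod" not found')]
print(failed_testcases(sample))
```

Running this over the real artifacts would reproduce the "9 failed / 87 succeeded" tally without clicking through each junit file.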


87 tests passed; 548 tests were skipped.