Result: FAILURE
Tests: 5 failed / 264 succeeded
Started: 2019-11-21 07:54
Elapsed: 51m48s
Revision: v1.18.0-alpha.0.1110+3c5dad61f72cc0
Builder: gke-prow-ssd-pool-1a225945-9tvq
Pod: fbca56be-0c33-11ea-b26a-065b5133c63f
Resultstore: https://source.cloud.google.com/results/invocations/c192cd86-4581-4cda-879d-287aacb1480c/targets/test
infra-commit: 8bd3c2881
job-version: v1.18.0-alpha.0.1110+3c5dad61f72cc0
master_os_image: cos-77-12371-89-0
node_os_image: cos-77-12371-89-0

Test Failures


Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node 2m49s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\sgcepd\]\s\[Testpattern\:\sDynamic\sPV\s\(block\svolmode\)\]\smultiVolume\s\[Slow\]\sshould\saccess\sto\stwo\svolumes\swith\sdifferent\svolume\smode\sand\sretain\sdata\sacross\spod\srecreation\son\sthe\ssame\snode$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:202
Test Panicked
/usr/local/go/src/runtime/panic.go:199

Panic: runtime error: invalid memory address or nil pointer dereference

Full stack:
k8s.io/kubernetes/vendor/k8s.io/utils/exec.CodeExitError.Error(...)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/utils/exec/exec.go:237
k8s.io/kubernetes/vendor/github.com/onsi/gomega/matchers.(*HaveOccurredMatcher).NegatedFailureMessage(0x7d341a8, 0x4638b60, 0xc001c10820, 0x4a463f1, 0x1)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/matchers/have_occurred_matcher.go:34 +0xa8
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).match(0xc001f410c0, 0x53142e0, 0x7d341a8, 0x0, 0xc001609020, 0x3, 0x3, 0xc001f410c0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:73 +0x23b
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).NotTo(0xc001f410c0, 0x53142e0, 0x7d341a8, 0xc001609020, 0x3, 0x3, 0x3ee4c00)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:48 +0xc7
k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, 0x52ac580, 0xc001c10820, 0xc001609020, 0x3, 0x3)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/expect.go:46 +0xf5
k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/expect.go:40
k8s.io/kubernetes/test/e2e/storage/utils.VerifyExecInPodSucceed(0xc0015c68c0, 0xc00171b800, 0xc00096f200, 0x78)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/utils.go:82 +0x40e
k8s.io/kubernetes/test/e2e/storage/utils.CheckReadFromPath(0xc0015c68c0, 0xc00171b800, 0xc0032a0f5a, 0x5, 0xc001a12f90, 0xc, 0x40, 0x15d91ea373360390)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/utils.go:667 +0x305
k8s.io/kubernetes/test/e2e/storage/testsuites.testAccessMultipleVolumes(0xc0015c68c0, 0x5423ae0, 0xc0020e2f20, 0xc001c0f270, 0x10, 0x0, 0x0, 0x0, 0xc00165d1c0, 0xc003444be0, ...)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:372 +0x82b
k8s.io/kubernetes/test/e2e/storage/testsuites.TestAccessMultipleVolumesAcrossPodRecreation(0xc0015c68c0, 0x5423ae0, 0xc0020e2f20, 0xc001c0f270, 0x10, 0x0, 0x0, 0x0, 0xc00165d1c0, 0xc003444be0, ...)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:411 +0x4e4
k8s.io/kubernetes/test/e2e/storage/testsuites.(*multiVolumeTestSuite).defineTests.func6()
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:232 +0x484
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0028f6100)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:110 +0x30a
k8s.io/kubernetes/test/e2e.TestE2E(0xc0028f6100)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:112 +0x2b
testing.tRunner(0xc0028f6100, 0x4c2fc20)
	/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:960 +0x350
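
The panic above fires inside `exec.CodeExitError.Error(...)` while gomega is formatting a failure message, which points at an error value whose wrapped inner error is nil. As a minimal sketch (the type below is a hypothetical stand-in for `k8s.io/utils/exec.CodeExitError`, not the vendored code), calling `Error()` when the inner error was never set reproduces the same nil-pointer runtime panic:

```go
package main

import "fmt"

// codeExitError is an illustrative stand-in for a wrapper error type:
// it embeds an inner error and forwards Error() to it.
type codeExitError struct {
	err  error
	code int
}

func (e codeExitError) Error() string { return e.err.Error() }

// callError invokes Error() and converts a panic into a string so the
// failure mode can be inspected instead of crashing the process.
func callError(e error) (msg string) {
	defer func() {
		if r := recover(); r != nil {
			msg = fmt.Sprintf("panic: %v", r)
		}
	}()
	return e.Error()
}

func main() {
	// The inner error is nil, so Error() calls a method on a nil
	// interface value and panics with the same
	// "invalid memory address or nil pointer dereference" runtime error
	// seen in the stack trace above.
	fmt.Println(callError(codeExitError{err: nil, code: 1}))
}
```

This suggests the test's underlying exec failure was wrapped into an exit-code error without a usable inner error, and the crash happened while reporting it, masking the original failure.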
				
Stdout/stderr: junit_06.xml



Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly] 2m3s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\slocal\]\[LocalVolumeType\:\sblock\]\s\[Testpattern\:\sPre\-provisioned\sPV\s\(default\sfs\)\]\ssubPath\sshould\sfail\sif\ssubpath\sfile\sis\soutside\sthe\svolume\s\[Slow\]\[LinuxOnly\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:251
Nov 21 08:02:20.754: Unexpected error:
    <*errors.errorString | 0xc003311f20>: {
        s: "unable to upgrade connection: container not found (\"agnhost\")",
    }
    unable to upgrade connection: container not found ("agnhost")
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:140
				
Stdout/stderr: junit_18.xml



Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly] 5m39s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\slocal\]\[LocalVolumeType\:\sblockfs\]\s\[Testpattern\:\sPre\-provisioned\sPV\s\(default\sfs\)\]\ssubPath\sshould\ssupport\srestarting\scontainers\susing\sfile\sas\ssubpath\s\[Slow\]\[LinuxOnly\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:330
Nov 21 08:05:55.768: while waiting for pod to be running
Unexpected error:
    <*errors.errorString | 0xc00009f960>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:818
				
Stdout/stderr: junit_23.xml



Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node 45s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\slocal\]\[LocalVolumeType\:\sblockfs\]\s\[Testpattern\:\sPre\-provisioned\sPV\s\(filesystem\svolmode\)\]\smultiVolume\s\[Slow\]\sshould\sconcurrently\saccess\sthe\ssingle\svolume\sfrom\spods\son\sthe\ssame\snode$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:293
Nov 21 08:02:01.235: Error getting Kubelet bootstrap-e2e-minion-group-nkjc metrics: the server is currently unable to handle the request (get nodes bootstrap-e2e-minion-group-nkjc:10250)
Unexpected error:
    <*errors.StatusError | 0xc0018803c0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "the server is currently unable to handle the request (get nodes bootstrap-e2e-minion-group-nkjc:10250)",
            Reason: "ServiceUnavailable",
            Details: {
                Name: "bootstrap-e2e-minion-group-nkjc:10250",
                Group: "",
                Kind: "nodes",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error trying to reach service: 'dial tcp 10.132.0.4:10250: i/o timeout'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    the server is currently unable to handle the request (get nodes bootstrap-e2e-minion-group-nkjc:10250)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:604
				
Stdout/stderr: junit_21.xml



Test 34m48s

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Slow\] --ginkgo.skip=\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --minStartupPods=8 --report-dir=/workspace/_artifacts --disable-log-dump=true: exit status 1
From junit_runner.xml



Passed tests: 264
Skipped tests: 4563