Result: FAILURE
Tests: 5 failed / 188 succeeded
Started: 2019-07-19 23:07
Elapsed: 7h11m
Builder: gke-prow-ssd-pool-1a225945-66t6
pod: edc35abc-aa79-11e9-b82b-365474bd0c86
resultstore: https://source.cloud.google.com/results/invocations/40f5656b-867e-4d48-afe8-374034d4465a/targets/test
infra-commit: 3cdd71722
job-version: v1.14.5-beta.0.1+7936da50c68f42
master_os_image: cos-73-11647-163-0
node_os_image: cos-73-11647-163-0
revision: v1.14.5-beta.0.1+7936da50c68f42

Test Failures


Cluster upgrade apparmor-upgrade 21m35s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Cluster\supgrade\sapparmor\-upgrade$'
Should be able to get pod
Unexpected error:
    <*errors.StatusError | 0xc000aa8d20>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "pods \"test-apparmor-lf44d\" not found",
            Reason: "NotFound",
            Details: {
                Name: "test-apparmor-lf44d",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 404,
        },
    }
    pods "test-apparmor-lf44d" not found
occurred

k8s.io/kubernetes/test/e2e/upgrades.(*AppArmorUpgradeTest).verifyPodStillUp(0x808ddc0, 0xc000d5f040)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/apparmor.go:89 +0x159
k8s.io/kubernetes/test/e2e/upgrades.(*AppArmorUpgradeTest).Test(0x808ddc0, 0xc000d5f040, 0xc002e1a8a0, 0x2)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/apparmor.go:73 +0x5a
k8s.io/kubernetes/test/e2e/lifecycle.(*chaosMonkeyAdapter).Test(0xc002dc5740, 0xc002e11580)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:397 +0x309
k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do.func1(0xc002e11580, 0xc002da7a10)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:90 +0x76
created by k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:87 +0xa7
from junit_upgradeupgrades.xml

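The *errors.StatusError dump above (Reason: NotFound, Code: 404) is the standard shape client-go returns for a missing object: the test's pod lookup came back 404 because the pod no longer existed after the upgrade. A minimal sketch, assuming a recent client-go with the context-taking Get signature and a kubeconfig at the default path (the pod name comes from the failure above; the namespace is hypothetical, since the output does not name it):

package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load a kubeconfig from the default location (an assumption for this sketch).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Getting a pod that no longer exists returns a *errors.StatusError with
	// Reason: NotFound and Code: 404, the exact shape dumped above.
	// The "default" namespace is hypothetical.
	_, err = client.CoreV1().Pods("default").Get(context.TODO(), "test-apparmor-lf44d", metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		fmt.Println("pod is gone:", err)
	} else if err != nil {
		fmt.Println("unexpected API error:", err)
	}
}

apierrors.IsNotFound is the usual way to tell a deleted object apart from other API failures; the test fails at this point because the pod it created before the upgrade (see verifyPodStillUp in the trace) is expected to still exist afterwards.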


Kubernetes e2e suite [sig-cluster-lifecycle] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover 11s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cluster\-lifecycle\]\sRestart\s\[Disruptive\]\sshould\srestart\sall\snodes\sand\sensure\sall\snodes\sand\spods\srecover$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/restart.go:50
Jul 20 01:28:19.433: At least one pod wasn't running and ready or succeeded at test start.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/restart.go:76
				
from junit_skew01.xml

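The line above is the restart test's precondition failing: before restarting any nodes, it requires every pod to be Running and Ready, or to have Succeeded. A minimal sketch, assuming k8s.io/api types, of the predicate that message implies (a hypothetical helper; the e2e suite uses its own wait utilities):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// podReadyOrSucceeded is a hypothetical version of the check implied by the
// failure message: a pod passes if it has Succeeded, or is Running with the
// Ready condition set to True.
func podReadyOrSucceeded(pod *corev1.Pod) bool {
	if pod.Status.Phase == corev1.PodSucceeded {
		return true
	}
	if pod.Status.Phase != corev1.PodRunning {
		return false
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Phase: corev1.PodRunning,
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionFalse},
			},
		},
	}
	// Prints false: Running but not Ready, exactly the state that
	// fails the test's start-of-run check.
	fmt.Println(podReadyOrSucceeded(pod))
}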


Kubernetes e2e suite [sig-cluster-lifecycle] Upgrade [Feature:Upgrade] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade] 28m59s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cluster\-lifecycle\]\sUpgrade\s\[Feature\:Upgrade\]\scluster\supgrade\sshould\smaintain\sa\sfunctioning\scluster\s\[Feature\:ClusterUpgrade\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:136
Should be able to get pod
Unexpected error:
    <*errors.StatusError | 0xc000aa8d20>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "pods \"test-apparmor-lf44d\" not found",
            Reason: "NotFound",
            Details: {
                Name: "test-apparmor-lf44d",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 404,
        },
    }
    pods "test-apparmor-lf44d" not found
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/apparmor.go:89
				
from junit_upgrade01.xml



SkewTest 6h19m

error during kubetest --test --test_args=--ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\] --report-dir=/workspace/_artifacts --disable-log-dump=true --report-prefix=skew --check-version-skew=false: exit status 1
from junit_runner.xml

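SkewTest is the kubetest wrapper around the Ginkgo run; it exits non-zero because specs it selected failed. For reference, a minimal sketch of how the --ginkgo.focus and --ginkgo.skip flags in the invocation above select specs: both are regular expressions matched against the full spec name, and a spec runs when it matches focus and does not match skip (spec names taken from this page):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// The focus/skip patterns from the SkewTest invocation above.
	focus := regexp.MustCompile(`\[Serial\]|\[Disruptive\]`)
	skip := regexp.MustCompile(`\[Flaky\]|\[Feature:.+\]`)

	specs := []string{
		"[sig-cluster-lifecycle] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover",
		"[sig-cluster-lifecycle] Upgrade [Feature:Upgrade] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade]",
	}
	for _, s := range specs {
		// A spec is selected when it matches focus and does not match skip.
		selected := focus.MatchString(s) && !skip.MatchString(s)
		fmt.Printf("selected=%t  %s\n", selected, s)
	}
}

This is why the Restart spec ran under SkewTest (it matches [Disruptive] and nothing in skip), while the ClusterUpgrade spec is excluded there by [Feature:.+] and only ran under UpgradeTest below, whose focus is [Feature:ClusterUpgrade].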


UpgradeTest 29m14s

error during kubetest --test --test_args=--ginkgo.focus=\[Feature:ClusterUpgrade\] --upgrade-target=ci/k8s-beta --upgrade-image=gci --report-dir=/workspace/_artifacts --disable-log-dump=true --report-prefix=upgrade --check-version-skew=false: exit status 1
from junit_runner.xml



188 Passed Tests

8666 Skipped Tests