Result: FAILURE
Tests: 5 failed / 188 succeeded
Started: 2019-07-12 08:19
Elapsed: 7h39m
Builder: gke-prow-ssd-pool-1a225945-7nsk
Pod: bc701b36-a47d-11e9-8217-96c43017ab5b
Resultstore: https://source.cloud.google.com/results/invocations/1d818a89-dc70-44af-b9aa-9b960bed8d5b/targets/test
infra-commit: 04c2406cc
job-version: v1.14.5-beta.0.1+7936da50c68f42
revision: v1.14.5-beta.0.1+7936da50c68f42
master_os_image: cos-73-11647-163-0
node_os_image: cos-73-11647-163-0

Test Failures


Cluster upgrade apparmor-upgrade (19m15s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Cluster\supgrade\sapparmor\-upgrade$'
Should be able to get pod
Unexpected error:
    <*errors.StatusError | 0xc000f12fa0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "pods \"test-apparmor-mwqht\" not found",
            Reason: "NotFound",
            Details: {
                Name: "test-apparmor-mwqht",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 404,
        },
    }
    pods "test-apparmor-mwqht" not found
occurred

k8s.io/kubernetes/test/e2e/upgrades.(*AppArmorUpgradeTest).verifyPodStillUp(0x808ddc0, 0xc000b5a8c0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/apparmor.go:89 +0x159
k8s.io/kubernetes/test/e2e/upgrades.(*AppArmorUpgradeTest).Test(0x808ddc0, 0xc000b5a8c0, 0xc0000c7860, 0x2)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/apparmor.go:73 +0x5a
k8s.io/kubernetes/test/e2e/lifecycle.(*chaosMonkeyAdapter).Test(0xc00346d600, 0xc001cec3c0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:397 +0x309
k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do.func1(0xc001cec3c0, 0xc001b2ff20)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:90 +0x76
created by k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:87 +0xa7
from junit_upgradeupgrades.xml
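
The NotFound above means the pre-upgrade test pod vanished while the upgrade ran. As a rough sketch of what a liveness check like verifyPodStillUp boils down to, assuming it is essentially a client-go Get on the pod's name (illustrative names and a current client-go signature, not the test's actual code):

package apparmorcheck

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// verifyPodStillUp is an illustrative stand-in for the e2e helper: it
// fails if the pod created before the upgrade can no longer be fetched.
func verifyPodStillUp(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		// This branch matches the failure above: a *errors.StatusError
		// with Reason "NotFound" and Code 404.
		return fmt.Errorf("pod %q was deleted during the upgrade: %w", name, err)
	}
	if err != nil {
		return fmt.Errorf("should be able to get pod: %w", err)
	}
	fmt.Printf("pod %s/%s still present (phase %s)\n", ns, pod.Name, pod.Status.Phase)
	return nil
}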



Kubernetes e2e suite [k8s.io] [sig-node] Kubelet [Serial] [Slow] [k8s.io] [sig-node] regular resource usage tracking resource tracking for 100 pods per node (53m14s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sKubelet\s\[Serial\]\s\[Slow\]\s\[k8s\.io\]\s\[sig\-node\]\sregular\sresource\susage\stracking\sresource\stracking\sfor\s100\spods\sper\snode$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 12 15:45:51.372: Couldn't delete ns: "kubelet-perf-2193": namespace kubelet-perf-2193 was not deleted with limit: timed out waiting for the condition, pods remaining: 1 (&errors.errorString{s:"namespace kubelet-perf-2193 was not deleted with limit: timed out waiting for the condition, pods remaining: 1"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
from junit_skew01.xml
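
The failure here is cleanup, not the test body: the framework timed out waiting for namespace kubelet-perf-2193 to finish deleting while one pod was still terminating. A minimal sketch of such a deletion wait, assuming it polls the namespace until Get returns NotFound (the poll interval and timeout are assumptions, and this uses the current apimachinery wait helper rather than the framework's actual code):

package nswait

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForNamespaceDeleted polls until the namespace is gone or the
// timeout expires; expiry surfaces as the "timed out waiting for the
// condition" error quoted above.
func waitForNamespaceDeleted(ctx context.Context, c kubernetes.Interface, ns string) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := c.CoreV1().Namespaces().Get(ctx, ns, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return true, nil // namespace fully deleted
			}
			if err != nil {
				return false, err // unexpected API error aborts the wait
			}
			return false, nil // still terminating; keep polling
		})
}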



Kubernetes e2e suite [sig-cluster-lifecycle] Upgrade [Feature:Upgrade] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade] (27m39s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cluster\-lifecycle\]\sUpgrade\s\[Feature\:Upgrade\]\scluster\supgrade\sshould\smaintain\sa\sfunctioning\scluster\s\[Feature\:ClusterUpgrade\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:136
Should be able to get pod
Unexpected error:
    (identical *errors.StatusError to the apparmor-upgrade failure above:
    Reason "NotFound", Code 404)
    pods "test-apparmor-mwqht" not found
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/apparmor.go:89
				
from junit_upgrade01.xml



SkewTest (6h51m)

error during kubetest --test --test_args=--ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\] --report-dir=/workspace/_artifacts --disable-log-dump=true --report-prefix=skew --check-version-skew=false: exit status 1
from junit_runner.xml
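
kubetest hands --ginkgo.focus and --ginkgo.skip through to Ginkgo, which treats them as regular expressions matched against each spec's full name: a spec runs only if it matches the focus and does not match the skip. A small illustration with the regexes from the SkewTest invocation above (the spec string is copied from the Kubelet failure; everything else is example code):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	focus := regexp.MustCompile(`\[Serial\]|\[Disruptive\]`)
	skip := regexp.MustCompile(`\[Flaky\]|\[Feature:.+\]`)

	spec := "Kubernetes e2e suite [k8s.io] [sig-node] Kubelet [Serial] [Slow] " +
		"[k8s.io] [sig-node] regular resource usage tracking resource tracking for 100 pods per node"

	run := focus.MatchString(spec) && !skip.MatchString(spec)
	fmt.Println("run:", run) // true: [Serial] focuses it, no skip tag matches
}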



UpgradeTest (27m57s)

error during kubetest --test --test_args=--ginkgo.focus=\[Feature:ClusterUpgrade\] --upgrade-target=ci/k8s-beta --upgrade-image=gci --report-dir=/workspace/_artifacts --disable-log-dump=true --report-prefix=upgrade --check-version-skew=false: exit status 1
from junit_runner.xml



188 Passed Tests

8666 Skipped Tests