Result: FAILURE
Tests: 9 failed / 158 succeeded
Started: 2019-11-19 22:10
Elapsed: 6h7m
Revision:
Builder: gke-prow-ssd-pool-1a225945-q00f
links.resultstore: https://source.cloud.google.com/results/invocations/45a80f3f-537d-4ffc-8366-d44ff59fe53f/targets/test
pod: 53f97694-0b19-11ea-b5e9-3289c6e090ac
resultstore: https://source.cloud.google.com/results/invocations/45a80f3f-537d-4ffc-8366-d44ff59fe53f/targets/test
infra-commit: ca3411f8f
job-version: v1.16.4-beta.0.3+c0f31a4ef6304d
master_os_image:
node_os_image: cos-77-12371-89-0
pod: 53f97694-0b19-11ea-b5e9-3289c6e090ac
revision: v1.16.4-beta.0.3+c0f31a4ef6304d

Test Failures


Kubernetes e2e suite [sig-apps] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout 2m12s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sNetwork\sPartition\s\[Disruptive\]\s\[Slow\]\s\[k8s\.io\]\sPods\sshould\sreturn\sto\srunning\sand\sready\sstate\safter\snetwork\spartition\sis\shealed\sAll\spods\son\sthe\sunreachable\snode\sshould\sbe\smarked\sas\sNotReady\supon\sthe\snode\sturn\sNotReady\sAND\sall\spods\sshould\sbe\smark\sback\sto\sReady\swhen\sthe\snode\sget\sback\sto\sReady\sbefore\spod\seviction\stimeout$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/network_partition.go:141
Nov 20 03:02:03.503: Pods on node gke-test-b9c2cf519b-default-pool-43836234-k78f are not ready and running within 2m0s: gave up waiting for matching pods to be 'Running and Ready' after 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/network_partition.go:165
				
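The message above says the pods on node gke-test-b9c2cf519b-default-pool-43836234-k78f never returned to Running and Ready within the 2m0s window after the partition was healed. As a minimal sketch (assuming kubectl access to the test cluster while it is still up; the node name is copied from the failure message), the readiness of the pods on that node could be checked with:

kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=gke-test-b9c2cf519b-default-pool-43836234-k78f

Any pod stuck at READY 0/1 in that listing would be the next place to look.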


Kubernetes e2e suite [sig-cluster-lifecycle] Nodes [Disruptive] Resize [Slow] should be able to add nodes 12m24s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cluster\-lifecycle\]\sNodes\s\[Disruptive\]\sResize\s\[Slow\]\sshould\sbe\sable\sto\sadd\snodes$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/resize_nodes.go:77
Nov 20 03:58:38.244: Unexpected error:
    <*errors.errorString | 0xc003a47da0>: {
        s: "1 / 17 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                       NODE                                           PHASE   GRACE CONDITIONS\nstackdriver-metadata-agent-cluster-level-58d7699fbf-56w6f gke-test-b9c2cf519b-default-pool-43836234-k78f Running       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-19 22:53:43 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-20 03:51:29 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-20 03:51:29 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-19 22:53:43 +0000 UTC Reason: Message:}]\n",
    }
    1 / 17 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                       NODE                                           PHASE   GRACE CONDITIONS
    stackdriver-metadata-agent-cluster-level-58d7699fbf-56w6f gke-test-b9c2cf519b-default-pool-43836234-k78f Running       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-19 22:53:43 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-20 03:51:29 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-20 03:51:29 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-19 22:53:43 +0000 UTC Reason: Message:}]
    
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/resize_nodes.go:109
				
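The only kube-system pod blocking the 5m0s readiness check here is stackdriver-metadata-agent-cluster-level-58d7699fbf-56w6f, whose metadata-agent container is reported as unready (Reason: ContainersNotReady); the same pod shows up in several of the failures below as well. A possible next step (a sketch only, assuming kubectl access to the cluster; the pod, namespace, and container names are taken verbatim from the error text above) would be:

kubectl describe pod stackdriver-metadata-agent-cluster-level-58d7699fbf-56w6f -n kube-system
kubectl logs stackdriver-metadata-agent-cluster-level-58d7699fbf-56w6f -n kube-system -c metadata-agent --previous

The describe output would show the container's last state and recent events, and --previous pulls logs from the prior container instance if the container has been restarting.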


Kubernetes e2e suite [sig-cluster-lifecycle] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover 5m12s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cluster\-lifecycle\]\sRestart\s\[Disruptive\]\sshould\srestart\sall\snodes\sand\sensure\sall\snodes\sand\spods\srecover$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/restart.go:52
Nov 20 00:24:33.309: At least one pod wasn't running and ready or succeeded at test start.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/restart.go:78
				


Kubernetes e2e suite [sig-cluster-lifecycle] [Disruptive]NodeLease NodeLease deletion node lease should be deleted when corresponding node is deleted 13m11s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cluster\-lifecycle\]\s\[Disruptive\]NodeLease\sNodeLease\sdeletion\snode\slease\sshould\sbe\sdeleted\swhen\scorresponding\snode\sis\sdeleted$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/node_lease.go:66
Nov 19 23:03:46.365: Expected
    <*errors.errorString | 0xc002979e80>: {
        s: "1 / 17 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                       NODE                                           PHASE   GRACE CONDITIONS\nstackdriver-metadata-agent-cluster-level-58d7699fbf-56w6f gke-test-b9c2cf519b-default-pool-43836234-k78f Running       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-19 22:53:43 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-19 22:56:41 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-19 22:56:41 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-19 22:53:43 +0000 UTC Reason: Message:}]\n",
    }
to be nil
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/node_lease.go:98
				


Kubernetes e2e suite [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation 6m13s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-scheduling\]\sSchedulerPriorities\s\[Serial\]\sPod\sshould\savoid\snodes\sthat\shave\savoidPod\sannotation$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:76
Nov 20 03:18:21.223: Unexpected error:
    <*errors.errorString | 0xc002460d90>: {
        s: "1 / 17 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                       NODE                                           PHASE   GRACE CONDITIONS\nstackdriver-metadata-agent-cluster-level-58d7699fbf-56w6f gke-test-b9c2cf519b-default-pool-43836234-k78f Running       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-19 22:53:43 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-20 02:56:43 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-20 02:56:43 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-19 22:53:43 +0000 UTC Reason: Message:}]\n",
    }
    1 / 17 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                       NODE                                           PHASE   GRACE CONDITIONS
    stackdriver-metadata-agent-cluster-level-58d7699fbf-56w6f gke-test-b9c2cf519b-default-pool-43836234-k78f Running       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-19 22:53:43 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-20 02:56:43 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-20 02:56:43 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-19 22:53:43 +0000 UTC Reason: Message:}]
    
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:93
				


Kubernetes e2e suite [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate 6m13s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-scheduling\]\sSchedulerPriorities\s\[Serial\]\sPod\sshould\sbe\spreferably\sscheduled\sto\snodes\spod\scan\stolerate$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:76
Nov 20 02:26:35.923: Unexpected error:
    <*errors.errorString | 0xc00341c640>: {
        s: "1 / 17 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                       NODE                                           PHASE   GRACE CONDITIONS\nstackdriver-metadata-agent-cluster-level-58d7699fbf-56w6f gke-test-b9c2cf519b-default-pool-43836234-k78f Running       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-19 22:53:43 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-20 01:59:41 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-20 01:59:41 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-19 22:53:43 +0000 UTC Reason: Message:}]\n",
    }
    1 / 17 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                       NODE                                           PHASE   GRACE CONDITIONS
    stackdriver-metadata-agent-cluster-level-58d7699fbf-56w6f gke-test-b9c2cf519b-default-pool-43836234-k78f Running       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-19 22:53:43 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-20 01:59:41 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-20 01:59:41 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-19 22:53:43 +0000 UTC Reason: Message:}]
    
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:93