Result: FAILURE
Tests: 35 failed / 461 succeeded
Started: 2020-02-04 06:10
Elapsed: 15h15m
Revision: v1.15.10-beta.0.15+e91de4083dbd87
Builder: gke-prow-default-pool-cf4891d4-wvhk
pod: ebbea46b-4714-11ea-b8d7-32e01c04da64
resultstore: https://source.cloud.google.com/results/invocations/c9c55e19-5048-4d0c-8e02-f123d324600f/targets/test
infra-commit: a98df39a6
job-version: v1.15.10-beta.0.15+e91de4083dbd87
master_os_image: cos-73-11647-163-0
node_os_image: cos-73-11647-163-0

Test Failures


Kubernetes e2e suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] 10m7s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\swhen\screating\scontainers\swith\sAllowPrivilegeEscalation\sshould\snot\sallow\sprivilege\sescalation\swhen\sfalse\s\[LinuxOnly\]\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 20:54:43.385: Couldn't delete ns: "security-context-test-6215": namespace security-context-test-6215 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
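
This failure, and the three similar ones below, time out waiting for an already-empty namespace to finish terminating. As a diagnostic sketch against a live cluster (the namespace name is copied from the failure above; the finalizer check and leftover-object sweep are standard kubectl patterns, not part of this job's output):

kubectl get namespace security-context-test-6215 -o jsonpath='{.status.phase}'
kubectl get namespace security-context-test-6215 -o jsonpath='{.spec.finalizers}'
kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get -n security-context-test-6215 --ignore-not-found --show-kind

Since the framework already reports the namespace as empty, an uncleared finalizer on the namespace object itself is the usual suspect for it lingering in Terminating.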
				


Kubernetes e2e suite [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage][NodeFeature:VolumeSubpathEnvExpansion] 10m4s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sVariable\sExpansion\sshould\sallow\ssubstituting\svalues\sin\sa\svolume\ssubpath\s\[sig\-storage\]\[NodeFeature\:VolumeSubpathEnvExpansion\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 16:59:15.822: Couldn't delete ns: "var-expansion-3362": namespace var-expansion-3362 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				


Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] 11m24s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sStatefulSet\s\[k8s\.io\]\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sshould\sperform\scanary\supdates\sand\sphased\srolling\supdates\sof\stemplate\smodifications\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 18:28:28.725: Couldn't delete ns: "statefulset-4222": namespace statefulset-4222 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				


Kubernetes e2e suite [sig-cli] Kubectl Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects NO client request should support a client that connects, sends DATA, and disconnects 10m55s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sPort\sforwarding\s\[k8s\.io\]\sWith\sa\sserver\slistening\son\slocalhost\s\[k8s\.io\]\sthat\sexpects\sNO\sclient\srequest\sshould\ssupport\sa\sclient\sthat\sconnects\,\ssends\sDATA\,\sand\sdisconnects$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 17:10:11.267: Couldn't delete ns: "port-forwarding-8423": namespace port-forwarding-8423 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				


Kubernetes e2e suite [sig-cluster-lifecycle] Nodes [Disruptive] Resize [Slow] should be able to delete nodes 20m37s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cluster\-lifecycle\]\sNodes\s\[Disruptive\]\sResize\s\[Slow\]\sshould\sbe\sable\sto\sdelete\snodes$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/resize_nodes.go:74
Unexpected error:
    7 / 28 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                     NODE                            PHASE   GRACE CONDITIONS
    event-exporter-v0.2.5-5fd6f794f7-4m65v  bootstrap-e2e-minion-group-bf5x Running       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 12:55:27 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 15:09:05 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [event-exporter prometheus-to-sd-exporter]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 15:09:29 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [event-exporter prometheus-to-sd-exporter]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 12:55:27 +0000 UTC Reason: Message:}]
    fluentd-gcp-scaler-6848d689fb-nxvwz     bootstrap-e2e-minion-group-bf5x Running       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 12:55:27 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 15:09:05 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [fluentd-gcp-scaler]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 15:09:29 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [fluentd-gcp-scaler]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 12:55:27 +0000 UTC Reason: Message:}]
    heapster-v1.6.0-beta.1-6cf46d596d-lqr9k bootstrap-e2e-minion-group-xnlw Running       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 06:28:40 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 15:09:06 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [heapster heapster-nanny]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 15:09:26 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [heapster heapster-nanny]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 06:28:40 +0000 UTC Reason: Message:}]
    kube-dns-autoscaler-584b9b9fff-t4gqz    bootstrap-e2e-minion-group-xnlw Running       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 06:28:40 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 15:09:06 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 15:09:25 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 06:28:40 +0000 UTC Reason: Message:}]
    kubernetes-dashboard-7dffd7df8d-vrk7x   bootstrap-e2e-minion-group-xnlw Running       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 06:28:40 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 15:09:06 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [kubernetes-dashboard]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 15:09:25 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [kubernetes-dashboard]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 06:28:40 +0000 UTC Reason: Message:}]
    l7-default-backend-678889f899-cwzsr     bootstrap-e2e-minion-group-bf5x Running       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 12:55:27 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 15:09:06 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [default-http-backend]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 15:09:29 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [default-http-backend]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 12:55:27 +0000 UTC Reason: Message:}]
    metrics-server-v0.3.6-7bb686969b-xfgcc  bootstrap-e2e-minion-group-xnlw Running       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 06:28:58 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 15:09:06 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metrics-server metrics-server-nanny]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 15:09:26 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metrics-server metrics-server-nanny]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 06:28:58 +0000 UTC Reason: Message:}]
    
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/resize_nodes.go:106
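
The failure text lists seven kube-system pods stuck in ContainersNotReady after the resize. The same view can be reproduced on a live cluster with standard kubectl (sketch only; the pod and container names below are copied from the failure output above):

kubectl get pods -n kube-system -o wide
kubectl describe pod event-exporter-v0.2.5-5fd6f794f7-4m65v -n kube-system
kubectl logs event-exporter-v0.2.5-5fd6f794f7-4m65v -n kube-system -c event-exporter --previous

describe surfaces the per-container waiting/terminated reasons behind ContainersNotReady, and --previous shows the prior run's logs if a container is crash-looping.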
				


Kubernetes e2e suite [sig-cluster-lifecycle] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover 17m14s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cluster\-lifecycle\]\sRestart\s\[Disruptive\]\sshould\srestart\sall\snodes\sand\sensure\sall\snodes\sand\spods\srecover$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/restart.go:86
Feb  4 15:15:32.228: At least one pod wasn't running and ready after the restart.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/restart.go:115
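
To verify recovery by hand after a disruptive restart like this one (a sketch using standard kubectl, not part of the test output), check that every node is Ready and that no pod is stuck outside Running:

kubectl get nodes
kubectl get pods --all-namespaces --field-selector=status.phase!=Running
kubectl wait --for=condition=Ready pod --all -n kube-system --timeout=5m

The check that failed at restart.go:115 is the same idea: the test waits a fixed timeout for all pre-restart pods to report Running and Ready again.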