Result: FAILURE
Tests: 29 failed / 538 succeeded
Started: 2019-11-16 01:49
Elapsed: 15h6m
Builder: gke-prow-ssd-pool-1a225945-w688
pod: 3700b755-0813-11ea-be88-5a2ed842773b
resultstore: https://source.cloud.google.com/results/invocations/5555c518-5ccf-458d-9714-4a80c2a1715e/targets/test
infra-commit: 9a3c983f8
job-version: v1.15.7-beta.0.1+54260e2be0c03f
revision: v1.15.7-beta.0.1+54260e2be0c03f
master_os_image:
node_os_image: cos-77-12371-89-0

Test Failures


Kubernetes e2e suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] 10m3s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sRuntime\sblackbox\stest\son\sterminated\scontainer\sshould\sreport\stermination\smessage\s\[LinuxOnly\]\sfrom\slog\soutput\sif\sTerminationMessagePolicy\sFallbackToLogsOnError\sis\sset\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 16 15:51:49.383: Couldn't delete ns: "container-runtime-3021": namespace container-runtime-3021 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace container-runtime-3021 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_01.xml

Filter through log files | View test history on testgrid
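
This failure and most of the others below share the same symptom: the test namespace is reported as empty but never finishes terminating before the framework's deletion timeout. The cluster from this run no longer exists, so the following is only a sketch of how one might investigate a namespace stuck in Terminating against a live cluster; the namespace name is taken from the error above, and the xargs pipeline is a common diagnostic idiom rather than part of the e2e tooling.

    # Inspect the namespace's finalizers and status conditions
    kubectl get namespace container-runtime-3021 -o yaml
    # List any namespaced objects that still exist in it
    kubectl api-resources --verbs=list --namespaced -o name \
      | xargs -n 1 kubectl get --show-kind --ignore-not-found -n container-runtime-3021
    # An unavailable aggregated API service can also block namespace cleanup
    kubectl get apiservices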


Kubernetes e2e suite [k8s.io] EquivalenceCache [Serial] validates pod affinity works properly when new replica pod is scheduled 6m10s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sEquivalenceCache\s\[Serial\]\svalidates\spod\saffinity\sworks\sproperly\swhen\snew\sreplica\spod\sis\sscheduled$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/equivalence_cache_predicates.go:52
Unexpected error:
    <*errors.errorString | 0xc0053a3ca0>: {
        s: "1 / 19 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                       NODE                                         PHASE   GRACE CONDITIONS\nstackdriver-metadata-agent-cluster-level-76bffb776b-79mk4 gke-bootstrap-e2e-default-pool-50841eb7-rn8f Running       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 04:46:08 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 05:27:56 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 05:27:56 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 04:46:08 +0000 UTC Reason: Message:}]\n",
    }
    1 / 19 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                       NODE                                         PHASE   GRACE CONDITIONS
    stackdriver-metadata-agent-cluster-level-76bffb776b-79mk4 gke-bootstrap-e2e-default-pool-50841eb7-rn8f Running       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 04:46:08 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 05:27:56 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 05:27:56 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 04:46:08 +0000 UTC Reason: Message:}]
    
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/equivalence_cache_predicates.go:74
				
Click to see stdout/stderr from junit_01.xml

Filter through log files | View test history on testgrid
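
Here the pre-test check failed because one kube-system pod never became Ready (the metadata-agent container in the stackdriver-metadata-agent pod listed above). As a hedged sketch only, assuming access to a live cluster in this state, the usual next steps would be to describe the pod and pull its container logs; the pod and container names come from the error output, everything else is generic kubectl usage.

    # Why is the pod unready? Check conditions, restart counts, and recent events
    kubectl -n kube-system describe pod stackdriver-metadata-agent-cluster-level-76bffb776b-79mk4
    # Logs from the unready container, including the previous instance if it restarted
    kubectl -n kube-system logs stackdriver-metadata-agent-cluster-level-76bffb776b-79mk4 -c metadata-agent --previous
    # Namespace events, most recent last
    kubectl -n kube-system get events --sort-by=.lastTimestamp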


Kubernetes e2e suite [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] 10m5s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sPods\sshould\ssupport\sremote\scommand\sexecution\sover\swebsockets\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 16 14:55:32.588: Couldn't delete ns: "pods-1958": namespace pods-1958 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace pods-1958 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_01.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] 10m5s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sSecurity\sContext\sshould\ssupport\spod\.Spec\.SecurityContext\.SupplementalGroups\s\[LinuxOnly\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 16 13:31:06.076: Couldn't delete ns: "security-context-3611": namespace security-context-3611 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace security-context-3611 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_01.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 10m20s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sResourceQuota\sshould\screate\sa\sResourceQuota\sand\scapture\sthe\slife\sof\sa\ssecret\.$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 16 14:02:19.045: Couldn't delete ns: "resourcequota-9011": namespace resourcequota-9011 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace resourcequota-9011 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_01.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 10m17s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sResourceQuota\sshould\sverify\sResourceQuota\swith\sterminating\sscopes\.$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 16 13:21:00.674: Couldn't delete ns: "resourcequota-2545": namespace resourcequota-2545 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace resourcequota-2545 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_01.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-apps] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout 2m8s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sNetwork\sPartition\s\[Disruptive\]\s\[Slow\]\s\[k8s\.io\]\sPods\sshould\sreturn\sto\srunning\sand\sready\sstate\safter\snetwork\spartition\sis\shealed\sAll\spods\son\sthe\sunreachable\snode\sshould\sbe\smarked\sas\sNotReady\supon\sthe\snode\sturn\sNotReady\sAND\sall\spods\sshould\sbe\smark\sback\sto\sReady\swhen\sthe\snode\sget\sback\sto\sReady\sbefore\spod\seviction\stimeout$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/network_partition.go:137
Nov 16 12:28:25.169: Pods on node gke-bootstrap-e2e-default-pool-50841eb7-rn8f are not ready and running within 2m0s: gave up waiting for matching pods to be 'Running and Ready' after 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/network_partition.go:161
				
Click to see stdout/stderr from junit_01.xml

Filter through log files | View test history on testgrid
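
This disruptive test expects pods on the partitioned node to return to Running and Ready within 2m0s after the partition heals, and they did not. Against a live cluster one could confirm whether the node itself recovered and what was still scheduled on it; the node name is from the error above, the rest is standard kubectl, shown only as an illustrative sketch.

    # Did the node come back to Ready?
    kubectl get nodes
    kubectl describe node gke-bootstrap-e2e-default-pool-50841eb7-rn8f
    # Pods still bound to that node and their phases
    kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=gke-bootstrap-e2e-default-pool-50841eb7-rn8f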


Kubernetes e2e suite [sig-apps] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] 12m58s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sNetwork\sPartition\s\[Disruptive\]\s\[Slow\]\s\[k8s\.io\]\s\[StatefulSet\]\sshould\scome\sback\sup\sif\snode\sgoes\sdown\s\[Slow\]\s\[Disruptive\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 16 13:10:43.275: Couldn't delete ns: "network-partition-1888": namespace network-partition-1888 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace network-partition-1888 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_01.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] ReplicationController light Should scale from 1 pod to 2 pods 15m58s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\s\[HPA\]\sHorizontal\spod\sautoscaling\s\(scale\sresource\:\sCPU\)\s\[sig\-autoscaling\]\sReplicationController\slight\sShould\sscale\sfrom\s1\spod\sto\s2\spods$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:70
timeout waiting 15m0s for 2 replicas
Unexpected error:
    <*errors.errorString | 0xc00029f8a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:124
				
Click to see stdout/stderr from junit_01.xml

Filter through log files | View test history on testgrid
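
The autoscaler never reached the expected 2 replicas within 15m0s (and the [Serial] [Slow] HPA test below times out the same way at 3 replicas). If reproducing against a live cluster, the first thing to check is whether the HPA is seeing CPU metrics at all; the test creates its HPA in a per-test namespace, so <test-namespace> and <hpa-name> below are placeholders rather than names taken from this run.

    # Current vs. target metrics and replica counts for the HPA
    kubectl -n <test-namespace> get hpa
    kubectl -n <test-namespace> describe hpa <hpa-name>
    # Confirms that the metrics pipeline is reporting pod CPU usage
    kubectl -n <test-namespace> top pods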


Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability 15m59s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\s\[HPA\]\sHorizontal\spod\sautoscaling\s\(scale\sresource\:\sCPU\)\s\[sig\-autoscaling\]\s\[Serial\]\s\[Slow\]\sReplicationController\sShould\sscale\sfrom\s5\spods\sto\s3\spods\sand\sfrom\s3\sto\s1\sand\sverify\sdecision\sstability$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:64
timeout waiting 15m0s for 3 replicas
Unexpected error:
    <*errors.errorString | 0xc00029f8a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:124
				
Click to see stdout/stderr from junit_01.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] 10m1s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\s\[k8s\.io\]\sKubectl\sapi\-versions\sshould\scheck\sif\sv1\sis\sin\savailable\sapi\sversions\s\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 16 15:31:37.987: Couldn't delete ns: "kubectl-5999": namespace kubectl-5999 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace kubectl-5999 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_01.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-cli] Kubectl client [k8s.io] Kubectl copy should copy a file from a running Pod 10m4s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\s\[k8s\.io\]\sKubectl\scopy\sshould\scopy\sa\sfile\sfrom\sa\srunning\sPod$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 16 14:12:23.610: Couldn't delete ns: "kubectl-3815": namespace kubectl-3815 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace kubectl-3815 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_01.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] 10m1s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\s\[k8s\.io\]\sProxy\sserver\sshould\ssupport\s\-\-unix\-socket\=\/path\s\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 16 14:32:30.689: Couldn't delete ns: "kubectl-4947": namespace kubectl-4947 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace kubectl-4947 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_01.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-cluster-lifecycle] Nodes [Disruptive] Resize [Slow] should be able to add nodes 12m3s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cluster\-lifecycle\]\sNodes\s\[Disruptive\]\sResize\s\[Slow\]\sshould\sbe\sable\sto\sadd\snodes$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/resize_nodes.go:74
Unexpected error:
    <*errors.errorString | 0xc007caab70>: {
        s: "3 / 19 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                       NODE                                         PHASE   GRACE CONDITIONS\nfluentd-gcp-v3.1.1-lbvnb                                  gke-bootstrap-e2e-default-pool-50841eb7-w7bf Running       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 01:57:34 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 11:05:07 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 02:09:45 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 01:57:34 +0000 UTC Reason: Message:}]\nprometheus-to-sd-65rlj                                    gke-bootstrap-e2e-default-pool-50841eb7-w7bf Running       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 01:56:30 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 11:05:07 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 01:56:33 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 01:56:30 +0000 UTC Reason: Message:}]\nstackdriver-metadata-agent-cluster-level-76bffb776b-79mk4 gke-bootstrap-e2e-default-pool-50841eb7-rn8f Running       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 04:46:08 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 11:44:44 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 11:44:44 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 04:46:08 +0000 UTC Reason: Message:}]\n",
    }
    3 / 19 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                       NODE                                         PHASE   GRACE CONDITIONS
    fluentd-gcp-v3.1.1-lbvnb                                  gke-bootstrap-e2e-default-pool-50841eb7-w7bf Running       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 01:57:34 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 11:05:07 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 02:09:45 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 01:57:34 +0000 UTC Reason: Message:}]
    prometheus-to-sd-65rlj                                    gke-bootstrap-e2e-default-pool-50841eb7-w7bf Running       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 01:56:30 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 11:05:07 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 01:56:33 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 01:56:30 +0000 UTC Reason: Message:}]
    stackdriver-metadata-agent-cluster-level-76bffb776b-79mk4 gke-bootstrap-e2e-default-pool-50841eb7-rn8f Running       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 04:46:08 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 11:44:44 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 11:44:44 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 04:46:08 +0000 UTC Reason: Message:}]
    
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/resize_nodes.go:106
				
Click to see stdout/stderr from junit_01.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-cluster-lifecycle] Nodes [Disruptive] Resize [Slow] should be able to delete nodes 14m13s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cluster\-lifecycle\]\sNodes\s\[Disruptive\]\sResize\s\[Slow\]\sshould\sbe\sable\sto\sdelete\snodes$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/resize_nodes.go:74
Unexpected error:
    <*errors.errorString | 0xc00157d6f0>: {
        s: "1 / 19 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                       NODE                                         PHASE   GRACE CONDITIONS\nstackdriver-metadata-agent-cluster-level-76bffb776b-79mk4 gke-bootstrap-e2e-default-pool-50841eb7-rn8f Running       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 04:46:08 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 04:52:12 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 04:52:12 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 04:46:08 +0000 UTC Reason: Message:}]\n",
    }
    1 / 19 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                       NODE                                         PHASE   GRACE CONDITIONS
    stackdriver-metadata-agent-cluster-level-76bffb776b-79mk4 gke-bootstrap-e2e-default-pool-50841eb7-rn8f Running       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 04:46:08 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 04:52:12 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 04:52:12 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 04:46:08 +0000 UTC Reason: Message:}]
    
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/resize_nodes.go:106