PR BobyMCbobs: Promote pod PreemptionExecutionPath verification
Result FAILURE
Tests 0 failed / 72 succeeded
Started 2020-01-15 02:28
Elapsed 12m3s
Revision 2c72904fc5c20e80eb1a10d428e28b43b38fed7e
Refs 83378

No Test Failures!



Error lines from build-log.txt

... skipping 56 lines ...
Recording: record_command_canary
Running command: record_command_canary

+++ Running case: test-cmd.record_command_canary 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: record_command_canary
/home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh: line 155: bogus-expected-to-fail: command not found
!!! [0115 02:33:38] Call tree:
!!! [0115 02:33:38]  1: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:47 record_command_canary(...)
!!! [0115 02:33:38]  2: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:112 eVal(...)
!!! [0115 02:33:38]  3: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:131 juLog(...)
!!! [0115 02:33:38]  4: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:159 record_command(...)
!!! [0115 02:33:38]  5: hack/make-rules/test-cmd.sh:35 source(...)
+++ exit code: 1
+++ error: 1
+++ [0115 02:33:38] Running kubeadm tests
+++ [0115 02:33:47] Building go targets for linux/amd64:
    cmd/kubeadm
+++ [0115 02:34:49] Running tests without code coverage
{"Time":"2020-01-15T02:36:40.773520942Z","Action":"output","Package":"k8s.io/kubernetes/cmd/kubeadm/test/cmd","Output":"ok  \tk8s.io/kubernetes/cmd/kubeadm/test/cmd\t59.481s\n"}
✓  cmd/kubeadm/test/cmd (59.482s)
... skipping 302 lines ...
+++ [0115 02:38:57] Building kube-controller-manager
+++ [0115 02:39:05] Building go targets for linux/amd64:
    cmd/kube-controller-manager
+++ [0115 02:39:46] Starting controller-manager
Flag --port has been deprecated, see --secure-port instead.
I0115 02:39:47.350987   54754 serving.go:313] Generated self-signed cert in-memory
W0115 02:39:47.884166   54754 authentication.go:409] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0115 02:39:47.884233   54754 authentication.go:267] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0115 02:39:47.884245   54754 authentication.go:291] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0115 02:39:47.884265   54754 authorization.go:177] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0115 02:39:47.884280   54754 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0115 02:39:47.884315   54754 controllermanager.go:161] Version: v1.18.0-alpha.1.729+6af91bdd050ca6
I0115 02:39:47.885488   54754 secure_serving.go:178] Serving securely on [::]:10257
I0115 02:39:47.885946   54754 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
I0115 02:39:47.886001   54754 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-controller-manager...
I0115 02:39:47.886531   54754 tlsconfig.go:241] Starting DynamicServingCertificateController
... skipping 90 lines ...
I0115 02:39:48.420179   54754 shared_informer.go:206] Waiting for caches to sync for certificate-csrapproving
I0115 02:39:48.420212   54754 node_lifecycle_controller.go:423] Controller is using taint based evictions.
I0115 02:39:48.420288   54754 taint_manager.go:162] Sending events to api server.
I0115 02:39:48.420363   54754 node_lifecycle_controller.go:520] Controller will reconcile labels.
I0115 02:39:48.420385   54754 controllermanager.go:533] Started "nodelifecycle"
I0115 02:39:48.420611   54754 node_lifecycle_controller.go:77] Sending events to api server
E0115 02:39:48.420636   54754 core.go:231] failed to start cloud node lifecycle controller: no cloud provider provided
W0115 02:39:48.420645   54754 controllermanager.go:525] Skipping "cloud-node-lifecycle"
W0115 02:39:48.421186   54754 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0115 02:39:48.421236   54754 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0115 02:39:48.421261   54754 controllermanager.go:533] Started "persistentvolume-binder"
I0115 02:39:48.421587   54754 controllermanager.go:533] Started "pvc-protection"
I0115 02:39:48.421900   54754 controllermanager.go:533] Started "disruption"
E0115 02:39:48.423332   54754 core.go:90] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0115 02:39:48.423409   54754 controllermanager.go:525] Skipping "service"
W0115 02:39:48.423447   54754 controllermanager.go:525] Skipping "ttl-after-finished"
I0115 02:39:48.424011   54754 node_lifecycle_controller.go:554] Starting node controller
I0115 02:39:48.424091   54754 shared_informer.go:206] Waiting for caches to sync for taint
I0115 02:39:48.424123   54754 disruption.go:330] Starting disruption controller
I0115 02:39:48.424150   54754 shared_informer.go:206] Waiting for caches to sync for disruption
... skipping 55 lines ...
I0115 02:39:48.969636   54754 shared_informer.go:206] Waiting for caches to sync for expand
I0115 02:39:48.969659   54754 pv_protection_controller.go:81] Starting PV protection controller
I0115 02:39:48.969663   54754 shared_informer.go:206] Waiting for caches to sync for PV protection
I0115 02:39:48.969692   54754 ttl_controller.go:116] Starting TTL controller
I0115 02:39:48.969696   54754 shared_informer.go:206] Waiting for caches to sync for TTL
I0115 02:39:48.988663   54754 graph_builder.go:282] GraphBuilder running
W0115 02:39:48.999608   54754 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
I0115 02:39:49.021726   54754 shared_informer.go:213] Caches are synced for certificate-csrapproving 
I0115 02:39:49.060106   54754 shared_informer.go:213] Caches are synced for ClusterRoleAggregator 
I0115 02:39:49.070477   54754 shared_informer.go:213] Caches are synced for TTL 
I0115 02:39:49.071174   54754 shared_informer.go:213] Caches are synced for PV protection 
I0115 02:39:49.073126   54754 shared_informer.go:213] Caches are synced for expand 
E0115 02:39:49.105375   54754 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
E0115 02:39:49.105668   54754 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
I0115 02:39:49.254929   54754 shared_informer.go:213] Caches are synced for job 
I0115 02:39:49.255404   54754 shared_informer.go:213] Caches are synced for endpoint 
I0115 02:39:49.257723   54754 shared_informer.go:213] Caches are synced for attach detach 
I0115 02:39:49.259252   54754 shared_informer.go:213] Caches are synced for ReplicationController 
I0115 02:39:49.262955   54754 shared_informer.go:213] Caches are synced for GC 
I0115 02:39:49.269758   54754 shared_informer.go:213] Caches are synced for stateful set 
... skipping 99 lines ...
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_RESTMapper_evaluation_tests
+++ [0115 02:39:54] Creating namespace namespace-1579055994-6099
namespace/namespace-1579055994-6099 created
Context "test" modified.
+++ [0115 02:39:54] Testing RESTMapper
+++ [0115 02:39:55] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
+++ exit code: 0
NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
bindings                                                                      true         Binding
componentstatuses                 cs                                          false        ComponentStatus
configmaps                        cm                                          true         ConfigMap
endpoints                         ep                                          true         Endpoints
... skipping 83 lines ...
rbac.sh:50: Successful get clusterrole/resourcename-reader {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :
rbac.sh:51: Successful get clusterrole/resourcename-reader {{range.rules}}{{range.resourceNames}}{{.}}:{{end}}{{end}}: foo:
clusterrole.rbac.authorization.k8s.io/url-reader created
rbac.sh:53: Successful get clusterrole/url-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:
rbac.sh:54: Successful get clusterrole/url-reader {{range.rules}}{{range.nonResourceURLs}}{{.}}:{{end}}{{end}}: /logs/*:/healthz/*:
clusterrole.rbac.authorization.k8s.io/aggregation-reader created
{"component":"entrypoint","file":"prow/entrypoint/run.go:168","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","time":"2020-01-15T02:40:05Z"}