Result: FAILURE
Tests: 1 failed / 23 succeeded
Started: 2020-01-11 19:55
Elapsed: 33m57s
Revision: v1.14.11-beta.1
Builder: gke-prow-default-pool-cf4891d4-s6x6
pod: 2db18fb3-34ac-11ea-9fef-d200904e1a96
resultstore: https://source.cloud.google.com/results/invocations/267a493d-f532-4c08-a7da-078126f14d7b/targets/test
infra-commit: b82ca85d5
job-version: v1.14.11-beta.1
master_os_image: cos-beta-73-11647-64-0
node_os_image: cos-beta-73-11647-64-0

Test Failures

diffResources  0.00s

Error: 1 leaked resources
+default-route-0db49b9544ede869  default  10.178.0.0/20  default                   1000

(from junit_runner.xml)
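The diffResources failure means a GCP resource survived cluster teardown: kubetest snapshots the project's resources before bringing the cluster up and again after tearing it down (via ./cluster/gce/list-resources.sh, visible in the log below), then diffs the two listings, and any line that appears only in the "after" snapshot is reported as leaked — here a leftover default route. A minimal sketch of that comparison, assuming the artifact file names shown in the log; the helper below is illustrative and not kubetest's actual Go implementation, which shells out to `diff -sw -U0 -F^\[.*\]$`:

```python
# Minimal sketch of the leaked-resource check. The artifact paths match the
# files referenced in the log below; the bracketed section-header convention
# ("[ ... ]") is an assumption inferred from the diff's -F^\[.*\]$ option.

def leaked_resources(before_path, after_path):
    """Return resource lines present after teardown but absent before it."""
    with open(before_path) as f:
        before = {line.strip() for line in f if line.strip()}
    with open(after_path) as f:
        after = [line.strip() for line in f if line.strip()]
    # Skip section headers and report only genuinely new entries,
    # e.g. the default-route line reported in this run.
    return [line for line in after
            if line not in before and not line.startswith("[")]

if __name__ == "__main__":
    leaks = leaked_resources("/workspace/_artifacts/gcp-resources-before.txt",
                             "/workspace/_artifacts/gcp-resources-after.txt")
    if leaks:
        print("Error: %d leaked resources" % len(leaks))
        for line in leaks:
            print("+" + line)
```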



23 Passed Tests

3579 Skipped Tests

Error lines from build-log.txt

... skipping 15 lines ...
I0111 19:55:12.949] process 47 exited with code 0 after 0.0m
I0111 19:55:12.950] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0111 19:55:12.950] Root: /workspace
I0111 19:55:12.950] cd to /workspace
I0111 19:55:12.950] Configure environment...
I0111 19:55:12.951] Call:  git show -s --format=format:%ct HEAD
W0111 19:55:12.956] fatal: not a git repository (or any of the parent directories): .git
I0111 19:55:12.956] process 60 exited with code 128 after 0.0m
W0111 19:55:12.957] Unable to print commit date for HEAD
I0111 19:55:12.957] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0111 19:55:13.549] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0111 19:55:13.903] process 61 exited with code 0 after 0.0m
I0111 19:55:13.904] Call:  gcloud config get-value account
... skipping 313 lines ...
W0111 19:59:30.997] Trying to find master named 'test-4b7fa88c7e-master'
W0111 19:59:30.997] Looking for address 'test-4b7fa88c7e-master-ip'
W0111 19:59:31.965] Using master: test-4b7fa88c7e-master (external IP: 35.230.126.111)
I0111 19:59:32.066] Waiting up to 300 seconds for cluster initialization.
I0111 19:59:32.066] 
I0111 19:59:32.066]   This will continually check to see if the API for kubernetes is reachable.
I0111 19:59:32.067]   This may time out if there was some uncaught error during start up.
I0111 19:59:32.067] 
I0111 20:00:39.061] ................Kubernetes cluster created.
I0111 20:00:39.240] Cluster "gce-gci-upg-1-3-1-4-ctl-skew_test-4b7fa88c7e" set.
I0111 20:00:39.429] User "gce-gci-upg-1-3-1-4-ctl-skew_test-4b7fa88c7e" set.
I0111 20:00:39.640] Context "gce-gci-upg-1-3-1-4-ctl-skew_test-4b7fa88c7e" created.
I0111 20:00:39.864] Switched to context "gce-gci-upg-1-3-1-4-ctl-skew_test-4b7fa88c7e".
... skipping 19 lines ...
I0111 20:01:15.166] NAME                                STATUS                     ROLES    AGE   VERSION
I0111 20:01:15.167] test-4b7fa88c7e-master              Ready,SchedulingDisabled   <none>   9s    v1.14.11-beta.1
I0111 20:01:15.167] test-4b7fa88c7e-minion-group-54sb   Ready                      <none>   3s    v1.14.11-beta.1
I0111 20:01:15.167] test-4b7fa88c7e-minion-group-s983   Ready                      <none>   3s    v1.14.11-beta.1
I0111 20:01:15.168] test-4b7fa88c7e-minion-group-z49l   Ready                      <none>   4s    v1.14.11-beta.1
I0111 20:01:15.568] Validate output:
I0111 20:01:15.933] NAME                 STATUS    MESSAGE             ERROR
I0111 20:01:15.933] scheduler            Healthy   ok                  
I0111 20:01:15.934] etcd-0               Healthy   {"health":"true"}   
I0111 20:01:15.934] etcd-1               Healthy   {"health":"true"}   
I0111 20:01:15.934] controller-manager   Healthy   ok                  
I0111 20:01:15.941] Cluster validation succeeded
W0111 20:01:16.041] Done, listing cluster services:
... skipping 102 lines ...
I0111 20:01:48.166] 
I0111 20:01:53.483] Jan 11 20:01:53.483: INFO: cluster-master-image: cos-beta-73-11647-64-0
I0111 20:01:53.484] Jan 11 20:01:53.483: INFO: cluster-node-image: cos-beta-73-11647-64-0
I0111 20:01:53.484] Jan 11 20:01:53.483: INFO: >>> kubeConfig: /workspace/.kube/config
I0111 20:01:53.487] Jan 11 20:01:53.487: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
I0111 20:01:53.664] Jan 11 20:01:53.664: INFO: Waiting up to 10m0s for all pods (need at least 8) in namespace 'kube-system' to be running and ready
I0111 20:01:53.828] Jan 11 20:01:53.828: INFO: The status of Pod fluentd-gcp-v3.2.0-5rxkq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0111 20:01:53.829] Jan 11 20:01:53.828: INFO: The status of Pod l7-lb-controller-v1.2.3-test-4b7fa88c7e-master is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0111 20:01:53.829] Jan 11 20:01:53.828: INFO: 26 / 28 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
I0111 20:01:53.830] Jan 11 20:01:53.828: INFO: expected 9 pod replicas in namespace 'kube-system', 9 are Running and Ready.
I0111 20:01:53.830] Jan 11 20:01:53.828: INFO: POD                                             NODE                               PHASE    GRACE  CONDITIONS
I0111 20:01:53.830] Jan 11 20:01:53.828: INFO: fluentd-gcp-v3.2.0-5rxkq                        test-4b7fa88c7e-minion-group-s983  Running  60s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:01:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:01:53 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:01:53 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:01:12 +0000 UTC  }]
I0111 20:01:53.831] Jan 11 20:01:53.828: INFO: l7-lb-controller-v1.2.3-test-4b7fa88c7e-master  test-4b7fa88c7e-master             Pending         []
I0111 20:01:53.831] Jan 11 20:01:53.828: INFO: 
I0111 20:01:55.946] Jan 11 20:01:55.946: INFO: The status of Pod fluentd-gcp-v3.2.0-dkv87 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0111 20:01:55.947] Jan 11 20:01:55.946: INFO: The status of Pod l7-lb-controller-v1.2.3-test-4b7fa88c7e-master is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0111 20:01:55.947] Jan 11 20:01:55.946: INFO: 26 / 28 pods in namespace 'kube-system' are running and ready (2 seconds elapsed)
I0111 20:01:55.947] Jan 11 20:01:55.946: INFO: expected 9 pod replicas in namespace 'kube-system', 9 are Running and Ready.
I0111 20:01:55.948] Jan 11 20:01:55.946: INFO: POD                                             NODE                               PHASE    GRACE  CONDITIONS
I0111 20:01:55.948] Jan 11 20:01:55.946: INFO: fluentd-gcp-v3.2.0-dkv87                        test-4b7fa88c7e-minion-group-s983  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:01:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:01:54 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:01:54 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:01:54 +0000 UTC  }]
I0111 20:01:55.949] Jan 11 20:01:55.946: INFO: l7-lb-controller-v1.2.3-test-4b7fa88c7e-master  test-4b7fa88c7e-master             Pending         []
I0111 20:01:55.949] Jan 11 20:01:55.946: INFO: 
I0111 20:01:57.947] Jan 11 20:01:57.946: INFO: The status of Pod l7-lb-controller-v1.2.3-test-4b7fa88c7e-master is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0111 20:01:57.947] Jan 11 20:01:57.946: INFO: 27 / 28 pods in namespace 'kube-system' are running and ready (4 seconds elapsed)
I0111 20:01:57.948] Jan 11 20:01:57.946: INFO: expected 9 pod replicas in namespace 'kube-system', 9 are Running and Ready.
I0111 20:01:57.948] Jan 11 20:01:57.946: INFO: POD                                             NODE                    PHASE    GRACE  CONDITIONS
I0111 20:01:57.948] Jan 11 20:01:57.946: INFO: l7-lb-controller-v1.2.3-test-4b7fa88c7e-master  test-4b7fa88c7e-master  Pending         []
I0111 20:01:57.948] Jan 11 20:01:57.946: INFO: 
I0111 20:01:59.948] Jan 11 20:01:59.948: INFO: The status of Pod fluentd-gcp-v3.2.0-gcx97 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0111 20:01:59.949] Jan 11 20:01:59.948: INFO: 27 / 28 pods in namespace 'kube-system' are running and ready (6 seconds elapsed)
I0111 20:01:59.949] Jan 11 20:01:59.948: INFO: expected 9 pod replicas in namespace 'kube-system', 9 are Running and Ready.
I0111 20:01:59.950] Jan 11 20:01:59.948: INFO: POD                       NODE                               PHASE    GRACE  CONDITIONS
I0111 20:01:59.950] Jan 11 20:01:59.948: INFO: fluentd-gcp-v3.2.0-gcx97  test-4b7fa88c7e-minion-group-54sb  Running  60s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:01:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:01:59 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:01:59 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:01:13 +0000 UTC  }]
I0111 20:01:59.950] Jan 11 20:01:59.948: INFO: 
I0111 20:02:01.948] Jan 11 20:02:01.947: INFO: The status of Pod fluentd-gcp-v3.2.0-gcx97 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0111 20:02:01.948] Jan 11 20:02:01.947: INFO: 27 / 28 pods in namespace 'kube-system' are running and ready (8 seconds elapsed)
I0111 20:02:01.948] Jan 11 20:02:01.947: INFO: expected 9 pod replicas in namespace 'kube-system', 9 are Running and Ready.
I0111 20:02:01.949] Jan 11 20:02:01.947: INFO: POD                       NODE                               PHASE    GRACE  CONDITIONS
I0111 20:02:01.950] Jan 11 20:02:01.947: INFO: fluentd-gcp-v3.2.0-gcx97  test-4b7fa88c7e-minion-group-54sb  Running  60s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:01:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:01:59 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:01:59 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:01:13 +0000 UTC  }]
I0111 20:02:01.950] Jan 11 20:02:01.947: INFO: 
I0111 20:02:03.943] Jan 11 20:02:03.942: INFO: The status of Pod fluentd-gcp-v3.2.0-c8f6s is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0111 20:02:03.943] Jan 11 20:02:03.942: INFO: 27 / 28 pods in namespace 'kube-system' are running and ready (10 seconds elapsed)
I0111 20:02:03.944] Jan 11 20:02:03.942: INFO: expected 9 pod replicas in namespace 'kube-system', 9 are Running and Ready.
I0111 20:02:03.944] Jan 11 20:02:03.942: INFO: POD                       NODE                               PHASE    GRACE  CONDITIONS
I0111 20:02:03.945] Jan 11 20:02:03.942: INFO: fluentd-gcp-v3.2.0-c8f6s  test-4b7fa88c7e-minion-group-54sb  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:02:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:02:02 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:02:02 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:02:02 +0000 UTC  }]
I0111 20:02:03.945] Jan 11 20:02:03.942: INFO: 
I0111 20:02:05.942] Jan 11 20:02:05.941: INFO: 28 / 28 pods in namespace 'kube-system' are running and ready (12 seconds elapsed)
... skipping 2032 lines ...
I0111 20:19:21.618] Jan 11 20:19:21.617: INFO: namespace reboot-5328 deletion completed in 7.426260839s
I0111 20:19:21.621] •SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSJan 11 20:19:21.619: INFO: Running AfterSuite actions on all nodes
I0111 20:19:21.621] Jan 11 20:19:21.620: INFO: Running AfterSuite actions on node 1
I0111 20:19:21.621] Jan 11 20:19:21.620: INFO: Skipping dumping logs from cluster
I0111 20:19:21.621] 
I0111 20:19:21.622] Ran 6 of 3585 Specs in 1053.455 seconds
I0111 20:19:21.639] SUCCESS! -- 6 Passed | 0 Failed | 0 Pending | 3579 Skipped PASS
I0111 20:19:21.659] 
I0111 20:19:21.659] Ginkgo ran 1 suite in 17m34.852209176s
I0111 20:19:21.659] Test Suite Passed
I0111 20:19:21.665] Checking for custom logdump instances, if any
I0111 20:19:21.672] Sourcing kube-util.sh
I0111 20:19:21.733] Detecting project
... skipping 12 lines ...
W0111 20:19:59.852] 
W0111 20:19:59.852] Specify --start=43054 in the next get-serial-port-output invocation to get only the new output starting from here.
W0111 20:20:03.160] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0111 20:20:03.238] scp: /var/log/fluentd.log*: No such file or directory
W0111 20:20:03.239] scp: /var/log/kubelet.cov*: No such file or directory
W0111 20:20:03.239] scp: /var/log/startupscript.log*: No such file or directory
W0111 20:20:03.242] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0111 20:20:03.342] Dumping logs from nodes locally to '/workspace/_artifacts'
I0111 20:20:03.342] Detecting nodes in the cluster
I0111 20:20:44.136] Changing logfiles to be world-readable for download
I0111 20:20:44.221] Changing logfiles to be world-readable for download
I0111 20:20:44.468] Changing logfiles to be world-readable for download
I0111 20:20:47.652] Copying 'kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from test-4b7fa88c7e-minion-group-z49l
... skipping 6 lines ...
W0111 20:20:49.526] 
W0111 20:20:49.526] Specify --start=191195 in the next get-serial-port-output invocation to get only the new output starting from here.
W0111 20:20:50.986] scp: /var/log/fluentd.log*: No such file or directory
W0111 20:20:50.986] scp: /var/log/node-problem-detector.log*: No such file or directory
W0111 20:20:50.986] scp: /var/log/kubelet.cov*: No such file or directory
W0111 20:20:50.987] scp: /var/log/startupscript.log*: No such file or directory
W0111 20:20:50.989] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0111 20:20:51.280] scp: /var/log/fluentd.log*: No such file or directory
W0111 20:20:51.281] scp: /var/log/node-problem-detector.log*: No such file or directory
W0111 20:20:51.281] scp: /var/log/kubelet.cov*: No such file or directory
W0111 20:20:51.281] scp: /var/log/startupscript.log*: No such file or directory
W0111 20:20:51.286] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0111 20:20:51.556] scp: /var/log/fluentd.log*: No such file or directory
W0111 20:20:51.556] scp: /var/log/node-problem-detector.log*: No such file or directory
W0111 20:20:51.557] scp: /var/log/kubelet.cov*: No such file or directory
W0111 20:20:51.557] scp: /var/log/startupscript.log*: No such file or directory
W0111 20:20:51.559] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0111 20:20:54.896] INSTANCE_GROUPS=test-4b7fa88c7e-minion-group
W0111 20:20:54.897] NODE_NAMES=test-4b7fa88c7e-minion-group-54sb test-4b7fa88c7e-minion-group-s983 test-4b7fa88c7e-minion-group-z49l
I0111 20:20:55.980] Failures for test-4b7fa88c7e-minion-group
W0111 20:20:56.924] 2020/01/11 20:20:56 process.go:155: Step './cluster/log-dump/log-dump.sh /workspace/_artifacts' finished in 1m35.262025235s
W0111 20:20:56.925] 2020/01/11 20:20:56 e2e.go:456: Listing resources...
W0111 20:20:56.925] 2020/01/11 20:20:56 process.go:153: Running: ./cluster/gce/list-resources.sh
... skipping 24 lines ...
I0111 20:21:20.554] Bringing down cluster
W0111 20:21:23.045] Deleting Managed Instance Group...
W0111 20:24:01.528] ..................................Deleted [https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-1-4-ctl-skew/zones/us-west1-b/instanceGroupManagers/test-4b7fa88c7e-minion-group].
W0111 20:24:01.528] done.
W0111 20:24:05.451] Deleted [https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-1-4-ctl-skew/global/instanceTemplates/test-4b7fa88c7e-minion-template].
W0111 20:24:12.474] Deleted [https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-1-4-ctl-skew/global/instanceTemplates/test-4b7fa88c7e-windows-node-template].
I0111 20:24:17.204] {"message":"Internal Server Error"}Removing etcd replica, name: test-4b7fa88c7e-master, port: 2379, result: 0
I0111 20:24:18.896] {"message":"Internal Server Error"}Removing etcd replica, name: test-4b7fa88c7e-master, port: 4002, result: 0
W0111 20:24:24.004] Updated [https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-1-4-ctl-skew/zones/us-west1-b/instances/test-4b7fa88c7e-master].
W0111 20:26:46.451] Deleted [https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-1-4-ctl-skew/zones/us-west1-b/instances/test-4b7fa88c7e-master].
W0111 20:27:02.647] Deleted [https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-1-4-ctl-skew/global/firewalls/test-4b7fa88c7e-master-https].
W0111 20:27:03.311] Deleted [https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-1-4-ctl-skew/global/firewalls/test-4b7fa88c7e-master-etcd].
W0111 20:27:04.299] Deleted [https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-1-4-ctl-skew/global/firewalls/test-4b7fa88c7e-minion-all].
W0111 20:27:10.072] Deleted [https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-3-1-4-ctl-skew/regions/us-west1/addresses/test-4b7fa88c7e-master-ip].
... skipping 28 lines ...
W0111 20:28:57.475] Listed 0 items.
W0111 20:28:58.003] Listed 0 items.
W0111 20:28:58.056] 2020/01/11 20:28:58 process.go:155: Step './cluster/gce/list-resources.sh' finished in 9.51999353s
W0111 20:28:58.057] 2020/01/11 20:28:58 process.go:153: Running: diff -sw -U0 -F^\[.*\]$ /workspace/_artifacts/gcp-resources-before.txt /workspace/_artifacts/gcp-resources-after.txt
W0111 20:28:58.058] 2020/01/11 20:28:58 process.go:155: Step 'diff -sw -U0 -F^\[.*\]$ /workspace/_artifacts/gcp-resources-before.txt /workspace/_artifacts/gcp-resources-after.txt' finished in 1.664084ms
W0111 20:28:58.059] 2020/01/11 20:28:58 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0111 20:28:59.455] 2020/01/11 20:28:59 main.go:316: Something went wrong: encountered 1 errors: [Error: 1 leaked resources
W0111 20:28:59.455] +default-route-0db49b9544ede869  default  10.178.0.0/20  default                   1000]
W0111 20:28:59.459] Traceback (most recent call last):
W0111 20:28:59.460]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 778, in <module>
W0111 20:28:59.460]     main(parse_args())
W0111 20:28:59.460]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 626, in main
W0111 20:28:59.460]     mode.start(runner_args)
W0111 20:28:59.460]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0111 20:28:59.460]     check_env(env, self.command, *args)
W0111 20:28:59.461]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0111 20:28:59.461]     subprocess.check_call(cmd, env=env)
W0111 20:28:59.461]   File "/usr/lib/python2.7/subprocess.py", line 190, in check_call
W0111 20:28:59.461]     raise CalledProcessError(retcode, cmd)
W0111 20:28:59.462] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--test', '--provider=gce', '--cluster=test-4b7fa88c7e', '--gcp-network=test-4b7fa88c7e', '--check-leaked-resources', '--gcp-zone=us-west1-b', '--gcp-node-image=gci', '--extract=ci/k8s-stable3', '--timeout=180m', '--test_args=--ginkgo.focus=\\[Feature:Reboot\\] --minStartupPods=8')' returned non-zero exit status 1
E0111 20:28:59.471] Command failed
I0111 20:28:59.471] process 268 exited with code 1 after 33.7m
E0111 20:28:59.471] FAIL: ci-kubernetes-e2e-gce-cos-k8sstable3-reboot
I0111 20:28:59.472] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0111 20:28:59.966] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0111 20:29:00.019] process 11374 exited with code 0 after 0.0m
I0111 20:29:00.019] Call:  gcloud config get-value account
I0111 20:29:00.342] process 11387 exited with code 0 after 0.0m
I0111 20:29:00.342] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0111 20:29:00.342] Upload result and artifacts...
I0111 20:29:00.342] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-cos-k8sstable3-reboot/1216086104600481803
I0111 20:29:00.343] Call:  gsutil ls gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-cos-k8sstable3-reboot/1216086104600481803/artifacts
W0111 20:29:01.185] CommandException: One or more URLs matched no objects.
E0111 20:29:01.303] Command failed
I0111 20:29:01.303] process 11400 exited with code 1 after 0.0m
W0111 20:29:01.303] Remote dir gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-cos-k8sstable3-reboot/1216086104600481803/artifacts not exist yet
I0111 20:29:01.304] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-cos-k8sstable3-reboot/1216086104600481803/artifacts
I0111 20:29:04.602] process 11545 exited with code 0 after 0.1m
I0111 20:29:04.603] Call:  git rev-parse HEAD
W0111 20:29:04.607] fatal: not a git repository (or any of the parent directories): .git
E0111 20:29:04.608] Command failed
I0111 20:29:04.608] process 12202 exited with code 128 after 0.0m
I0111 20:29:04.608] Call:  git rev-parse HEAD
I0111 20:29:04.612] process 12203 exited with code 0 after 0.0m
I0111 20:29:04.613] Call:  gsutil stat gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-cos-k8sstable3-reboot/jobResultsCache.json
I0111 20:29:05.645] process 12204 exited with code 0 after 0.0m
I0111 20:29:05.646] Call:  gsutil -q cat 'gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-cos-k8sstable3-reboot/jobResultsCache.json#1578688279156604'
... skipping 8 lines ...