Result: FAILURE
Tests: 0 failed / 24 succeeded
Started: 2019-05-16 18:18
Elapsed: 1h5m
Revision:
Builder: gke-prow-containerd-pool-99179761-8nmq
links: {u'resultstore': {u'url': u'https://source.cloud.google.com/results/invocations/bfb95613-5607-4a86-9e5c-1e06a6547c27/targets/test'}}
pod: d111a3d3-7806-11e9-80db-0a580a6c069d
resultstore: https://source.cloud.google.com/results/invocations/bfb95613-5607-4a86-9e5c-1e06a6547c27/targets/test
infra-commit: 8a91f16c7
job-version: v1.12.9-beta.0.41+8c3d5963f4d793
master_os_image:
node_os_image: ubuntu-gke-1804-d1809-0-v20190514
pod: d111a3d3-7806-11e9-80db-0a580a6c069d
revision: v1.12.9-beta.0.41+8c3d5963f4d793

No Test Failures!

24 Passed Tests
2006 Skipped Tests

Error lines from build-log.txt

... skipping 12 lines ...
I0516 18:18:25.565] process 44 exited with code 0 after 0.0m
I0516 18:18:25.565] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0516 18:18:25.566] Root: /workspace
I0516 18:18:25.566] cd to /workspace
I0516 18:18:25.566] Configure environment...
I0516 18:18:25.567] Call:  git show -s --format=format:%ct HEAD
W0516 18:18:25.575] fatal: Not a git repository (or any of the parent directories): .git
I0516 18:18:25.575] process 56 exited with code 128 after 0.0m
W0516 18:18:25.576] Unable to print commit date for HEAD
I0516 18:18:25.576] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0516 18:18:26.532] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0516 18:18:26.928] process 57 exited with code 0 after 0.0m
I0516 18:18:26.929] Call:  gcloud config get-value account
... skipping 472 lines ...
I0516 18:28:48.696] W0516 18:28:48.696567    1749 gce.go:467] No network name or URL specified.
I0516 18:28:51.413] May 16 18:28:51.413: INFO: cluster-master-image: 
I0516 18:28:51.414] May 16 18:28:51.413: INFO: cluster-node-image: ubuntu-gke-1804-d1809-0-v20190514
I0516 18:28:51.414] May 16 18:28:51.413: INFO: >>> kubeConfig: /tmp/gke-kubecfg213184170
I0516 18:28:51.417] May 16 18:28:51.416: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
I0516 18:28:51.623] May 16 18:28:51.623: INFO: Waiting up to 10m0s for all pods (need at least 8) in namespace 'kube-system' to be running and ready
I0516 18:28:52.083] May 16 18:28:52.083: INFO: The status of Pod fluentd-gcp-v3.2.0-kt27n is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0516 18:28:52.083] May 16 18:28:52.083: INFO: 16 / 17 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
I0516 18:28:52.083] May 16 18:28:52.083: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
I0516 18:28:52.084] May 16 18:28:52.083: INFO: POD                       NODE                                            PHASE    GRACE  CONDITIONS
I0516 18:28:52.084] May 16 18:28:52.083: INFO: fluentd-gcp-v3.2.0-kt27n  gke-test-0101dc0a7b-default-pool-1a5c38d4-mjh5  Running  60s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-05-16 18:28:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-05-16 18:28:40 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-05-16 18:28:40 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-05-16 18:28:02 +0000 UTC  }]
I0516 18:28:52.084] May 16 18:28:52.083: INFO: 
I0516 18:28:54.216] May 16 18:28:54.215: INFO: The status of Pod fluentd-gcp-v3.2.0-dwwmj is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0516 18:28:54.216] May 16 18:28:54.216: INFO: 16 / 17 pods in namespace 'kube-system' are running and ready (2 seconds elapsed)
I0516 18:28:54.216] May 16 18:28:54.216: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
I0516 18:28:54.217] May 16 18:28:54.216: INFO: POD                       NODE                                            PHASE    GRACE  CONDITIONS
I0516 18:28:54.217] May 16 18:28:54.216: INFO: fluentd-gcp-v3.2.0-dwwmj  gke-test-0101dc0a7b-default-pool-1a5c38d4-mjh5  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-05-16 18:28:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-05-16 18:28:52 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-05-16 18:28:52 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-05-16 18:28:52 +0000 UTC  }]
I0516 18:28:54.217] May 16 18:28:54.216: INFO: 
I0516 18:28:56.215] May 16 18:28:56.215: INFO: 17 / 17 pods in namespace 'kube-system' are running and ready (4 seconds elapsed)
... skipping 2247 lines ...
I0516 19:15:05.318] ------------------------------
I0516 19:15:05.321] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSMay 16 19:15:05.319: INFO: Running AfterSuite actions on all node
I0516 19:15:05.321] May 16 19:15:05.319: INFO: Running AfterSuite actions on node 1
I0516 19:15:05.321] May 16 19:15:05.319: INFO: Skipping dumping logs from cluster
I0516 19:15:05.321] 
I0516 19:15:05.321] Ran 6 of 2012 Specs in 2776.822 seconds
I0516 19:15:05.332] SUCCESS! -- 6 Passed | 0 Failed | 0 Pending | 2006 Skipped PASS
I0516 19:15:05.352] 
I0516 19:15:05.353] Ginkgo ran 1 suite in 46m17.55949365s
I0516 19:15:05.353] Test Suite Passed
I0516 19:15:05.425] Checking for custom logdump instances, if any
I0516 19:15:05.435] Using 'use_custom_instance_list' with gke, skipping check for LOG_DUMP_SSH_KEY and LOG_DUMP_SSH_USER
W0516 19:15:05.536] 2019/05/16 19:15:05 process.go:155: Step './hack/ginkgo-e2e.sh --ginkgo.focus=\[Feature:Reboot\] --minStartupPods=8 --num-nodes=3 --report-dir=/workspace/_artifacts --disable-log-dump=true' finished in 46m17.976501877s
... skipping 37 lines ...
W0516 19:15:54.959] #          See https://cloud.google.com/kubernetes-engine/docs/concepts/node-images#modifications
W0516 19:15:54.959] #          for more information.
W0516 19:15:54.959] ##############################################################################
W0516 19:15:55.475] scp: /var/log/fluentd.log*: No such file or directory
W0516 19:15:55.475] scp: /var/log/node-problem-detector.log*: No such file or directory
W0516 19:15:55.475] scp: /var/log/kubelet.cov*: No such file or directory
W0516 19:15:55.574] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0516 19:15:55.731] ##############################################################################
W0516 19:15:55.731] # WARNING: Any changes on the boot disk of the node must be made via
W0516 19:15:55.731] #          DaemonSet in order to preserve them across node (re)creations.
W0516 19:15:55.731] #          Node will be (re)created during manual-upgrade, auto-upgrade,
W0516 19:15:55.731] #          auto-repair or auto-scaling.
W0516 19:15:55.732] #          See https://cloud.google.com/kubernetes-engine/docs/concepts/node-images#modifications
W0516 19:15:55.732] #          for more information.
W0516 19:15:55.732] ##############################################################################
W0516 19:15:56.232] scp: /var/log/fluentd.log*: No such file or directory
W0516 19:15:56.233] scp: /var/log/node-problem-detector.log*: No such file or directory
W0516 19:15:56.233] scp: /var/log/kubelet.cov*: No such file or directory
W0516 19:15:56.238] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0516 19:15:56.397] ##############################################################################
W0516 19:15:56.397] # WARNING: Any changes on the boot disk of the node must be made via
W0516 19:15:56.398] #          DaemonSet in order to preserve them across node (re)creations.
W0516 19:15:56.398] #          Node will be (re)created during manual-upgrade, auto-upgrade,
W0516 19:15:56.398] #          auto-repair or auto-scaling.
W0516 19:15:56.398] #          See https://cloud.google.com/kubernetes-engine/docs/concepts/node-images#modifications
W0516 19:15:56.398] #          for more information.
W0516 19:15:56.398] ##############################################################################
W0516 19:15:56.899] scp: /var/log/fluentd.log*: No such file or directory
W0516 19:15:56.899] scp: /var/log/node-problem-detector.log*: No such file or directory
W0516 19:15:56.899] scp: /var/log/kubelet.cov*: No such file or directory
W0516 19:15:56.906] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0516 19:15:57.033] 2019/05/16 19:15:57 process.go:155: Step 'bash -c 
W0516 19:15:57.033] function log_dump_custom_get_instances() {
W0516 19:15:57.033]   if [[ $1 == "master" ]]; then
W0516 19:15:57.033]     return 0
W0516 19:15:57.033]   fi
W0516 19:15:57.033] 
... skipping 48 lines ...
W0516 19:21:07.720] Listed 0 items.
W0516 19:21:08.342] Listed 0 items.
W0516 19:21:08.409] 2019/05/16 19:21:08 process.go:155: Step './cluster/gce/list-resources.sh' finished in 11.099506474s
W0516 19:21:08.410] 2019/05/16 19:21:08 process.go:153: Running: diff -sw -U0 -F^\[.*\]$ /workspace/_artifacts/gcp-resources-before.txt /workspace/_artifacts/gcp-resources-after.txt
W0516 19:21:08.412] 2019/05/16 19:21:08 process.go:155: Step 'diff -sw -U0 -F^\[.*\]$ /workspace/_artifacts/gcp-resources-before.txt /workspace/_artifacts/gcp-resources-after.txt' finished in 1.840044ms
W0516 19:21:08.412] 2019/05/16 19:21:08 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0516 19:23:38.429] 2019/05/16 19:23:38 main.go:309: [Boskos] Fail To Release: 1 error occurred:
W0516 19:23:38.429] 
W0516 19:23:38.429] * Post http://boskos.test-pods.svc.cluster.local./release?name=k8s-jkns-gke-reboot-1-3&dest=dirty&owner=ci-kubernetes-e2e-gke-ubuntu2-k8sstable3-reboot: dial tcp 10.63.250.132:80: i/o timeout, kubetest err: <nil>
W0516 19:23:38.435] Traceback (most recent call last):
W0516 19:23:38.436]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 778, in <module>
W0516 19:23:38.464]     main(parse_args())
W0516 19:23:38.464]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 626, in main
... skipping 2 lines ...
W0516 19:23:38.465]     check_env(env, self.command, *args)
W0516 19:23:38.465]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0516 19:23:38.465]     subprocess.check_call(cmd, env=env)
W0516 19:23:38.465]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0516 19:23:38.465]     raise CalledProcessError(retcode, cmd)
W0516 19:23:38.466] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--test', '--deployment=gke', '--provider=gke', '--cluster=test-0101dc0a7b', '--gcp-network=test-0101dc0a7b', '--check-leaked-resources', '--gcp-zone=us-west1-b', '--gcp-cloud-sdk=gs://cloud-sdk-testing/ci/staging', '--gke-environment=test', '--image-family=pipeline-2', '--image-project=ubuntu-os-gke-cloud-devel', '--gcp-node-image=custom', '--extract=ci/k8s-stable3', '--timeout=180m', '--test_args=--ginkgo.focus=\\[Feature:Reboot\\] --minStartupPods=8')' returned non-zero exit status 1
E0516 19:23:38.482] Command failed
I0516 19:23:38.482] process 259 exited with code 1 after 65.1m
E0516 19:23:38.482] FAIL: ci-kubernetes-e2e-gke-ubuntu2-k8sstable3-reboot
I0516 19:23:38.483] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0516 19:23:39.197] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0516 19:23:39.301] process 3100 exited with code 0 after 0.0m
I0516 19:23:39.301] Call:  gcloud config get-value account
I0516 19:23:39.687] process 3112 exited with code 0 after 0.0m
I0516 19:23:39.687] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0516 19:23:39.688] Upload result and artifacts...
I0516 19:23:39.688] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-ubuntu2-k8sstable3-reboot/1129088504886726658
I0516 19:23:39.688] Call:  gsutil ls gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-ubuntu2-k8sstable3-reboot/1129088504886726658/artifacts
W0516 19:23:41.320] CommandException: One or more URLs matched no objects.
E0516 19:23:41.538] Command failed
I0516 19:23:41.539] process 3124 exited with code 1 after 0.0m
W0516 19:23:41.539] Remote dir gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-ubuntu2-k8sstable3-reboot/1129088504886726658/artifacts not exist yet
I0516 19:23:41.539] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-ubuntu2-k8sstable3-reboot/1129088504886726658/artifacts
I0516 19:23:44.355] process 3266 exited with code 0 after 0.0m
I0516 19:23:44.356] Call:  git rev-parse HEAD
W0516 19:23:44.363] fatal: Not a git repository (or any of the parent directories): .git
E0516 19:23:44.364] Command failed
I0516 19:23:44.365] process 3904 exited with code 128 after 0.0m
I0516 19:23:44.365] Call:  git rev-parse HEAD
I0516 19:23:44.421] process 3905 exited with code 0 after 0.0m
I0516 19:23:44.421] Call:  gsutil stat gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-ubuntu2-k8sstable3-reboot/jobResultsCache.json
I0516 19:23:45.847] process 3906 exited with code 0 after 0.0m
I0516 19:23:45.848] Call:  gsutil -q cat 'gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-ubuntu2-k8sstable3-reboot/jobResultsCache.json#1558012942307510'
... skipping 8 lines ...