PR: sanchezl: kubernetes-sigs --> sigs.k8s.io
Result: FAILURE
Tests: 1 failed / 15 succeeded
Started: 2019-11-06 23:15
Elapsed: 16m26s
Revision: 2c1ffeaf2f1d3e719f47da33e6397763e1c238f6
Refs: 46
job-version: v1.18.0-alpha.0.351+9d708b02031b5c
revision: v1.18.0-alpha.0.351+9d708b02031b5c

Test Failures

Failed after 2.79s:
error during ../test/e2e/test-fully-automated.sh --skip=\[Disruptive\]: exit status 2
				from junit_runner.xml
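
Note on the root cause: the PR renames the module's imports from github.com/kubernetes-sigs/kube-storage-version-migrator to the sigs.k8s.io vanity path, but the CI checkout still lives at /home/prow/go/src/github.com/kubernetes-sigs/kube-storage-version-migrator, so the GOPATH-mode go build shown further down in the log cannot resolve the new sigs.k8s.io/... imports. A minimal workaround sketch, assuming the job keeps building in GOPATH mode; the commands below are illustrative, not taken from this job's configuration:

  # Hypothetical workaround: expose the existing checkout under its new vanity
  # import path so GOPATH-mode builds can resolve
  # sigs.k8s.io/kube-storage-version-migrator/... imports.
  mkdir -p "${GOPATH}/src/sigs.k8s.io"
  ln -s "${GOPATH}/src/github.com/kubernetes-sigs/kube-storage-version-migrator" \
        "${GOPATH}/src/sigs.k8s.io/kube-storage-version-migrator"

The longer-term fix would be to have Prow check the repository out at the new location, e.g. via a path_alias of sigs.k8s.io/kube-storage-version-migrator on the job's refs; that suggestion is based on general Prow behavior, not on this job's config.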

Error lines from build-log.txt

... skipping 336 lines ...
Trying to find master named 'bootstrap-e2e-master'
Looking for address 'bootstrap-e2e-master-ip'
Using master: bootstrap-e2e-master (external IP: 35.238.54.111; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

...............Kubernetes cluster created.
Cluster "kubernetes-gci-ingress-1-3_bootstrap-e2e" set.
User "kubernetes-gci-ingress-1-3_bootstrap-e2e" set.
Context "kubernetes-gci-ingress-1-3_bootstrap-e2e" created.
Switched to context "kubernetes-gci-ingress-1-3_bootstrap-e2e".
... skipping 112 lines ...
find ./manifests.local -type f -exec sed -i -e "s|NAMESPACE|kube-system|g" {} \;
CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o cmd/initializer/initializer ./cmd/initializer
cmd/initializer/initializer.go:10:2: cannot find package "sigs.k8s.io/kube-storage-version-migrator/cmd/initializer/app" in any of:
	/home/prow/go/src/github.com/kubernetes-sigs/kube-storage-version-migrator/vendor/sigs.k8s.io/kube-storage-version-migrator/cmd/initializer/app (vendor tree)
	/usr/local/go/src/sigs.k8s.io/kube-storage-version-migrator/cmd/initializer/app (from $GOROOT)
	/home/prow/go/src/sigs.k8s.io/kube-storage-version-migrator/cmd/initializer/app (from $GOPATH)
make: *** [Makefile:34: all-containers] Error 1
!!! Error in ../test/e2e/test-fully-automated.sh:55
  Error in ../test/e2e/test-fully-automated.sh:55. 'make push-all' exited with status 2
Call stack:
  1: ../test/e2e/test-fully-automated.sh:55 main(...)
Exiting with status 1
Deleting images
eval ""gcloud container images delete" gcr.io/kubernetes-gci-ingress-1-3/storage-version-migration-initializer:v7175b48"
ERROR: (gcloud.container.images.delete) [gcr.io/kubernetes-gci-ingress-1-3/storage-version-migration-initializer:v7175b48] is not a valid name. Expected tag in the form "base:tag" or "tag" or digest in the form "sha256:<digest>"
make: *** [Makefile:64: delete-all-images] Error 1
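
Because the image build failed before anything was pushed, the cleanup step above fails as well: gcloud rejects the delete request and make delete-all-images exits non-zero, adding a second error on top of the real one. A hardening sketch that skips deletion when the tag was never pushed, assuming a GCR-hosted image; this is illustrative and not taken from this repo's Makefile:

  # Hypothetical cleanup: only attempt deletion if the tag actually exists in the registry.
  IMAGE="gcr.io/kubernetes-gci-ingress-1-3/storage-version-migration-initializer"
  TAG="v7175b48"
  if gcloud container images describe "${IMAGE}:${TAG}" >/dev/null 2>&1; then
    gcloud container images delete "${IMAGE}:${TAG}" --quiet
  fi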
2019/11/06 23:21:37 process.go:155: Step '../test/e2e/test-fully-automated.sh --skip=\[Disruptive\]' finished in 2.793238372s
2019/11/06 23:21:37 e2e.go:534: Dumping logs locally to: /logs/artifacts
2019/11/06 23:21:37 process.go:153: Running: ./cluster/log-dump/log-dump.sh /logs/artifacts
Checking for custom logdump instances, if any
Sourcing kube-util.sh
Detecting project
... skipping 9 lines ...

Specify --start=45970 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes locally to '/logs/artifacts'
Detecting nodes in the cluster
Changing logfiles to be world-readable for download
Copying 'kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from bootstrap-e2e-minion-group-93jr

Specify --start=49725 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
INSTANCE_GROUPS=bootstrap-e2e-minion-group
NODE_NAMES=bootstrap-e2e-minion-group-93jr
Failures for bootstrap-e2e-minion-group (if any):
2019/11/06 23:22:52 process.go:155: Step './cluster/log-dump/log-dump.sh /logs/artifacts' finished in 1m14.197283101s
2019/11/06 23:22:52 e2e.go:456: Listing resources...
2019/11/06 23:22:52 process.go:153: Running: ./cluster/gce/list-resources.sh
... skipping 68 lines ...
Listed 0 items.
Listed 0 items.
2019/11/06 23:31:49 process.go:155: Step './cluster/gce/list-resources.sh' finished in 10.566483913s
2019/11/06 23:31:49 process.go:153: Running: diff -sw -U0 -F^\[.*\]$ /logs/artifacts/gcp-resources-before.txt /logs/artifacts/gcp-resources-after.txt
2019/11/06 23:31:49 process.go:155: Step 'diff -sw -U0 -F^\[.*\]$ /logs/artifacts/gcp-resources-before.txt /logs/artifacts/gcp-resources-after.txt' finished in 1.573447ms
2019/11/06 23:31:49 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2019/11/06 23:31:58 main.go:319: Something went wrong: encountered 1 errors: [error during ../test/e2e/test-fully-automated.sh --skip=\[Disruptive\]: exit status 2]
Traceback (most recent call last):
  File "/workspace/scenarios/kubernetes_e2e.py", line 778, in <module>
    main(parse_args())
  File "/workspace/scenarios/kubernetes_e2e.py", line 626, in main
    mode.start(runner_args)
  File "/workspace/scenarios/kubernetes_e2e.py", line 262, in start
... skipping 24 lines ...