PR: andyzhangx: upgrade api-version to fix azure file AuthorizationFailure
Result: FAILURE
Tests: 1 failed / 14 succeeded
Started: 2019-11-21 01:03
Elapsed: 25m47s
Revision: 6b73c9f8c21ccc01ba41aad214ae5abec0d449bc
Refs: 85475
job-version: v1.18.0-alpha.0.1110+ae60862015cf61
master_os_image: cos-77-12371-89-0
node_os_image: cos-77-12371-89-0
revision: v1.18.0-alpha.0.1110+ae60862015cf61

Test Failures


kubectl version (1m10s)

error starting ./cluster/kubectl.sh --match-server-version=false version: exec: already started
				from junit_runner.xml



14 Passed Tests

4814 Skipped Tests

Error lines from build-log.txt

... skipping 139 lines ...
INFO: 4425 processes: 4341 remote cache hit, 27 processwrapper-sandbox, 57 remote.
INFO: Build completed successfully, 4490 total actions
INFO: Build completed successfully, 4490 total actions
make: Leaving directory '/home/prow/go/src/k8s.io/kubernetes'
2019/11/21 01:08:49 process.go:155: Step 'make -C /home/prow/go/src/k8s.io/kubernetes bazel-release' finished in 4m45.123919087s
2019/11/21 01:08:49 util.go:265: Flushing memory.
2019/11/21 01:09:08 util.go:275: flushMem error (page cache): exit status 1
2019/11/21 01:09:08 process.go:153: Running: /home/prow/go/src/k8s.io/release/push-build.sh --nomock --verbose --noupdatelatest --bucket=kubernetes-release-pull --ci --gcs-suffix=/pull-kubernetes-e2e-gce-storage-slow --allow-dup
push-build.sh: BEGIN main on 7a71883f-0bfa-11ea-bb11-0a04a03f6314 Thu Nov 21 01:09:09 UTC 2019

$TEST_TMPDIR defined: output root default is '/bazel-scratch/.cache/bazel' and max_idle_secs default is '15'.
Starting local Bazel server and connecting to it...
INFO: Invocation ID: 8ecc7a12-5a0d-4ec9-9246-3ab897bd42f9
... skipping 841 lines ...
Trying to find master named 'e2e-618dc4e781-1f3e5-master'
Looking for address 'e2e-618dc4e781-1f3e5-master-ip'
Using master: e2e-618dc4e781-1f3e5-master (external IP: 34.83.195.55; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

.............Kubernetes cluster created.
Cluster "k8s-jkns-gce-serial-1-6_e2e-618dc4e781-1f3e5" set.
User "k8s-jkns-gce-serial-1-6_e2e-618dc4e781-1f3e5" set.
Context "k8s-jkns-gce-serial-1-6_e2e-618dc4e781-1f3e5" created.
Switched to context "k8s-jkns-gce-serial-1-6_e2e-618dc4e781-1f3e5".
... skipping 19 lines ...
NAME                                     STATUS                     ROLES    AGE   VERSION
e2e-618dc4e781-1f3e5-master              Ready,SchedulingDisabled   <none>   18s   v1.18.0-alpha.0.1110+ae60862015cf61
e2e-618dc4e781-1f3e5-minion-group-cv29   Ready                      <none>   18s   v1.18.0-alpha.0.1110+ae60862015cf61
e2e-618dc4e781-1f3e5-minion-group-pt7c   Ready                      <none>   19s   v1.18.0-alpha.0.1110+ae60862015cf61
e2e-618dc4e781-1f3e5-minion-group-slp0   Ready                      <none>   18s   v1.18.0-alpha.0.1110+ae60862015cf61
Validate output:
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
Cluster validation succeeded
Done, listing cluster services:
... skipping 69 lines ...
Client Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.0-alpha.0.1110+ae60862015cf61", GitCommit:"ae60862015cf61523176879d28a57ae243ad3c91", GitTreeState:"clean", BuildDate:"2019-11-21T01:04:53Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.0-alpha.0.1110+ae60862015cf61", GitCommit:"ae60862015cf61523176879d28a57ae243ad3c91", GitTreeState:"clean", BuildDate:"2019-11-21T01:04:53Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
2019/11/21 01:18:53 process.go:155: Step './hack/e2e-internal/e2e-status.sh' finished in 370.241321ms
2019/11/21 01:18:53 process.go:153: Running: ./cluster/kubectl.sh --match-server-version=false version
Unable to connect to the server: dial tcp 34.83.195.55:443: i/o timeout
2019/11/21 01:19:24 process.go:155: Step './cluster/kubectl.sh --match-server-version=false version' finished in 30.1746078s
2019/11/21 01:19:24 e2e.go:334: Failed to reach api. Sleeping for 10 seconds before retrying... ([./cluster/kubectl.sh --match-server-version=false version])
2019/11/21 01:19:34 process.go:153: Running: ./cluster/kubectl.sh --match-server-version=false version
2019/11/21 01:19:34 process.go:155: Step './cluster/kubectl.sh --match-server-version=false version' finished in 11.383µs
2019/11/21 01:19:34 e2e.go:334: Failed to reach api. Sleeping for 10 seconds before retrying... ([./cluster/kubectl.sh --match-server-version=false version])
2019/11/21 01:19:44 process.go:153: Running: ./cluster/kubectl.sh --match-server-version=false version
2019/11/21 01:19:44 process.go:155: Step './cluster/kubectl.sh --match-server-version=false version' finished in 18.346µs
2019/11/21 01:19:44 e2e.go:334: Failed to reach api. Sleeping for 10 seconds before retrying... ([./cluster/kubectl.sh --match-server-version=false version])
2019/11/21 01:19:54 process.go:153: Running: ./cluster/kubectl.sh --match-server-version=false version
2019/11/21 01:19:54 process.go:155: Step './cluster/kubectl.sh --match-server-version=false version' finished in 6.779µs
2019/11/21 01:19:54 e2e.go:334: Failed to reach api. Sleeping for 10 seconds before retrying... ([./cluster/kubectl.sh --match-server-version=false version])
2019/11/21 01:20:04 process.go:153: Running: ./cluster/kubectl.sh --match-server-version=false version
2019/11/21 01:20:04 process.go:155: Step './cluster/kubectl.sh --match-server-version=false version' finished in 8.909µs
2019/11/21 01:20:04 process.go:153: Running: ./hack/ginkgo-e2e.sh --ginkgo.focus=\[sig-storage\].*\[Slow\] --ginkgo.skip=\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]| --minStartupPods=8 --report-dir=/logs/artifacts --disable-log-dump=true
Setting up for KUBERNETES_PROVIDER="gce".
Project: k8s-jkns-gce-serial-1-6
Network Project: k8s-jkns-gce-serial-1-6
... skipping 250 lines ...
Nov 21 01:20:20.912: INFO: Running AfterSuite actions on all nodes
Nov 21 01:20:21.018: INFO: Running AfterSuite actions on node 1
Nov 21 01:20:21.018: INFO: Skipping dumping logs from cluster


Ran 0 of 4814 Specs in 12.109 seconds
SUCCESS! -- 0 Passed | 0 Failed | 0 Pending | 4814 Skipped


Ginkgo ran 1 suite in 15.394543568s
Test Suite Passed
2019/11/21 01:20:21 process.go:155: Step './hack/ginkgo-e2e.sh --ginkgo.focus=\[sig-storage\].*\[Slow\] --ginkgo.skip=\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]| --minStartupPods=8 --report-dir=/logs/artifacts --disable-log-dump=true' finished in 16.901300443s
2019/11/21 01:20:21 e2e.go:534: Dumping logs locally to: /logs/artifacts
... skipping 5 lines ...
Network Project: k8s-jkns-gce-serial-1-6
Zone: us-west1-b
Dumping logs from master locally to '/logs/artifacts'
Trying to find master named 'e2e-618dc4e781-1f3e5-master'
Looking for address 'e2e-618dc4e781-1f3e5-master-ip'
Using master: e2e-618dc4e781-1f3e5-master (external IP: 34.83.195.55; internal IP: (not set))
ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
Changing logfiles to be world-readable for download
Copying 'kube-apiserver.log kube-apiserver-audit.log kube-scheduler.log kube-controller-manager.log etcd.log etcd-events.log glbc.log cluster-autoscaler.log kube-addon-manager.log fluentd.log kubelet.cov startupscript.log' from e2e-618dc4e781-1f3e5-master

Specify --start=47250 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes locally to '/logs/artifacts'
Detecting nodes in the cluster
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Copying 'kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from e2e-618dc4e781-1f3e5-minion-group-pt7c
... skipping 6 lines ...

Specify --start=48892 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
INSTANCE_GROUPS=e2e-618dc4e781-1f3e5-minion-group
NODE_NAMES=e2e-618dc4e781-1f3e5-minion-group-cv29 e2e-618dc4e781-1f3e5-minion-group-pt7c e2e-618dc4e781-1f3e5-minion-group-slp0
Failures for e2e-618dc4e781-1f3e5-minion-group (if any):
2019/11/21 01:22:23 process.go:155: Step './cluster/log-dump/log-dump.sh /logs/artifacts' finished in 2m2.662641655s
2019/11/21 01:22:23 process.go:153: Running: ./hack/e2e-internal/e2e-down.sh
Project: k8s-jkns-gce-serial-1-6
... skipping 42 lines ...
Property "users.k8s-jkns-gce-serial-1-6_e2e-618dc4e781-1f3e5-basic-auth" unset.
Property "contexts.k8s-jkns-gce-serial-1-6_e2e-618dc4e781-1f3e5" unset.
Cleared config for k8s-jkns-gce-serial-1-6_e2e-618dc4e781-1f3e5 from /workspace/.kube/config
Done
2019/11/21 01:29:43 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 7m19.896607445s
2019/11/21 01:29:43 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2019/11/21 01:29:45 main.go:319: Something went wrong: encountered 1 errors: [error starting ./cluster/kubectl.sh --match-server-version=false version: exec: already started]
Traceback (most recent call last):
  File "../test-infra/scenarios/kubernetes_e2e.py", line 778, in <module>
    main(parse_args())
  File "../test-infra/scenarios/kubernetes_e2e.py", line 626, in main
    mode.start(runner_args)
  File "../test-infra/scenarios/kubernetes_e2e.py", line 262, in start
... skipping 7 lines ...