PR: zhucan: e2e: add e2e test to node expand volume with secret
Result: FAILURE
Tests: 1 failed / 16 succeeded
Started: 2023-02-07 05:38
Elapsed: 38m28s
Revision
Builder: b07b3df9-a6a9-11ed-adff-0ab91a0dec8c
Refs: master:e944fc28, 115451:3bb41f1b
infra-commit: 574fd7d7c
job-version: v1.27.0-alpha.1.276+3290cc18518846
kubetest-version: v20230127-9396ca613c
repo: k8s.io/kubernetes
repo-commit: 3290cc1851884638e2900c2252e7c85606869289
repos: {u'k8s.io/kubernetes': u'master:e944fc28ca33eae09e3466e23f809721534a020f,115451:3bb41f1b4c997f41ea1891682ab7ce01f9ad2811', u'k8s.io/release': u'master'}
revision: v1.27.0-alpha.1.276+3290cc18518846

Test Failures


kubetest Test (2.19s)

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=CSI.*(\[Serial\]|\[Disruptive\]) --ginkgo.skip=\[Flaky\]|\[Feature:.+\]|\[Slow\] --minStartupPods=8 --report-dir=/workspace/_artifacts --disable-log-dump=true: exit status 1
				from junit_runner.xml
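
The failed step is the ginkgo-e2e.sh invocation itself; it exited after only 2.19s, and the Ginkgo report files it was expected to write are reported missing further down in the log. As a rough local reproduction sketch (assuming a kubernetes/kubernetes checkout at the tested revision and a GCE test cluster already brought up with kube-up.sh, which is not shown here), the same flags from the error line can be passed directly:

    ./hack/ginkgo-e2e.sh \
      '--ginkgo.focus=CSI.*(\[Serial\]|\[Disruptive\])' \
      '--ginkgo.skip=\[Flaky\]|\[Feature:.+\]|\[Slow\]' \
      --minStartupPods=8 \
      --report-dir=/workspace/_artifacts \
      --disable-log-dump=true

The single-quoting of the focus/skip expressions is illustrative; in CI these flags are passed through kubetest's --test_args rather than typed by hand, and --report-dir can point at any writable directory locally.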




Error lines from build-log.txt

... skipping 716 lines ...
W0207 06:05:01.781] Looking for address 'e2e-1c128f1ace-930d0-master-ip'
W0207 06:05:01.781] Using master: e2e-1c128f1ace-930d0-master (external IP: 35.247.92.223; internal IP: (not set))
I0207 06:05:01.882] Group is stable
I0207 06:05:01.882] Waiting up to 300 seconds for cluster initialization.
I0207 06:05:06.814] 
I0207 06:05:06.815]   This will continually check to see if the API for kubernetes is reachable.
I0207 06:05:06.815]   This may time out if there was some uncaught error during start up.
I0207 06:05:06.815] 
I0207 06:05:45.748] ..........Kubernetes cluster created.
I0207 06:05:45.913] Cluster "e2e-gce-gci-ci-serial_e2e-1c128f1ace-930d0" set.
I0207 06:05:46.063] User "e2e-gce-gci-ci-serial_e2e-1c128f1ace-930d0" set.
I0207 06:05:46.203] Context "e2e-gce-gci-ci-serial_e2e-1c128f1ace-930d0" created.
I0207 06:05:46.355] Switched to context "e2e-gce-gci-ci-serial_e2e-1c128f1ace-930d0".
... skipping 23 lines ...
I0207 06:06:26.652] e2e-1c128f1ace-930d0-minion-group-1m2n   Ready                      <none>   12s   v1.27.0-alpha.1.276+3290cc18518846
I0207 06:06:26.652] e2e-1c128f1ace-930d0-minion-group-9l00   Ready                      <none>   13s   v1.27.0-alpha.1.276+3290cc18518846
I0207 06:06:26.652] e2e-1c128f1ace-930d0-minion-group-9vrl   Ready                      <none>   13s   v1.27.0-alpha.1.276+3290cc18518846
I0207 06:06:26.653] Validate output:
W0207 06:06:26.853] Warning: v1 ComponentStatus is deprecated in v1.19+
W0207 06:06:26.859] Done, listing cluster services:
I0207 06:06:26.959] NAME                 STATUS    MESSAGE                         ERROR
I0207 06:06:27.125] etcd-1               Healthy   {"health":"true","reason":""}   
I0207 06:06:27.125] etcd-0               Healthy   {"health":"true","reason":""}   
I0207 06:06:27.125] controller-manager   Healthy   ok                              
I0207 06:06:27.125] scheduler            Healthy   ok                              
I0207 06:06:27.125] Cluster validation succeeded
I0207 06:06:27.125] Kubernetes control plane is running at https://35.247.92.223
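
(The NAME/STATUS/MESSAGE table above is the component-status listing performed during cluster validation, and the "v1 ComponentStatus is deprecated" warning is emitted by that same call. Against the context created earlier it corresponds roughly to running:

    kubectl get componentstatuses

This is shown only to indicate where the deprecation warning originates; the CI job drives it through the cluster validation script, not by hand.)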
... skipping 97 lines ...
I0207 06:06:48.903] open /workspace/_artifacts/platforms_linux_amd64_ginkgo_report.json: no such file or directory
I0207 06:06:48.903] Could not open /workspace/_artifacts/platforms_linux_amd64_ginkgo_report.xml:
I0207 06:06:48.903] open /workspace/_artifacts/platforms_linux_amd64_ginkgo_report.xml: no such file or directory
I0207 06:06:48.904] 
I0207 06:06:48.904] Ginkgo ran 1 suite in 348.289719ms
I0207 06:06:48.904] 
I0207 06:06:48.904] Test Suite Failed
I0207 06:06:48.904] Checking for custom logdump instances, if any
I0207 06:06:48.907] ----------------------------------------------------------------------------------------------------
I0207 06:06:48.908] k/k version of the log-dump.sh script is deprecated!
I0207 06:06:48.908] Please migrate your test job to use test-infra's repo version of log-dump.sh!
I0207 06:06:48.908] Migration steps can be found in the readme file.
I0207 06:06:48.908] ----------------------------------------------------------------------------------------------------
... skipping 20 lines ...
W0207 06:08:00.615] Specify --start=71090 in the next get-serial-port-output invocation to get only the new output starting from here.
W0207 06:08:00.616] scp: /var/log/cloud-controller-manager.log*: No such file or directory
W0207 06:08:00.944] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0207 06:08:01.127] scp: /var/log/fluentd.log*: No such file or directory
W0207 06:08:01.135] scp: /var/log/kubelet.cov*: No such file or directory
W0207 06:08:01.135] scp: /var/log/startupscript.log*: No such file or directory
W0207 06:08:01.135] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0207 06:08:01.472] Dumping logs from nodes locally to '/workspace/_artifacts'
I0207 06:09:10.564] Detecting nodes in the cluster
I0207 06:09:10.564] Changing logfiles to be world-readable for download
I0207 06:09:10.721] Changing logfiles to be world-readable for download
I0207 06:09:10.884] Changing logfiles to be world-readable for download
I0207 06:09:19.277] Copying 'kube-proxy.log containers/konnectivity-agent-*.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from e2e-1c128f1ace-930d0-minion-group-9l00
... skipping 6 lines ...
W0207 06:09:22.334] 
W0207 06:09:25.875] Specify --start=116523 in the next get-serial-port-output invocation to get only the new output starting from here.
W0207 06:09:25.876] scp: /var/log/fluentd.log*: No such file or directory
W0207 06:09:25.876] scp: /var/log/node-problem-detector.log*: No such file or directory
W0207 06:09:25.876] scp: /var/log/kubelet.cov*: No such file or directory
W0207 06:09:25.876] scp: /var/log/startupscript.log*: No such file or directory
W0207 06:09:25.885] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0207 06:09:26.097] scp: /var/log/fluentd.log*: No such file or directory
W0207 06:09:26.098] scp: /var/log/node-problem-detector.log*: No such file or directory
W0207 06:09:26.106] scp: /var/log/kubelet.cov*: No such file or directory
W0207 06:09:26.106] scp: /var/log/startupscript.log*: No such file or directory
W0207 06:09:26.106] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0207 06:09:26.404] scp: /var/log/fluentd.log*: No such file or directory
W0207 06:09:26.405] scp: /var/log/node-problem-detector.log*: No such file or directory
W0207 06:09:26.405] scp: /var/log/kubelet.cov*: No such file or directory
W0207 06:09:26.406] scp: /var/log/startupscript.log*: No such file or directory
W0207 06:09:26.417] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0207 06:09:37.326] INSTANCE_GROUPS=e2e-1c128f1ace-930d0-minion-group
I0207 06:09:40.517] Failures for e2e-1c128f1ace-930d0-minion-group (if any):
W0207 06:09:43.632] NODE_NAMES=e2e-1c128f1ace-930d0-minion-group-1m2n e2e-1c128f1ace-930d0-minion-group-9l00 e2e-1c128f1ace-930d0-minion-group-9vrl
W0207 06:09:43.632] 2023/02/07 06:09:43 process.go:155: Step './cluster/log-dump/log-dump.sh /workspace/_artifacts' finished in 2m54.730993382s
W0207 06:09:43.718] 2023/02/07 06:09:43 process.go:153: Running: ./hack/e2e-internal/e2e-down.sh
W0207 06:09:43.718] Project: e2e-gce-gci-ci-serial
... skipping 47 lines ...
I0207 06:16:59.444] Property "users.e2e-gce-gci-ci-serial_e2e-1c128f1ace-930d0-basic-auth" unset.
I0207 06:16:59.586] Property "contexts.e2e-gce-gci-ci-serial_e2e-1c128f1ace-930d0" unset.
I0207 06:16:59.590] Cleared config for e2e-gce-gci-ci-serial_e2e-1c128f1ace-930d0 from /workspace/.kube/config
I0207 06:16:59.590] Done
W0207 06:16:59.645] 2023/02/07 06:16:59 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 7m15.959747338s
W0207 06:16:59.645] 2023/02/07 06:16:59 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0207 06:16:59.645] 2023/02/07 06:16:59 main.go:328: Something went wrong: encountered 1 errors: [error during ./hack/ginkgo-e2e.sh --ginkgo.focus=CSI.*(\[Serial\]|\[Disruptive\]) --ginkgo.skip=\[Flaky\]|\[Feature:.+\]|\[Slow\] --minStartupPods=8 --report-dir=/workspace/_artifacts --disable-log-dump=true: exit status 1]
W0207 06:16:59.645] Traceback (most recent call last):
W0207 06:16:59.645]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 723, in <module>
W0207 06:16:59.645]     main(parse_args())
W0207 06:16:59.645]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 569, in main
W0207 06:16:59.646]     mode.start(runner_args)
W0207 06:16:59.646]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 228, in start
W0207 06:16:59.646]     check_env(env, self.command, *args)
W0207 06:16:59.646]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0207 06:16:59.646]     subprocess.check_call(cmd, env=env)
W0207 06:16:59.646]   File "/usr/lib/python3.9/subprocess.py", line 373, in check_call
W0207 06:16:59.646]     raise CalledProcessError(retcode, cmd)
W0207 06:16:59.646] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--build=quick', '--stage=gs://kubernetes-release-pull/ci/pull-kubernetes-e2e-gce-csi-serial', '--up', '--down', '--test', '--provider=gce', '--cluster=e2e-1c128f1ace-930d0', '--gcp-network=e2e-1c128f1ace-930d0', '--extract=local', '--gcp-node-image=gci', '--gcp-zone=us-west1-b', '--test_args=--ginkgo.focus=CSI.*(\\[Serial\\]|\\[Disruptive\\]) --ginkgo.skip=\\[Flaky\\]|\\[Feature:.+\\]|\\[Slow\\] --minStartupPods=8', '--timeout=150m')' returned non-zero exit status 1.
E0207 06:16:59.646] Command failed
I0207 06:16:59.646] process 663 exited with code 1 after 36.6m
E0207 06:16:59.646] FAIL: pull-kubernetes-e2e-gce-csi-serial
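
Unwound from the Python tuple in the traceback above, the kubetest invocation that failed is approximately the following (reconstructed for readability; shell quoting is approximate and the service-account and GCS paths are specific to the CI environment):

    kubetest --dump=/workspace/_artifacts \
      --gcp-service-account=/etc/service-account/service-account.json \
      --build=quick \
      --stage=gs://kubernetes-release-pull/ci/pull-kubernetes-e2e-gce-csi-serial \
      --up --down --test \
      --provider=gce \
      --cluster=e2e-1c128f1ace-930d0 \
      --gcp-network=e2e-1c128f1ace-930d0 \
      --extract=local \
      --gcp-node-image=gci \
      --gcp-zone=us-west1-b \
      '--test_args=--ginkgo.focus=CSI.*(\[Serial\]|\[Disruptive\]) --ginkgo.skip=\[Flaky\]|\[Feature:.+\]|\[Slow\] --minStartupPods=8' \
      --timeout=150m

It returned exit status 1 because the wrapped ginkgo-e2e.sh run failed, and kubetest propagates that as the overall job failure.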
I0207 06:16:59.648] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0207 06:17:00.579] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0207 06:17:00.804] process 87896 exited with code 0 after 0.0m
I0207 06:17:00.805] Call:  gcloud config get-value account
I0207 06:17:01.837] process 87906 exited with code 0 after 0.0m
I0207 06:17:01.838] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0207 06:17:01.838] Upload result and artifacts...
I0207 06:17:01.838] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/115451/pull-kubernetes-e2e-gce-csi-serial/1622832217573036032
I0207 06:17:01.838] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/115451/pull-kubernetes-e2e-gce-csi-serial/1622832217573036032/artifacts
W0207 06:17:03.439] CommandException: One or more URLs matched no objects.
E0207 06:17:03.787] Command failed
I0207 06:17:03.787] process 87916 exited with code 1 after 0.0m
W0207 06:17:03.787] Remote dir gs://kubernetes-jenkins/pr-logs/pull/115451/pull-kubernetes-e2e-gce-csi-serial/1622832217573036032/artifacts not exist yet
I0207 06:17:03.788] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/115451/pull-kubernetes-e2e-gce-csi-serial/1622832217573036032/artifacts
I0207 06:17:07.267] process 88050 exited with code 0 after 0.1m
I0207 06:17:07.268] Call:  git rev-parse HEAD
I0207 06:17:07.310] process 88691 exited with code 0 after 0.0m
... skipping 21 lines ...