Result: FAILURE
Tests: 1 failed / 13 succeeded
Started: 2023-01-12 18:10
Elapsed: 1h42m
Revision: release-1.5

Test Failures


capv-e2e Cluster creation with storage policy should create a cluster successfully (11m11s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capv\-e2e\sCluster\screation\swith\sstorage\spolicy\sshould\screate\sa\scluster\ssuccessfully$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/storage_policy_test.go:57
Timed out after 600.000s.
No Control Plane machines came into existence. 
Expected
    <bool>: false
to be true
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.2/framework/controlplane_helpers.go:153
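The failure is the framework's 600-second wait for control-plane Machine objects, ending at controlplane_helpers.go:153 in sigs.k8s.io/cluster-api/test@v1.2.2. Below is a minimal sketch of that kind of Gomega Eventually poll; the machineLister interface, function name, and intervals are illustrative assumptions, not the framework's actual API.

    package e2e

    import (
    	"context"
    	"time"

    	. "github.com/onsi/gomega"
    )

    // machineLister is an assumed stand-in for whatever client the framework
    // uses to count Machine objects owned by the cluster's control plane.
    type machineLister interface {
    	CountControlPlaneMachines(ctx context.Context, cluster string) (int, error)
    }

    // waitForControlPlaneMachinesToExist polls until at least one control-plane
    // Machine exists; after the timeout (10 minutes, matching the
    // "Timed out after 600.000s" above) it fails with the message seen in this log.
    func waitForControlPlaneMachinesToExist(ctx context.Context, c machineLister, cluster string) {
    	Eventually(func() (bool, error) {
    		n, err := c.CountControlPlaneMachines(ctx, cluster)
    		if err != nil {
    			return false, err
    		}
    		return n > 0, nil
    	}, 10*time.Minute, 10*time.Second).Should(BeTrue(),
    		"No Control Plane machines came into existence.")
    }

A false return with a nil error keeps Eventually polling; only the timeout turns it into the assertion failure reported above, which is why no earlier error appears in the log.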



Error lines from build-log.txt

... skipping 721 lines ...
INFO: Waiting for correct number of replicas to exist
STEP: Scaling the MachineDeployment down to 1
INFO: Scaling machine deployment md-scale-0b4rp4/md-scale-2iw2ib-md-0 from 3 to 1 replicas
INFO: Waiting for correct number of replicas to exist
STEP: PASSED!
STEP: Dumping logs from the "md-scale-2iw2ib" workload cluster
Failed to get logs for machine md-scale-2iw2ib-ktfjf, cluster md-scale-0b4rp4/md-scale-2iw2ib: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine md-scale-2iw2ib-md-0-54c9bfbb97-skt5s, cluster md-scale-0b4rp4/md-scale-2iw2ib: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "md-scale-0b4rp4" namespace
STEP: Deleting cluster md-scale-0b4rp4/md-scale-2iw2ib
STEP: Deleting cluster md-scale-2iw2ib
INFO: Waiting for the Cluster md-scale-0b4rp4/md-scale-2iw2ib to be deleted
STEP: Waiting for cluster md-scale-2iw2ib to be deleted
STEP: Deleting namespace used for hosting the "md-scale" test spec
... skipping 119 lines ...
INFO: Waiting for rolling upgrade to start.
INFO: Waiting for MachineDeployment rolling upgrade to start
INFO: Waiting for rolling upgrade to complete.
INFO: Waiting for MachineDeployment rolling upgrade to complete
STEP: PASSED!
STEP: Dumping logs from the "md-rollout-d17aqo" workload cluster
Failed to get logs for machine md-rollout-d17aqo-2mlp9, cluster md-rollout-82v9pw/md-rollout-d17aqo: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine md-rollout-d17aqo-md-0-6b887575f4-pm5gd, cluster md-rollout-82v9pw/md-rollout-d17aqo: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "md-rollout-82v9pw" namespace
STEP: Deleting cluster md-rollout-82v9pw/md-rollout-d17aqo
STEP: Deleting cluster md-rollout-d17aqo
INFO: Waiting for the Cluster md-rollout-82v9pw/md-rollout-d17aqo to be deleted
STEP: Waiting for cluster md-rollout-d17aqo to be deleted
STEP: Deleting namespace used for hosting the "md-rollout" test spec
... skipping 52 lines ...
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Checking all the machines controlled by quick-start-d25jq5-md-0 are in the "<None>" failure domain
INFO: Waiting for the machine pools to be provisioned
STEP: PASSED!
STEP: Dumping logs from the "quick-start-d25jq5" workload cluster
Failed to get logs for machine quick-start-d25jq5-dczhm, cluster quick-start-36bp9m/quick-start-d25jq5: dialing host IP address at 192.168.6.91: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Failed to get logs for machine quick-start-d25jq5-md-0-fcd46ddb9-q2dlt, cluster quick-start-36bp9m/quick-start-d25jq5: dialing host IP address at 192.168.6.92: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
STEP: Dumping all the Cluster API resources in the "quick-start-36bp9m" namespace
STEP: Deleting cluster quick-start-36bp9m/quick-start-d25jq5
STEP: Deleting cluster quick-start-d25jq5
INFO: Waiting for the Cluster quick-start-36bp9m/quick-start-d25jq5 to be deleted
STEP: Waiting for cluster quick-start-d25jq5 to be deleted
STEP: Deleting namespace used for hosting the "quick-start" test spec
... skipping 52 lines ...
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Checking all the machines controlled by quick-start-wtzmlj-md-0 are in the "<None>" failure domain
INFO: Waiting for the machine pools to be provisioned
STEP: PASSED!
STEP: Dumping logs from the "quick-start-wtzmlj" workload cluster
Failed to get logs for machine quick-start-wtzmlj-ds5lf, cluster quick-start-n4mrhn/quick-start-wtzmlj: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine quick-start-wtzmlj-md-0-5fcd45cb5d-kqfgv, cluster quick-start-n4mrhn/quick-start-wtzmlj: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "quick-start-n4mrhn" namespace
STEP: Deleting cluster quick-start-n4mrhn/quick-start-wtzmlj
STEP: Deleting cluster quick-start-wtzmlj
INFO: Waiting for the Cluster quick-start-n4mrhn/quick-start-wtzmlj to be deleted
STEP: Waiting for cluster quick-start-wtzmlj to be deleted
STEP: Deleting namespace used for hosting the "quick-start" test spec
... skipping 60 lines ...
INFO: Waiting for MachineDeployment rollout for MachineDeploymentTopology "md-0" (class "quick-start-worker") to complete.
STEP: Rebasing the Cluster to a ClusterClass with a modified label for MachineDeployments and wait for changes to be applied to the MachineDeployment objects
INFO: Waiting for MachineDeployment rollout to complete.
INFO: Waiting for MachineDeployment rollout for MachineDeploymentTopology "md-0" (class "quick-start-worker") to complete.
STEP: PASSED!
STEP: Dumping logs from the "clusterclass-changes-dbjap4" workload cluster
Failed to get logs for machine clusterclass-changes-dbjap4-bpqfv-2clqc, cluster clusterclass-changes-ni63mi/clusterclass-changes-dbjap4: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine clusterclass-changes-dbjap4-md-0-kgmcg-68bf8c7f7b-g5984, cluster clusterclass-changes-ni63mi/clusterclass-changes-dbjap4: dialing host IP address at : dial tcp :22: connect: connection refused
Failed to get logs for machine clusterclass-changes-dbjap4-md-0-kgmcg-7cc684cd5c-hkkzg, cluster clusterclass-changes-ni63mi/clusterclass-changes-dbjap4: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "clusterclass-changes-ni63mi" namespace
STEP: Deleting cluster clusterclass-changes-ni63mi/clusterclass-changes-dbjap4
STEP: Deleting cluster clusterclass-changes-dbjap4
INFO: Waiting for the Cluster clusterclass-changes-ni63mi/clusterclass-changes-dbjap4 to be deleted
STEP: Waiting for cluster clusterclass-changes-dbjap4 to be deleted
STEP: Deleting namespace used for hosting the "clusterclass-changes" test spec
... skipping 136 lines ...
STEP: Waiting for deployment node-drain-vqtjzo-unevictable-workload/unevictable-pod-1cb to be available
STEP: Scale down the controlplane of the workload cluster and make sure that nodes running workload can be deleted even the draining process is blocked.
INFO: Scaling controlplane node-drain-vqtjzo/node-drain-fc9mnh from 3 to 1 replicas
INFO: Waiting for correct number of replicas to exist
STEP: PASSED!
STEP: Dumping logs from the "node-drain-fc9mnh" workload cluster
Failed to get logs for machine node-drain-fc9mnh-2bgzp, cluster node-drain-vqtjzo/node-drain-fc9mnh: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "node-drain-vqtjzo" namespace
STEP: Deleting cluster node-drain-vqtjzo/node-drain-fc9mnh
STEP: Deleting cluster node-drain-fc9mnh
INFO: Waiting for the Cluster node-drain-vqtjzo/node-drain-fc9mnh to be deleted
STEP: Waiting for cluster node-drain-fc9mnh to be deleted
STEP: Deleting namespace used for hosting the "node-drain" test spec
... skipping 50 lines ...
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Checking all the machines controlled by quick-start-v5b5g7-md-0-dph5s are in the "<None>" failure domain
INFO: Waiting for the machine pools to be provisioned
STEP: PASSED!
STEP: Dumping logs from the "quick-start-v5b5g7" workload cluster
Failed to get logs for machine quick-start-v5b5g7-md-0-dph5s-5f5686c9ff-67ltl, cluster quick-start-jigno3/quick-start-v5b5g7: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine quick-start-v5b5g7-qc9lw-nvlsw, cluster quick-start-jigno3/quick-start-v5b5g7: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "quick-start-jigno3" namespace
STEP: Deleting cluster quick-start-jigno3/quick-start-v5b5g7
STEP: Deleting cluster quick-start-v5b5g7
INFO: Waiting for the Cluster quick-start-jigno3/quick-start-v5b5g7 to be deleted
STEP: Waiting for cluster quick-start-v5b5g7 to be deleted
STEP: Deleting namespace used for hosting the "quick-start" test spec
... skipping 58 lines ...
Patching MachineHealthCheck unhealthy condition to one of the nodes
INFO: Patching the node condition to the node
Waiting for remediation
Waiting until the node with unhealthy node condition is remediated
STEP: PASSED!
STEP: Dumping logs from the "mhc-remediation-3cgavr" workload cluster
Failed to get logs for machine mhc-remediation-3cgavr-gbtng, cluster mhc-remediation-x5vuag/mhc-remediation-3cgavr: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine mhc-remediation-3cgavr-md-0-85b786fcc7-7m5gz, cluster mhc-remediation-x5vuag/mhc-remediation-3cgavr: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "mhc-remediation-x5vuag" namespace
STEP: Deleting cluster mhc-remediation-x5vuag/mhc-remediation-3cgavr
STEP: Deleting cluster mhc-remediation-3cgavr
INFO: Waiting for the Cluster mhc-remediation-x5vuag/mhc-remediation-3cgavr to be deleted
STEP: Waiting for cluster mhc-remediation-3cgavr to be deleted
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
... skipping 60 lines ...
Patching MachineHealthCheck unhealthy condition to one of the nodes
INFO: Patching the node condition to the node
Waiting for remediation
Waiting until the node with unhealthy node condition is remediated
STEP: PASSED!
STEP: Dumping logs from the "mhc-remediation-4c99es" workload cluster
Failed to get logs for machine mhc-remediation-4c99es-j2wpd, cluster mhc-remediation-1e1ffv/mhc-remediation-4c99es: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine mhc-remediation-4c99es-md-0-5d679c6d89-cfxdc, cluster mhc-remediation-1e1ffv/mhc-remediation-4c99es: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine mhc-remediation-4c99es-q4bws, cluster mhc-remediation-1e1ffv/mhc-remediation-4c99es: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine mhc-remediation-4c99es-z9cb6, cluster mhc-remediation-1e1ffv/mhc-remediation-4c99es: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "mhc-remediation-1e1ffv" namespace
STEP: Deleting cluster mhc-remediation-1e1ffv/mhc-remediation-4c99es
STEP: Deleting cluster mhc-remediation-4c99es
INFO: Waiting for the Cluster mhc-remediation-1e1ffv/mhc-remediation-4c99es to be deleted
STEP: Waiting for cluster mhc-remediation-4c99es to be deleted
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
... skipping 70 lines ...

JUnit report was created: /logs/artifacts/junit.e2e_suite.1.xml


Summarizing 1 Failure:

[Fail] Cluster creation with storage policy [It] should create a cluster successfully 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.2/framework/controlplane_helpers.go:153

Ran 14 of 17 Specs in 5779.590 seconds
FAIL! -- 13 Passed | 1 Failed | 0 Pending | 3 Skipped
--- FAIL: TestE2E (5779.60s)
FAIL

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 7 lines ...

To silence deprecations that can be silenced set the following environment variable:
  ACK_GINKGO_DEPRECATIONS=1.16.5


Ginkgo ran 1 suite in 1h37m20.606266803s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 3 lines ...
To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc

real	97m20.616s
user	6m17.826s
sys	1m25.154s
make: *** [Makefile:183: e2e] Error 1
Releasing IP claims
ipclaim.ipam.metal3.io "ip-claim-0ad0aee5c2e15fde16e754f42cf8ee7f9781afb8" deleted
ipclaim.ipam.metal3.io "workload-ip-claim-d4bcb4f81addc2e22bcfc0c1fdd825df4791b0a8" deleted
vpn
WARNING: [capv-prow@cluster-api-provider-vsphere.iam.gserviceaccount.com] appears to be a service account. Service account tokens cannot be revoked, but they will expire automatically. To prevent use of the service account token earlier than the expiration, delete or disable the parent service account. To explicitly delete the key associated with the service account use `gcloud iam service-accounts keys delete` instead.
Revoked credentials:
... skipping 13 lines ...