Error lines from build-log.txt
... skipping 175 lines ...
#18 exporting to image
#18 exporting layers
#18 exporting layers 0.4s done
#18 writing image sha256:4ca47a9c09da42a94f2c7058cde5f241e36608700bdf3f9fac1b10a5ef8c3db4 done
#18 naming to gcr.io/k8s-staging-cluster-api/capv-manager:e2e done
#18 DONE 0.4s
WARNING: failed to get git remote url: fatal: No remote configured to list refs from.
Activated service account credentials for: [capv-prow@cluster-api-provider-vsphere.iam.gserviceaccount.com]
Copying file:///logs/artifacts/tempContainers/image.tar [Content-Type=application/x-tar]...
[1 files][ 74.6 MiB/ 74.6 MiB]
Operation completed over 1 objects/74.6 MiB.
make -C /home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/hack/tools ginkgo
make[1]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/hack/tools'
... skipping 126 lines ...
#18 exporting to image
#18 exporting layers done
#18 writing image sha256:4ca47a9c09da42a94f2c7058cde5f241e36608700bdf3f9fac1b10a5ef8c3db4 done
#18 naming to gcr.io/k8s-staging-cluster-api/capv-manager:e2e done
#18 DONE 0.0s
WARNING: failed to get git remote url: fatal: No remote configured to list refs from.
make release-manifests
make[1]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
make manifests STAGE=release MANIFEST_DIR=out PULL_POLICY=IfNotPresent IMAGE=gcr.io/cluster-api-provider-vsphere/release/manager:v1.6.0
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
make generate-flavors FLAVOR_DIR=out
make[3]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
... skipping 243 lines ...
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase @ 01/30/23 05:28:40.9
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by md-scale-eitfq5/md-scale-143uie to be provisioned
STEP: Waiting for one control plane node to exist @ 01/30/23 05:29:10.944
[FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/30/23 05:39:10.946
STEP: Dumping logs from the "md-scale-143uie" workload cluster @ 01/30/23 05:39:10.946
Failed to get logs for Machine md-scale-143uie-kgwsp, Cluster md-scale-eitfq5/md-scale-143uie: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine md-scale-143uie-md-0-798b87d98b-sl4qb, Cluster md-scale-eitfq5/md-scale-143uie: dialing host IP address at : dial tcp :22: connect: connection refused
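The blank host in the second error above ("dial tcp :22") means the Machine never reported an IP address, so the log collector dialed an empty host on the SSH port. A minimal sketch of how an empty address produces exactly that dial string (variable names are illustrative, not the collector's actual code):

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	machineIP := "" // the Machine has no address recorded yet
	addr := net.JoinHostPort(machineIP, "22")
	fmt.Println(addr) // ":22" -- the bare host:port seen in the error

	// Dialing an empty host falls back to the local system; with
	// nothing listening on port 22 this yields the same
	// "connect: connection refused" as in the log.
	if _, err := net.Dial("tcp", addr); err != nil {
		fmt.Println(err) // dial tcp :22: connect: connection refused
	}
}
```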
STEP: Dumping all the Cluster API resources in the "md-scale-eitfq5" namespace @ 01/30/23 05:39:13.219
STEP: Deleting cluster md-scale-eitfq5/md-scale-143uie @ 01/30/23 05:39:13.488
STEP: Deleting cluster md-scale-143uie @ 01/30/23 05:39:13.504
INFO: Waiting for the Cluster md-scale-eitfq5/md-scale-143uie to be deleted
STEP: Waiting for cluster md-scale-143uie to be deleted @ 01/30/23 05:39:13.516
STEP: Deleting namespace used for hosting the "md-scale" test spec @ 01/30/23 05:39:33.528
INFO: Deleting namespace md-scale-eitfq5
• [FAILED] [655.404 seconds]
When testing MachineDeployment scale out/in [It] Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/md_scale.go:71
[FAILED] Timed out after 600.002s.
No Control Plane machines came into existence.
Expected
<bool>: false
to be true
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/30/23 05:39:10.946
------------------------------
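Four of the five failures in this run (all but the hardware-upgrade spec) share the signature above: a 10-minute Gomega Eventually at controlplane_helpers.go:154 polls for the first control-plane Machine and, on timeout, reports the final false value. A condensed sketch of that wait pattern, assuming a controller-runtime client (the helper name and wiring are illustrative, not the framework's verbatim code):

```go
package e2e

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// waitForOneControlPlaneMachine polls until at least one control-plane
// Machine exists for the named cluster; when the window lapses, Gomega
// prints "No Control Plane machines came into existence." followed by
// the Expected <bool>: false / to be true diff seen in the log.
func waitForOneControlPlaneMachine(ctx context.Context, c client.Client, namespace, clusterName string) {
	Eventually(func() (bool, error) {
		machines := &clusterv1.MachineList{}
		if err := c.List(ctx, machines,
			client.InNamespace(namespace),
			client.MatchingLabels{
				"cluster.x-k8s.io/cluster-name":  clusterName,
				"cluster.x-k8s.io/control-plane": "",
			},
		); err != nil {
			return false, err
		}
		return len(machines.Items) > 0, nil
	}, 10*time.Minute, 10*time.Second).Should(BeTrue(),
		"No Control Plane machines came into existence.")
}
```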
... skipping 31 lines ...
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase @ 01/30/23 05:39:34.647
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by capv-e2e-3dmh2k/storage-policy-qsfhup to be provisioned
STEP: Waiting for one control plane node to exist @ 01/30/23 05:39:54.687
[FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/30/23 05:49:54.688
STEP: Dumping all the Cluster API resources in the "capv-e2e-3dmh2k" namespace @ 01/30/23 05:49:54.688
STEP: cleaning up namespace: capv-e2e-3dmh2k @ 01/30/23 05:49:55.004
STEP: Deleting cluster storage-policy-qsfhup @ 01/30/23 05:49:55.024
INFO: Waiting for the Cluster capv-e2e-3dmh2k/storage-policy-qsfhup to be deleted
STEP: Waiting for cluster storage-policy-qsfhup to be deleted @ 01/30/23 05:49:55.036
STEP: Deleting namespace used for hosting test spec @ 01/30/23 05:50:05.047
INFO: Deleting namespace capv-e2e-3dmh2k
• [FAILED] [631.525 seconds]
Cluster creation with storage policy [It] should create a cluster successfully
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/storage_policy_test.go:57
[FAILED] Timed out after 600.001s.
No Control Plane machines came into existence.
Expected
<bool>: false
to be true
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/30/23 05:49:54.688
------------------------------
... skipping 40 lines ...
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane hw-upgrade-e2e-5wzeas/hw-upgrade-64jjbg to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready @ 01/30/23 05:53:33.915
STEP: Checking all the control plane machines are in the expected failure domains @ 01/30/23 05:53:33.92
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist @ 01/30/23 05:53:33.941
[FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/machinedeployment_helpers.go:131 @ 01/30/23 06:03:33.942
STEP: Dumping all the Cluster API resources in the "hw-upgrade-e2e-5wzeas" namespace @ 01/30/23 06:03:33.942
STEP: cleaning up namespace: hw-upgrade-e2e-5wzeas @ 01/30/23 06:03:34.219
STEP: Deleting cluster hw-upgrade-64jjbg @ 01/30/23 06:03:34.236
INFO: Waiting for the Cluster hw-upgrade-e2e-5wzeas/hw-upgrade-64jjbg to be deleted
STEP: Waiting for cluster hw-upgrade-64jjbg to be deleted @ 01/30/23 06:03:34.248
STEP: Deleting namespace used for hosting test spec @ 01/30/23 06:03:54.261
INFO: Deleting namespace hw-upgrade-e2e-5wzeas
• [FAILED] [829.213 seconds]
Hardware version upgrade [It] creates a cluster with VM hardware versions upgraded
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/hardware_upgrade_test.go:57
[FAILED] Timed out after 600.000s.
Timed out waiting for 1 nodes to be created for MachineDeployment hw-upgrade-e2e-5wzeas/hw-upgrade-64jjbg-md-0
Expected
<int>: 0
to equal
<int>: 1
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/machinedeployment_helpers.go:131 @ 01/30/23 06:03:33.942
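Unlike the other failures, this spec got past control-plane creation and timed out at machinedeployment_helpers.go:131 waiting for the MachineDeployment's worker node. The <int>: 0 / <int>: 1 diff is the shape Gomega gives an Eventually poll that compares a created-node count against the desired replicas. A hedged sketch of that pattern (helper name and client wiring are illustrative):

```go
package e2e

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// waitForMachineDeploymentNodes polls until `want` Machines of the
// MachineDeployment have a backing node; on timeout Gomega renders the
// Expected <int>: 0 / to equal <int>: 1 diff shown above.
func waitForMachineDeploymentNodes(ctx context.Context, c client.Client, namespace, mdName string, want int) {
	Eventually(func() (int, error) {
		machines := &clusterv1.MachineList{}
		if err := c.List(ctx, machines,
			client.InNamespace(namespace),
			client.MatchingLabels{"cluster.x-k8s.io/deployment-name": mdName},
		); err != nil {
			return 0, err
		}
		created := 0
		for _, m := range machines.Items {
			if m.Status.NodeRef != nil { // a node actually exists for this Machine
				created++
			}
		}
		return created, nil
	}, 10*time.Minute, 10*time.Second).Should(Equal(want),
		"Timed out waiting for %d nodes to be created for MachineDeployment %s/%s",
		want, namespace, mdName)
}
```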
... skipping 32 lines ...
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase @ 01/30/23 06:03:55.36
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by node-drain-t48b19/node-drain-oy0h7g to be provisioned
STEP: Waiting for one control plane node to exist @ 01/30/23 06:04:15.402
[FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/30/23 06:14:15.403
STEP: Dumping logs from the "node-drain-oy0h7g" workload cluster @ 01/30/23 06:14:15.403
Failed to get logs for Machine node-drain-oy0h7g-f4d5r, Cluster node-drain-t48b19/node-drain-oy0h7g: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine node-drain-oy0h7g-md-0-fbbb9bd5-n5229, Cluster node-drain-t48b19/node-drain-oy0h7g: dialing host IP address at : dial tcp :22: connect: connection refused
STEP: Dumping all the Cluster API resources in the "node-drain-t48b19" namespace @ 01/30/23 06:14:17.609
STEP: Deleting cluster node-drain-t48b19/node-drain-oy0h7g @ 01/30/23 06:14:17.876
STEP: Deleting cluster node-drain-oy0h7g @ 01/30/23 06:14:17.898
INFO: Waiting for the Cluster node-drain-t48b19/node-drain-oy0h7g to be deleted
STEP: Waiting for cluster node-drain-oy0h7g to be deleted @ 01/30/23 06:14:17.912
STEP: Deleting namespace used for hosting the "node-drain" test spec @ 01/30/23 06:14:37.928
INFO: Deleting namespace node-drain-t48b19
• [FAILED] [643.664 seconds]
When testing node drain timeout [It] A node should be forcefully removed if it cannot be drained in time
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/node_drain_timeout.go:83
[FAILED] Timed out after 600.000s.
No Control Plane machines came into existence.
Expected
<bool>: false
to be true
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/30/23 06:14:15.403
------------------------------
... skipping 31 lines ...
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase @ 01/30/23 06:14:39.061
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by quick-start-6i353f/quick-start-xfi3wj to be provisioned
STEP: Waiting for one control plane node to exist @ 01/30/23 06:15:29.124
[FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/30/23 06:25:29.125
STEP: Dumping logs from the "quick-start-xfi3wj" workload cluster @ 01/30/23 06:25:29.125
Failed to get logs for Machine quick-start-xfi3wj-8lg7f, Cluster quick-start-6i353f/quick-start-xfi3wj: dialing host IP address at 192.168.6.134: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Failed to get logs for Machine quick-start-xfi3wj-md-0-7c44dd7b8b-dx8tj, Cluster quick-start-6i353f/quick-start-xfi3wj: dialing host IP address at : dial tcp :22: connect: connection refused
STEP: Dumping all the Cluster API resources in the "quick-start-6i353f" namespace @ 01/30/23 06:25:30.289
STEP: Deleting cluster quick-start-6i353f/quick-start-xfi3wj @ 01/30/23 06:25:30.576
STEP: Deleting cluster quick-start-xfi3wj @ 01/30/23 06:25:30.597
INFO: Waiting for the Cluster quick-start-6i353f/quick-start-xfi3wj to be deleted
STEP: Waiting for cluster quick-start-xfi3wj to be deleted @ 01/30/23 06:25:30.61
[TIMEDOUT] in [AfterEach] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/quick_start.go:109 @ 01/30/23 06:25:47.622
• [FAILED] [669.675 seconds]
Cluster creation with [Ignition] bootstrap [PR-Blocking] [It] Should create a workload cluster
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/quick_start.go:78
[FAILED] Timed out after 600.000s.
No Control Plane machines came into existence.
Expected
<bool>: false
to be true
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/30/23 06:25:29.125
... skipping 9 lines ...
STEP: Cleaning up the vSphere session @ 01/30/23 06:25:47.624
STEP: Tearing down the management cluster @ 01/30/23 06:25:47.834
[SynchronizedAfterSuite] PASSED [1.566 seconds]
------------------------------
Summarizing 5 Failures:
[FAIL] When testing MachineDeployment scale out/in [It] Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154
[FAIL] Cluster creation with storage policy [It] should create a cluster successfully
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154
[FAIL] Hardware version upgrade [It] creates a cluster with VM hardware versions upgraded
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/machinedeployment_helpers.go:131
[FAIL] When testing node drain timeout [It] A node should be forcefully removed if it cannot be drained in time
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154
[FAIL] Cluster creation with [Ignition] bootstrap [PR-Blocking] [It] Should create a workload cluster
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154
Ran 5 of 17 Specs in 3545.258 seconds
FAIL! - Suite Timeout Elapsed -- 0 Passed | 5 Failed | 1 Pending | 11 Skipped
--- FAIL: TestE2E (3545.26s)
FAIL
Ginkgo ran 1 suite in 1h0m1.660855622s
Test Suite Failed
real 60m1.681s
user 5m48.457s
sys 1m9.216s
make: *** [Makefile:183: e2e] Error 1
Releasing IP claims
ipclaim.ipam.metal3.io "ip-claim-7e66c0bcc24c4ebc8f9c15d2bf88299ac2b16cee" deleted
ipclaim.ipam.metal3.io "workload-ip-claim-3695254e304148229b64c8b424282ba54eab9cd9" deleted
vpn
WARNING: [capv-prow@cluster-api-provider-vsphere.iam.gserviceaccount.com] appears to be a service account. Service account tokens cannot be revoked, but they will expire automatically. To prevent use of the service account token earlier than the expiration, delete or disable the parent service account. To explicitly delete the key associated with the service account use `gcloud iam service-accounts keys delete` instead.
Revoked credentials:
... skipping 13 lines ...