Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2023-01-30 05:21
Elapsed: 1h4m
Revision: main



Error lines from build-log.txt

... skipping 175 lines ...
#18 exporting to image
#18 exporting layers
#18 exporting layers 0.4s done
#18 writing image sha256:4ca47a9c09da42a94f2c7058cde5f241e36608700bdf3f9fac1b10a5ef8c3db4 done
#18 naming to gcr.io/k8s-staging-cluster-api/capv-manager:e2e done
#18 DONE 0.4s
WARNING: failed to get git remote url: fatal: No remote configured to list refs from.
Activated service account credentials for: [capv-prow@cluster-api-provider-vsphere.iam.gserviceaccount.com]
Copying file:///logs/artifacts/tempContainers/image.tar [Content-Type=application/x-tar]...
Operation completed over 1 objects/74.6 MiB.                                     
make -C /home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/hack/tools ginkgo
make[1]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/hack/tools'
... skipping 126 lines ...

#18 exporting to image
#18 exporting layers done
#18 writing image sha256:4ca47a9c09da42a94f2c7058cde5f241e36608700bdf3f9fac1b10a5ef8c3db4 done
#18 naming to gcr.io/k8s-staging-cluster-api/capv-manager:e2e done
#18 DONE 0.0s
WARNING: failed to get git remote url: fatal: No remote configured to list refs from.
make release-manifests
make[1]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
make manifests STAGE=release MANIFEST_DIR=out PULL_POLICY=IfNotPresent IMAGE=gcr.io/cluster-api-provider-vsphere/release/manager:v1.6.0
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
make generate-flavors FLAVOR_DIR=out
make[3]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
... skipping 243 lines ...

  INFO: Waiting for the cluster infrastructure to be provisioned
  STEP: Waiting for cluster to enter the provisioned phase @ 01/30/23 05:28:40.9
  INFO: Waiting for control plane to be initialized
  INFO: Waiting for the first control plane machine managed by md-scale-eitfq5/md-scale-143uie to be provisioned
  STEP: Waiting for one control plane node to exist @ 01/30/23 05:29:10.944
  [FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/30/23 05:39:10.946
  STEP: Dumping logs from the "md-scale-143uie" workload cluster @ 01/30/23 05:39:10.946
Failed to get logs for Machine md-scale-143uie-kgwsp, Cluster md-scale-eitfq5/md-scale-143uie: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine md-scale-143uie-md-0-798b87d98b-sl4qb, Cluster md-scale-eitfq5/md-scale-143uie: dialing host IP address at : dial tcp :22: connect: connection refused
  STEP: Dumping all the Cluster API resources in the "md-scale-eitfq5" namespace @ 01/30/23 05:39:13.219
  STEP: Deleting cluster md-scale-eitfq5/md-scale-143uie @ 01/30/23 05:39:13.488
  STEP: Deleting cluster md-scale-143uie @ 01/30/23 05:39:13.504
  INFO: Waiting for the Cluster md-scale-eitfq5/md-scale-143uie to be deleted
  STEP: Waiting for cluster md-scale-143uie to be deleted @ 01/30/23 05:39:13.516
  STEP: Deleting namespace used for hosting the "md-scale" test spec @ 01/30/23 05:39:33.528
  INFO: Deleting namespace md-scale-eitfq5
• [FAILED] [655.404 seconds]
When testing MachineDeployment scale out/in [It] Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/md_scale.go:71

  [FAILED] Timed out after 600.002s.
  No Control Plane machines came into existence. 
  Expected
      <bool>: false
  to be true
  In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/30/23 05:39:10.946
------------------------------
... skipping 31 lines ...

  INFO: Waiting for the cluster infrastructure to be provisioned
  STEP: Waiting for cluster to enter the provisioned phase @ 01/30/23 05:39:34.647
  INFO: Waiting for control plane to be initialized
  INFO: Waiting for the first control plane machine managed by capv-e2e-3dmh2k/storage-policy-qsfhup to be provisioned
  STEP: Waiting for one control plane node to exist @ 01/30/23 05:39:54.687
  [FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/30/23 05:49:54.688
  STEP: Dumping all the Cluster API resources in the "capv-e2e-3dmh2k" namespace @ 01/30/23 05:49:54.688
  STEP: cleaning up namespace: capv-e2e-3dmh2k @ 01/30/23 05:49:55.004
  STEP: Deleting cluster storage-policy-qsfhup @ 01/30/23 05:49:55.024
  INFO: Waiting for the Cluster capv-e2e-3dmh2k/storage-policy-qsfhup to be deleted
  STEP: Waiting for cluster storage-policy-qsfhup to be deleted @ 01/30/23 05:49:55.036
  STEP: Deleting namespace used for hosting test spec @ 01/30/23 05:50:05.047
  INFO: Deleting namespace capv-e2e-3dmh2k
• [FAILED] [631.525 seconds]
Cluster creation with storage policy [It] should create a cluster successfully
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/storage_policy_test.go:57

  [FAILED] Timed out after 600.001s.
  No Control Plane machines came into existence. 
  Expected
      <bool>: false
  to be true
  In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/30/23 05:49:54.688
------------------------------
... skipping 40 lines ...
  INFO: Waiting for control plane to be ready
  INFO: Waiting for control plane hw-upgrade-e2e-5wzeas/hw-upgrade-64jjbg to be ready (implies underlying nodes to be ready as well)
  STEP: Waiting for the control plane to be ready @ 01/30/23 05:53:33.915
  STEP: Checking all the control plane machines are in the expected failure domains @ 01/30/23 05:53:33.92
  INFO: Waiting for the machine deployments to be provisioned
  STEP: Waiting for the workload nodes to exist @ 01/30/23 05:53:33.941
  [FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/machinedeployment_helpers.go:131 @ 01/30/23 06:03:33.942
  STEP: Dumping all the Cluster API resources in the "hw-upgrade-e2e-5wzeas" namespace @ 01/30/23 06:03:33.942
  STEP: cleaning up namespace: hw-upgrade-e2e-5wzeas @ 01/30/23 06:03:34.219
  STEP: Deleting cluster hw-upgrade-64jjbg @ 01/30/23 06:03:34.236
  INFO: Waiting for the Cluster hw-upgrade-e2e-5wzeas/hw-upgrade-64jjbg to be deleted
  STEP: Waiting for cluster hw-upgrade-64jjbg to be deleted @ 01/30/23 06:03:34.248
  STEP: Deleting namespace used for hosting test spec @ 01/30/23 06:03:54.261
  INFO: Deleting namespace hw-upgrade-e2e-5wzeas
• [FAILED] [829.213 seconds]
Hardware version upgrade [It] creates a cluster with VM hardware versions upgraded
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/hardware_upgrade_test.go:57

  [FAILED] Timed out after 600.000s.
  Timed out waiting for 1 nodes to be created for MachineDeployment hw-upgrade-e2e-5wzeas/hw-upgrade-64jjbg-md-0
  Expected
      <int>: 0
  to equal
      <int>: 1
  In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/machinedeployment_helpers.go:131 @ 01/30/23 06:03:33.942
... skipping 32 lines ...

  INFO: Waiting for the cluster infrastructure to be provisioned
  STEP: Waiting for cluster to enter the provisioned phase @ 01/30/23 06:03:55.36
  INFO: Waiting for control plane to be initialized
  INFO: Waiting for the first control plane machine managed by node-drain-t48b19/node-drain-oy0h7g to be provisioned
  STEP: Waiting for one control plane node to exist @ 01/30/23 06:04:15.402
  [FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/30/23 06:14:15.403
  STEP: Dumping logs from the "node-drain-oy0h7g" workload cluster @ 01/30/23 06:14:15.403
Failed to get logs for Machine node-drain-oy0h7g-f4d5r, Cluster node-drain-t48b19/node-drain-oy0h7g: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine node-drain-oy0h7g-md-0-fbbb9bd5-n5229, Cluster node-drain-t48b19/node-drain-oy0h7g: dialing host IP address at : dial tcp :22: connect: connection refused
  STEP: Dumping all the Cluster API resources in the "node-drain-t48b19" namespace @ 01/30/23 06:14:17.609
  STEP: Deleting cluster node-drain-t48b19/node-drain-oy0h7g @ 01/30/23 06:14:17.876
  STEP: Deleting cluster node-drain-oy0h7g @ 01/30/23 06:14:17.898
  INFO: Waiting for the Cluster node-drain-t48b19/node-drain-oy0h7g to be deleted
  STEP: Waiting for cluster node-drain-oy0h7g to be deleted @ 01/30/23 06:14:17.912
  STEP: Deleting namespace used for hosting the "node-drain" test spec @ 01/30/23 06:14:37.928
  INFO: Deleting namespace node-drain-t48b19
• [FAILED] [643.664 seconds]
When testing node drain timeout [It] A node should be forcefully removed if it cannot be drained in time
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/node_drain_timeout.go:83

  [FAILED] Timed out after 600.000s.
  No Control Plane machines came into existence. 
  Expected
      <bool>: false
  to be true
  In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/30/23 06:14:15.403
------------------------------
... skipping 31 lines ...

  INFO: Waiting for the cluster infrastructure to be provisioned
  STEP: Waiting for cluster to enter the provisioned phase @ 01/30/23 06:14:39.061
  INFO: Waiting for control plane to be initialized
  INFO: Waiting for the first control plane machine managed by quick-start-6i353f/quick-start-xfi3wj to be provisioned
  STEP: Waiting for one control plane node to exist @ 01/30/23 06:15:29.124
  [FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/30/23 06:25:29.125
  STEP: Dumping logs from the "quick-start-xfi3wj" workload cluster @ 01/30/23 06:25:29.125
Failed to get logs for Machine quick-start-xfi3wj-8lg7f, Cluster quick-start-6i353f/quick-start-xfi3wj: dialing host IP address at 192.168.6.134: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Failed to get logs for Machine quick-start-xfi3wj-md-0-7c44dd7b8b-dx8tj, Cluster quick-start-6i353f/quick-start-xfi3wj: dialing host IP address at : dial tcp :22: connect: connection refused
  STEP: Dumping all the Cluster API resources in the "quick-start-6i353f" namespace @ 01/30/23 06:25:30.289
  STEP: Deleting cluster quick-start-6i353f/quick-start-xfi3wj @ 01/30/23 06:25:30.576
  STEP: Deleting cluster quick-start-xfi3wj @ 01/30/23 06:25:30.597
  INFO: Waiting for the Cluster quick-start-6i353f/quick-start-xfi3wj to be deleted
  STEP: Waiting for cluster quick-start-xfi3wj to be deleted @ 01/30/23 06:25:30.61
  [TIMEDOUT] in [AfterEach] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/quick_start.go:109 @ 01/30/23 06:25:47.622
• [FAILED] [669.675 seconds]
Cluster creation with [Ignition] bootstrap [PR-Blocking] [It] Should create a workload cluster
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/e2e/quick_start.go:78

  [FAILED] Timed out after 600.000s.
  No Control Plane machines came into existence. 
  Expected
      <bool>: false
  to be true
  In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154 @ 01/30/23 06:25:29.125

... skipping 9 lines ...
  STEP: Cleaning up the vSphere session @ 01/30/23 06:25:47.624
  STEP: Tearing down the management cluster @ 01/30/23 06:25:47.834
[SynchronizedAfterSuite] PASSED [1.566 seconds]
------------------------------

Summarizing 5 Failures:
  [FAIL] When testing MachineDeployment scale out/in [It] Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154
  [FAIL] Cluster creation with storage policy [It] should create a cluster successfully
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154
  [FAIL] Hardware version upgrade [It] creates a cluster with VM hardware versions upgraded
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/machinedeployment_helpers.go:131
  [FAIL] When testing node drain timeout [It] A node should be forcefully removed if it cannot be drained in time
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154
  [FAIL] Cluster creation with [Ignition] bootstrap [PR-Blocking] [It] Should create a workload cluster
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.0/framework/controlplane_helpers.go:154

Ran 5 of 17 Specs in 3545.258 seconds
FAIL! - Suite Timeout Elapsed -- 0 Passed | 5 Failed | 1 Pending | 11 Skipped
--- FAIL: TestE2E (3545.26s)
FAIL

Ginkgo ran 1 suite in 1h0m1.660855622s

Test Suite Failed

real	60m1.681s
user	5m48.457s
sys	1m9.216s
make: *** [Makefile:183: e2e] Error 1
Releasing IP claims
ipclaim.ipam.metal3.io "ip-claim-7e66c0bcc24c4ebc8f9c15d2bf88299ac2b16cee" deleted
ipclaim.ipam.metal3.io "workload-ip-claim-3695254e304148229b64c8b424282ba54eab9cd9" deleted
vpn
WARNING: [capv-prow@cluster-api-provider-vsphere.iam.gserviceaccount.com] appears to be a service account. Service account tokens cannot be revoked, but they will expire automatically. To prevent use of the service account token earlier than the expiration, delete or disable the parent service account. To explicitly delete the key associated with the service account use `gcloud iam service-accounts keys delete` instead.
Revoked credentials:
... skipping 13 lines ...