Result   | FAILURE
Tests    | 1 failed / 13 succeeded
Started  |
Elapsed  | 1h41m
Revision | release-1.5
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capv\-e2e\sCluster\screation\swith\sstorage\spolicy\sshould\screate\sa\scluster\ssuccessfully$'
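The line above is the exact CI invocation. A minimal local repro might look like the sketch below, assuming a checkout of the release-1.5 branch under GOPATH and a vSphere test environment configured the way the CI job configures it; the focus regex is copied verbatim from this run.

# Sketch: re-run only the failing spec locally (assumes CI-equivalent
# vSphere credentials and environment variables are already set).
cd "$(go env GOPATH)/src/sigs.k8s.io/cluster-api-provider-vsphere"
go run hack/e2e.go -v --test \
  --test_args='--ginkgo.focus=capv\-e2e\sCluster\screation\swith\sstorage\spolicy\sshould\screate\sa\scluster\ssuccessfully$'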
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/storage_policy_test.go:57
Timed out after 600.001s.
No Control Plane machines came into existence.
Expected
    <bool>: false
to be true
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.2/framework/controlplane_helpers.go:153
from junit.e2e_suite.1.xml
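The timeout comes from the cluster-api test framework's control-plane wait helper (controlplane_helpers.go:153): after the cluster template is applied, it polls for ten minutes for a Machine owned by the KubeadmControlPlane, and the `Expected <bool>: false to be true` assertion fires when none ever appears. When reproducing, diagnostics like the sketch below, run against the management cluster while the wait is in progress, usually show where provisioning stalled; the namespace is the one generated by this run, and capv-system/capv-controller-manager assumes a default CAPV installation.

# Sketch: inspect why no control-plane Machine came into existence.
kubectl get cluster,kubeadmcontrolplane,machines -n capv-e2e-8ps93m
kubectl describe vspheremachines -n capv-e2e-8ps93m
kubectl logs -n capv-system deployment/capv-controller-manager --tail=100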
STEP: Creating a namespace for hosting the "capv-e2e" test spec
INFO: Creating namespace capv-e2e-8ps93m
INFO: Creating event watcher for namespace "capv-e2e-8ps93m"
STEP: creating a workload cluster
INFO: Creating the workload cluster with name "storage-policy-twnpgz" using the "storage-policy" template (Kubernetes v1.23.5, 1 control-plane machines, 0 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster storage-policy-twnpgz --infrastructure (default) --kubernetes-version v1.23.5 --control-plane-machine-count 1 --worker-machine-count 0 --flavor storage-policy
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by capv-e2e-8ps93m/storage-policy-twnpgz to be provisioned
STEP: Waiting for one control plane node to exist
STEP: Dumping all the Cluster API resources in the "capv-e2e-8ps93m" namespace
STEP: cleaning up namespace: capv-e2e-8ps93m
STEP: Deleting cluster storage-policy-twnpgz
INFO: Waiting for the Cluster capv-e2e-8ps93m/storage-policy-twnpgz to be deleted
STEP: Waiting for cluster storage-policy-twnpgz to be deleted
STEP: Deleting namespace used for hosting test spec
INFO: Deleting namespace capv-e2e-8ps93m
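The `clusterctl config cluster` line in the log is the template-rendering step; generating the same manifest by hand would look roughly like the sketch below. The cluster name is the randomly generated one from this run, the VSPHERE_* template variables the e2e config normally supplies must be exported, and on current clusterctl releases the deprecated `config cluster` subcommand is spelled `generate cluster`.

# Sketch: render and apply the "storage-policy" flavor outside the harness.
clusterctl generate cluster storage-policy-twnpgz \
  --infrastructure vsphere \
  --kubernetes-version v1.23.5 \
  --control-plane-machine-count 1 \
  --worker-machine-count 0 \
  --flavor storage-policy > cluster.yaml
kubectl apply -f cluster.yaml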
capv-e2e Cluster Creation using Cluster API quick-start test [PR-Blocking] Should create a workload cluster
capv-e2e Cluster creation with [Ignition] bootstrap [PR-Blocking] Should create a workload cluster
capv-e2e Cluster creation with anti affined nodes should create a cluster with anti-affined nodes
capv-e2e ClusterAPI Machine Deployment Tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capv-e2e ClusterClass Creation using Cluster API quick-start test [PR-Blocking] [ClusterClass] Should create a workload cluster
capv-e2e DHCPOverrides configuration test when Creating a cluster with DHCPOverrides configured Only configures the network with the provided nameservers
capv-e2e Hardware version upgrade creates a cluster with VM hardware versions upgraded
capv-e2e Label nodes with ESXi host info creates a workload cluster whose nodes have the ESXi host info
capv-e2e When testing ClusterClass changes [ClusterClass] Should successfully rollout the managed topology upon changes to the ClusterClass
capv-e2e When testing MachineDeployment scale out/in Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capv-e2e When testing node drain timeout A node should be forcefully removed if it cannot be drained in time
capv-e2e When testing unhealthy machines remediation Should successfully trigger KCP remediation
capv-e2e When testing unhealthy machines remediation Should successfully trigger machine deployment remediation
capv-e2e Cluster creation with GPU devices as PCI passthrough [specialized-infra] should create the cluster with worker nodes having GPU cards added as PCI passthrough devices
capv-e2e ClusterAPI Upgrade Tests [clusterctl-Upgrade] Upgrading cluster from v1alpha4 to v1beta1 using clusterctl Should create a management cluster and then upgrade all the providers
capv-e2e When testing K8S conformance [Conformance] Should create a workload cluster and run kubetest
... skipping 581 lines ...
Patching MachineHealthCheck unhealthy condition to one of the nodes
INFO: Patching the node condition to the node
Waiting for remediation
Waiting until the node with unhealthy node condition is remediated
STEP: PASSED!
STEP: Dumping logs from the "mhc-remediation-vm4dji" workload cluster
Failed to get logs for machine mhc-remediation-vm4dji-md-0-6966c5c559-hcg9n, cluster mhc-remediation-q161mr/mhc-remediation-vm4dji: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine mhc-remediation-vm4dji-nfmck, cluster mhc-remediation-q161mr/mhc-remediation-vm4dji: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "mhc-remediation-q161mr" namespace
STEP: Deleting cluster mhc-remediation-q161mr/mhc-remediation-vm4dji
STEP: Deleting cluster mhc-remediation-vm4dji
INFO: Waiting for the Cluster mhc-remediation-q161mr/mhc-remediation-vm4dji to be deleted
STEP: Waiting for cluster mhc-remediation-vm4dji to be deleted
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
... skipping 60 lines ...
Patching MachineHealthCheck unhealthy condition to one of the nodes
INFO: Patching the node condition to the node
Waiting for remediation
Waiting until the node with unhealthy node condition is remediated
STEP: PASSED!
STEP: Dumping logs from the "mhc-remediation-gh9xdz" workload cluster
Failed to get logs for machine mhc-remediation-gh9xdz-h2vxd, cluster mhc-remediation-d8rdx5/mhc-remediation-gh9xdz: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine mhc-remediation-gh9xdz-lcgcp, cluster mhc-remediation-d8rdx5/mhc-remediation-gh9xdz: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine mhc-remediation-gh9xdz-md-0-7fc489cc46-2mrp2, cluster mhc-remediation-d8rdx5/mhc-remediation-gh9xdz: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine mhc-remediation-gh9xdz-vkg4z, cluster mhc-remediation-d8rdx5/mhc-remediation-gh9xdz: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "mhc-remediation-d8rdx5" namespace
STEP: Deleting cluster mhc-remediation-d8rdx5/mhc-remediation-gh9xdz
STEP: Deleting cluster mhc-remediation-gh9xdz
INFO: Waiting for the Cluster mhc-remediation-d8rdx5/mhc-remediation-gh9xdz to be deleted
STEP: Waiting for cluster mhc-remediation-gh9xdz to be deleted
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
... skipping 126 lines ...
STEP: Waiting for deployment node-drain-rvq5pt-unevictable-workload/unevictable-pod-wlh to be available
STEP: Scale down the controlplane of the workload cluster and make sure that nodes running workload can be deleted even the draining process is blocked.
INFO: Scaling controlplane node-drain-rvq5pt/node-drain-cbfyks from 3 to 1 replicas
INFO: Waiting for correct number of replicas to exist
STEP: PASSED!
STEP: Dumping logs from the "node-drain-cbfyks" workload cluster
Failed to get logs for machine node-drain-cbfyks-zjxqp, cluster node-drain-rvq5pt/node-drain-cbfyks: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "node-drain-rvq5pt" namespace
STEP: Deleting cluster node-drain-rvq5pt/node-drain-cbfyks
STEP: Deleting cluster node-drain-cbfyks
INFO: Waiting for the Cluster node-drain-rvq5pt/node-drain-cbfyks to be deleted
STEP: Waiting for cluster node-drain-cbfyks to be deleted
STEP: Deleting namespace used for hosting the "node-drain" test spec
... skipping 60 lines ...
INFO: Waiting for MachineDeployment rollout for MachineDeploymentTopology "md-0" (class "quick-start-worker") to complete.
STEP: Rebasing the Cluster to a ClusterClass with a modified label for MachineDeployments and wait for changes to be applied to the MachineDeployment objects
INFO: Waiting for MachineDeployment rollout to complete.
INFO: Waiting for MachineDeployment rollout for MachineDeploymentTopology "md-0" (class "quick-start-worker") to complete.
STEP: PASSED!
STEP: Dumping logs from the "clusterclass-changes-oeiihf" workload cluster
Failed to get logs for machine clusterclass-changes-oeiihf-md-0-58gc7-67f7579b58-dxft2, cluster clusterclass-changes-z16ck0/clusterclass-changes-oeiihf: dialing host IP address at : dial tcp :22: connect: connection refused
Failed to get logs for machine clusterclass-changes-oeiihf-md-0-58gc7-6c79cdd5d6-pqg97, cluster clusterclass-changes-z16ck0/clusterclass-changes-oeiihf: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine clusterclass-changes-oeiihf-pslgt-6fgmp, cluster clusterclass-changes-z16ck0/clusterclass-changes-oeiihf: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "clusterclass-changes-z16ck0" namespace
STEP: Deleting cluster clusterclass-changes-z16ck0/clusterclass-changes-oeiihf
STEP: Deleting cluster clusterclass-changes-oeiihf
INFO: Waiting for the Cluster clusterclass-changes-z16ck0/clusterclass-changes-oeiihf to be deleted
STEP: Waiting for cluster clusterclass-changes-oeiihf to be deleted
STEP: Deleting namespace used for hosting the "clusterclass-changes" test spec
... skipping 118 lines ...
INFO: Waiting for correct number of replicas to exist
STEP: Scaling the MachineDeployment down to 1
INFO: Scaling machine deployment md-scale-rl63yf/md-scale-jbpf7f-md-0 from 3 to 1 replicas
INFO: Waiting for correct number of replicas to exist
STEP: PASSED!
STEP: Dumping logs from the "md-scale-jbpf7f" workload cluster
Failed to get logs for machine md-scale-jbpf7f-jf4fg, cluster md-scale-rl63yf/md-scale-jbpf7f: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine md-scale-jbpf7f-md-0-74f69976d5-vsk9k, cluster md-scale-rl63yf/md-scale-jbpf7f: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "md-scale-rl63yf" namespace
STEP: Deleting cluster md-scale-rl63yf/md-scale-jbpf7f
STEP: Deleting cluster md-scale-jbpf7f
INFO: Waiting for the Cluster md-scale-rl63yf/md-scale-jbpf7f to be deleted
STEP: Waiting for cluster md-scale-jbpf7f to be deleted
STEP: Deleting namespace used for hosting the "md-scale" test spec
... skipping 52 lines ...
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Checking all the machines controlled by quick-start-do2st3-md-0 are in the "<None>" failure domain
INFO: Waiting for the machine pools to be provisioned
STEP: PASSED!
STEP: Dumping logs from the "quick-start-do2st3" workload cluster
Failed to get logs for machine quick-start-do2st3-chwz5, cluster quick-start-yqdynr/quick-start-do2st3: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine quick-start-do2st3-md-0-5b6c4f5dc7-q2x2g, cluster quick-start-yqdynr/quick-start-do2st3: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "quick-start-yqdynr" namespace
STEP: Deleting cluster quick-start-yqdynr/quick-start-do2st3
STEP: Deleting cluster quick-start-do2st3
INFO: Waiting for the Cluster quick-start-yqdynr/quick-start-do2st3 to be deleted
STEP: Waiting for cluster quick-start-do2st3 to be deleted
STEP: Deleting namespace used for hosting the "quick-start" test spec
... skipping 114 lines ...
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Checking all the machines controlled by quick-start-gvnz9x-md-0-vqw96 are in the "<None>" failure domain
INFO: Waiting for the machine pools to be provisioned
STEP: PASSED!
STEP: Dumping logs from the "quick-start-gvnz9x" workload cluster
Failed to get logs for machine quick-start-gvnz9x-68mx6-pm7zq, cluster quick-start-c0vptn/quick-start-gvnz9x: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine quick-start-gvnz9x-md-0-vqw96-5f77586546-cv6m2, cluster quick-start-c0vptn/quick-start-gvnz9x: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "quick-start-c0vptn" namespace
STEP: Deleting cluster quick-start-c0vptn/quick-start-gvnz9x
STEP: Deleting cluster quick-start-gvnz9x
INFO: Waiting for the Cluster quick-start-c0vptn/quick-start-gvnz9x to be deleted
STEP: Waiting for cluster quick-start-gvnz9x to be deleted
STEP: Deleting namespace used for hosting the "quick-start" test spec
... skipping 56 lines ...
INFO: Waiting for rolling upgrade to start.
INFO: Waiting for MachineDeployment rolling upgrade to start
INFO: Waiting for rolling upgrade to complete.
INFO: Waiting for MachineDeployment rolling upgrade to complete
STEP: PASSED!
STEP: Dumping logs from the "md-rollout-d5ncym" workload cluster
Failed to get logs for machine md-rollout-d5ncym-lbrnd, cluster md-rollout-68uj8q/md-rollout-d5ncym: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine md-rollout-d5ncym-md-0-68b84c8d45-z5bmm, cluster md-rollout-68uj8q/md-rollout-d5ncym: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "md-rollout-68uj8q" namespace
STEP: Deleting cluster md-rollout-68uj8q/md-rollout-d5ncym
STEP: Deleting cluster md-rollout-d5ncym
INFO: Waiting for the Cluster md-rollout-68uj8q/md-rollout-d5ncym to be deleted
STEP: Waiting for cluster md-rollout-d5ncym to be deleted
STEP: Deleting namespace used for hosting the "md-rollout" test spec
... skipping 52 lines ...
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Checking all the machines controlled by quick-start-rfz7e5-md-0 are in the "<None>" failure domain
INFO: Waiting for the machine pools to be provisioned
STEP: PASSED!
STEP: Dumping logs from the "quick-start-rfz7e5" workload cluster
Failed to get logs for machine quick-start-rfz7e5-c2mcc, cluster quick-start-1spalk/quick-start-rfz7e5: dialing host IP address at 192.168.6.60: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Failed to get logs for machine quick-start-rfz7e5-md-0-5b7954669f-qp5pw, cluster quick-start-1spalk/quick-start-rfz7e5: dialing host IP address at 192.168.6.124: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
STEP: Dumping all the Cluster API resources in the "quick-start-1spalk" namespace
STEP: Deleting cluster quick-start-1spalk/quick-start-rfz7e5
STEP: Deleting cluster quick-start-rfz7e5
INFO: Waiting for the Cluster quick-start-1spalk/quick-start-rfz7e5 to be deleted
STEP: Waiting for cluster quick-start-rfz7e5 to be deleted
STEP: Deleting namespace used for hosting the "quick-start" test spec
... skipping 145 lines ...
JUnit report was created: /logs/artifacts/junit.e2e_suite.1.xml

Summarizing 1 Failure:

[Fail] Cluster creation with storage policy [It] should create a cluster successfully
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.2/framework/controlplane_helpers.go:153

Ran 14 of 17 Specs in 5633.139 seconds
FAIL! -- 13 Passed | 1 Failed | 0 Pending | 3 Skipped
--- FAIL: TestE2E (5633.15s)
FAIL

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes. A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021. Please give the RC a try and send us feedback!
- To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 7 lines ...
To silence deprecations that can be silenced set the following environment variable: ACK_GINKGO_DEPRECATIONS=1.16.5

Ginkgo ran 1 suite in 1h35m12.651852077s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes. A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021. Please give the RC a try and send us feedback!
- To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 3 lines ...
To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc

real    95m12.666s
user    7m38.931s
sys     2m14.014s
make: *** [Makefile:183: e2e] Error 1
Releasing IP claims
ipclaim.ipam.metal3.io "ip-claim-a599ce6946a03c43796762ae25b216b28947f77d" deleted
ipclaim.ipam.metal3.io "workload-ip-claim-6c3bdaed6fcae6982ddf8a76ebbda83343a6f720" deleted
vpn
WARNING: [capv-prow@cluster-api-provider-vsphere.iam.gserviceaccount.com] appears to be a service account. Service account tokens cannot be revoked, but they will expire automatically. To prevent use of the service account token earlier than the expiration, delete or disable the parent service account. To explicitly delete the key associated with the service account use `gcloud iam service-accounts keys delete` instead. Revoked credentials:
... skipping 13 lines ...
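As the Ginkgo notices above state, the deprecation and RC banners can be silenced on subsequent runs by exporting the variables Ginkgo itself suggests before invoking the suite:

# Silence the Ginkgo v1 deprecation and 2.0 RC banners, per the notices above.
export ACK_GINKGO_DEPRECATIONS=1.16.5
export ACK_GINKGO_RC=true    # or: touch $HOME/.ack-ginkgo-rc
make e2e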