Result   | FAILURE
Tests    | 1 failed / 13 succeeded
Started  |
Elapsed  | 1h42m
Revision | release-1.5
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capv\-e2e\sCluster\screation\swith\sstorage\spolicy\sshould\screate\sa\scluster\ssuccessfully$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/storage_policy_test.go:57
Timed out after 600.000s.
No Control Plane machines came into existence.
Expected
    <bool>: false
to be true
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.2/framework/controlplane_helpers.go:153
(from junit.e2e_suite.1.xml)
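The 600-second limit is not hard-coded in the spec; it comes from the suite's e2e interval configuration. A hedged way to confirm which interval applied (the config file paths below are assumptions and vary by branch):

    # Sketch: locate the control-plane wait interval in the e2e config.
    # Paths such as test/e2e/config/vsphere-ci.yaml are assumptions; adjust to the branch.
    grep -n -A1 "wait-control-plane" test/e2e/config/*.yaml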
STEP: Creating a namespace for hosting the "capv-e2e" test spec
INFO: Creating namespace capv-e2e-4podwu
INFO: Creating event watcher for namespace "capv-e2e-4podwu"
STEP: creating a workload cluster
INFO: Creating the workload cluster with name "storage-policy-r6pw4i" using the "storage-policy" template (Kubernetes v1.23.5, 1 control-plane machines, 0 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster storage-policy-r6pw4i --infrastructure (default) --kubernetes-version v1.23.5 --control-plane-machine-count 1 --worker-machine-count 0 --flavor storage-policy
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by capv-e2e-4podwu/storage-policy-r6pw4i to be provisioned
STEP: Waiting for one control plane node to exist
STEP: Dumping all the Cluster API resources in the "capv-e2e-4podwu" namespace
STEP: cleaning up namespace: capv-e2e-4podwu
STEP: Deleting cluster storage-policy-r6pw4i
INFO: Waiting for the Cluster capv-e2e-4podwu/storage-policy-r6pw4i to be deleted
STEP: Waiting for cluster storage-policy-r6pw4i to be deleted
STEP: Deleting namespace used for hosting test spec
INFO: Deleting namespace capv-e2e-4podwu
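Because the spec failed before any control-plane Machine existed, the stall is usually visible on the CAPV objects rather than on the Machines. A hypothetical triage pass against the management cluster during a re-run (the namespace is generated per run, and the capv-system/capv-controller-manager names assume a default clusterctl install):

    # Inspect the stuck cluster's CAPI/CAPV objects (namespace is run-specific).
    kubectl get cluster,kubeadmcontrolplane,machines -n capv-e2e-4podwu
    kubectl get vspheremachines,vspherevms -n capv-e2e-4podwu -o wide
    # Storage-policy placement errors tend to surface as conditions on the
    # VSphereVM or in the CAPV controller log.
    kubectl describe vspherevms -n capv-e2e-4podwu
    kubectl logs -n capv-system deployment/capv-controller-manager | grep -i "storage policy"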
Passed (13):
capv-e2e Cluster Creation using Cluster API quick-start test [PR-Blocking] Should create a workload cluster
capv-e2e Cluster creation with [Ignition] bootstrap [PR-Blocking] Should create a workload cluster
capv-e2e Cluster creation with anti affined nodes should create a cluster with anti-affined nodes
capv-e2e ClusterAPI Machine Deployment Tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capv-e2e ClusterClass Creation using Cluster API quick-start test [PR-Blocking] [ClusterClass] Should create a workload cluster
capv-e2e DHCPOverrides configuration test when Creating a cluster with DHCPOverrides configured Only configures the network with the provided nameservers
capv-e2e Hardware version upgrade creates a cluster with VM hardware versions upgraded
capv-e2e Label nodes with ESXi host info creates a workload cluster whose nodes have the ESXi host info
capv-e2e When testing ClusterClass changes [ClusterClass] Should successfully rollout the managed topology upon changes to the ClusterClass
capv-e2e When testing MachineDeployment scale out/in Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capv-e2e When testing node drain timeout A node should be forcefully removed if it cannot be drained in time
capv-e2e When testing unhealthy machines remediation Should successfully trigger KCP remediation
capv-e2e When testing unhealthy machines remediation Should successfully trigger machine deployment remediation
Skipped (3):
capv-e2e Cluster creation with GPU devices as PCI passthrough [specialized-infra] should create the cluster with worker nodes having GPU cards added as PCI passthrough devices
capv-e2e ClusterAPI Upgrade Tests [clusterctl-Upgrade] Upgrading cluster from v1alpha4 to v1beta1 using clusterctl Should create a management cluster and then upgrade all the providers
capv-e2e When testing K8S conformance [Conformance] Should create a workload cluster and run kubetest
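To iterate on the one failing spec instead of the full 1h37m suite, the --ginkgo.focus regex from the command at the top can be reused as-is; a minimal sketch, assuming a locally configured vSphere test environment:

    # Re-run only the storage-policy spec (vSphere credential setup omitted).
    go run hack/e2e.go -v --test \
      --test_args='--ginkgo.focus=capv\-e2e\sCluster\screation\swith\sstorage\spolicy\sshould\screate\sa\scluster\ssuccessfully$'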
... skipping 721 lines ...
INFO: Waiting for correct number of replicas to exist
STEP: Scaling the MachineDeployment down to 1
INFO: Scaling machine deployment md-scale-0b4rp4/md-scale-2iw2ib-md-0 from 3 to 1 replicas
INFO: Waiting for correct number of replicas to exist
STEP: PASSED!
STEP: Dumping logs from the "md-scale-2iw2ib" workload cluster
Failed to get logs for machine md-scale-2iw2ib-ktfjf, cluster md-scale-0b4rp4/md-scale-2iw2ib: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine md-scale-2iw2ib-md-0-54c9bfbb97-skt5s, cluster md-scale-0b4rp4/md-scale-2iw2ib: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "md-scale-0b4rp4" namespace
STEP: Deleting cluster md-scale-0b4rp4/md-scale-2iw2ib
STEP: Deleting cluster md-scale-2iw2ib
INFO: Waiting for the Cluster md-scale-0b4rp4/md-scale-2iw2ib to be deleted
STEP: Waiting for cluster md-scale-2iw2ib to be deleted
STEP: Deleting namespace used for hosting the "md-scale" test spec
... skipping 119 lines ...
INFO: Waiting for rolling upgrade to start.
INFO: Waiting for MachineDeployment rolling upgrade to start
INFO: Waiting for rolling upgrade to complete.
INFO: Waiting for MachineDeployment rolling upgrade to complete
STEP: PASSED!
STEP: Dumping logs from the "md-rollout-d17aqo" workload cluster
Failed to get logs for machine md-rollout-d17aqo-2mlp9, cluster md-rollout-82v9pw/md-rollout-d17aqo: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine md-rollout-d17aqo-md-0-6b887575f4-pm5gd, cluster md-rollout-82v9pw/md-rollout-d17aqo: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "md-rollout-82v9pw" namespace
STEP: Deleting cluster md-rollout-82v9pw/md-rollout-d17aqo
STEP: Deleting cluster md-rollout-d17aqo
INFO: Waiting for the Cluster md-rollout-82v9pw/md-rollout-d17aqo to be deleted
STEP: Waiting for cluster md-rollout-d17aqo to be deleted
STEP: Deleting namespace used for hosting the "md-rollout" test spec
... skipping 52 lines ...
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Checking all the machines controlled by quick-start-d25jq5-md-0 are in the "<None>" failure domain
INFO: Waiting for the machine pools to be provisioned
STEP: PASSED!
STEP: Dumping logs from the "quick-start-d25jq5" workload cluster
Failed to get logs for machine quick-start-d25jq5-dczhm, cluster quick-start-36bp9m/quick-start-d25jq5: dialing host IP address at 192.168.6.91: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Failed to get logs for machine quick-start-d25jq5-md-0-fcd46ddb9-q2dlt, cluster quick-start-36bp9m/quick-start-d25jq5: dialing host IP address at 192.168.6.92: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
STEP: Dumping all the Cluster API resources in the "quick-start-36bp9m" namespace
STEP: Deleting cluster quick-start-36bp9m/quick-start-d25jq5
STEP: Deleting cluster quick-start-d25jq5
INFO: Waiting for the Cluster quick-start-36bp9m/quick-start-d25jq5 to be deleted
STEP: Waiting for cluster quick-start-d25jq5 to be deleted
STEP: Deleting namespace used for hosting the "quick-start" test spec
... skipping 52 lines ...
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Checking all the machines controlled by quick-start-wtzmlj-md-0 are in the "<None>" failure domain
INFO: Waiting for the machine pools to be provisioned
STEP: PASSED!
STEP: Dumping logs from the "quick-start-wtzmlj" workload cluster
Failed to get logs for machine quick-start-wtzmlj-ds5lf, cluster quick-start-n4mrhn/quick-start-wtzmlj: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine quick-start-wtzmlj-md-0-5fcd45cb5d-kqfgv, cluster quick-start-n4mrhn/quick-start-wtzmlj: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "quick-start-n4mrhn" namespace
STEP: Deleting cluster quick-start-n4mrhn/quick-start-wtzmlj
STEP: Deleting cluster quick-start-wtzmlj
INFO: Waiting for the Cluster quick-start-n4mrhn/quick-start-wtzmlj to be deleted
STEP: Waiting for cluster quick-start-wtzmlj to be deleted
STEP: Deleting namespace used for hosting the "quick-start" test spec
... skipping 60 lines ...
INFO: Waiting for MachineDeployment rollout for MachineDeploymentTopology "md-0" (class "quick-start-worker") to complete.
STEP: Rebasing the Cluster to a ClusterClass with a modified label for MachineDeployments and wait for changes to be applied to the MachineDeployment objects
INFO: Waiting for MachineDeployment rollout to complete.
INFO: Waiting for MachineDeployment rollout for MachineDeploymentTopology "md-0" (class "quick-start-worker") to complete.
STEP: PASSED!
STEP: Dumping logs from the "clusterclass-changes-dbjap4" workload cluster
Failed to get logs for machine clusterclass-changes-dbjap4-bpqfv-2clqc, cluster clusterclass-changes-ni63mi/clusterclass-changes-dbjap4: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine clusterclass-changes-dbjap4-md-0-kgmcg-68bf8c7f7b-g5984, cluster clusterclass-changes-ni63mi/clusterclass-changes-dbjap4: dialing host IP address at : dial tcp :22: connect: connection refused
Failed to get logs for machine clusterclass-changes-dbjap4-md-0-kgmcg-7cc684cd5c-hkkzg, cluster clusterclass-changes-ni63mi/clusterclass-changes-dbjap4: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "clusterclass-changes-ni63mi" namespace
STEP: Deleting cluster clusterclass-changes-ni63mi/clusterclass-changes-dbjap4
STEP: Deleting cluster clusterclass-changes-dbjap4
INFO: Waiting for the Cluster clusterclass-changes-ni63mi/clusterclass-changes-dbjap4 to be deleted
STEP: Waiting for cluster clusterclass-changes-dbjap4 to be deleted
STEP: Deleting namespace used for hosting the "clusterclass-changes" test spec
... skipping 136 lines ...
STEP: Waiting for deployment node-drain-vqtjzo-unevictable-workload/unevictable-pod-1cb to be available
STEP: Scale down the controlplane of the workload cluster and make sure that nodes running workload can be deleted even the draining process is blocked.
INFO: Scaling controlplane node-drain-vqtjzo/node-drain-fc9mnh from 3 to 1 replicas
INFO: Waiting for correct number of replicas to exist
STEP: PASSED!
STEP: Dumping logs from the "node-drain-fc9mnh" workload cluster
Failed to get logs for machine node-drain-fc9mnh-2bgzp, cluster node-drain-vqtjzo/node-drain-fc9mnh: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "node-drain-vqtjzo" namespace
STEP: Deleting cluster node-drain-vqtjzo/node-drain-fc9mnh
STEP: Deleting cluster node-drain-fc9mnh
INFO: Waiting for the Cluster node-drain-vqtjzo/node-drain-fc9mnh to be deleted
STEP: Waiting for cluster node-drain-fc9mnh to be deleted
STEP: Deleting namespace used for hosting the "node-drain" test spec
... skipping 50 lines ...
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Checking all the machines controlled by quick-start-v5b5g7-md-0-dph5s are in the "<None>" failure domain
INFO: Waiting for the machine pools to be provisioned
STEP: PASSED!
STEP: Dumping logs from the "quick-start-v5b5g7" workload cluster
Failed to get logs for machine quick-start-v5b5g7-md-0-dph5s-5f5686c9ff-67ltl, cluster quick-start-jigno3/quick-start-v5b5g7: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine quick-start-v5b5g7-qc9lw-nvlsw, cluster quick-start-jigno3/quick-start-v5b5g7: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "quick-start-jigno3" namespace
STEP: Deleting cluster quick-start-jigno3/quick-start-v5b5g7
STEP: Deleting cluster quick-start-v5b5g7
INFO: Waiting for the Cluster quick-start-jigno3/quick-start-v5b5g7 to be deleted
STEP: Waiting for cluster quick-start-v5b5g7 to be deleted
STEP: Deleting namespace used for hosting the "quick-start" test spec
... skipping 58 lines ...
Patching MachineHealthCheck unhealthy condition to one of the nodes
INFO: Patching the node condition to the node
Waiting for remediation
Waiting until the node with unhealthy node condition is remediated
STEP: PASSED!
STEP: Dumping logs from the "mhc-remediation-3cgavr" workload cluster
Failed to get logs for machine mhc-remediation-3cgavr-gbtng, cluster mhc-remediation-x5vuag/mhc-remediation-3cgavr: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine mhc-remediation-3cgavr-md-0-85b786fcc7-7m5gz, cluster mhc-remediation-x5vuag/mhc-remediation-3cgavr: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "mhc-remediation-x5vuag" namespace
STEP: Deleting cluster mhc-remediation-x5vuag/mhc-remediation-3cgavr
STEP: Deleting cluster mhc-remediation-3cgavr
INFO: Waiting for the Cluster mhc-remediation-x5vuag/mhc-remediation-3cgavr to be deleted
STEP: Waiting for cluster mhc-remediation-3cgavr to be deleted
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
... skipping 60 lines ...
Patching MachineHealthCheck unhealthy condition to one of the nodes
INFO: Patching the node condition to the node
Waiting for remediation
Waiting until the node with unhealthy node condition is remediated
STEP: PASSED!
STEP: Dumping logs from the "mhc-remediation-4c99es" workload cluster
Failed to get logs for machine mhc-remediation-4c99es-j2wpd, cluster mhc-remediation-1e1ffv/mhc-remediation-4c99es: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine mhc-remediation-4c99es-md-0-5d679c6d89-cfxdc, cluster mhc-remediation-1e1ffv/mhc-remediation-4c99es: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine mhc-remediation-4c99es-q4bws, cluster mhc-remediation-1e1ffv/mhc-remediation-4c99es: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine mhc-remediation-4c99es-z9cb6, cluster mhc-remediation-1e1ffv/mhc-remediation-4c99es: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "mhc-remediation-1e1ffv" namespace
STEP: Deleting cluster mhc-remediation-1e1ffv/mhc-remediation-4c99es
STEP: Deleting cluster mhc-remediation-4c99es
INFO: Waiting for the Cluster mhc-remediation-1e1ffv/mhc-remediation-4c99es to be deleted
STEP: Waiting for cluster mhc-remediation-4c99es to be deleted
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
... skipping 70 lines ...
JUnit report was created: /logs/artifacts/junit.e2e_suite.1.xml

Summarizing 1 Failure:

[Fail] Cluster creation with storage policy [It] should create a cluster successfully
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.2/framework/controlplane_helpers.go:153

Ran 14 of 17 Specs in 5779.590 seconds
FAIL! -- 13 Passed | 1 Failed | 0 Pending | 3 Skipped
--- FAIL: TestE2E (5779.60s)
FAIL

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes. A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021. Please give the RC a try and send us feedback!
- To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 7 lines ...
To silence deprecations that can be silenced set the following environment variable: ACK_GINKGO_DEPRECATIONS=1.16.5

Ginkgo ran 1 suite in 1h37m20.606266803s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes. A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021. Please give the RC a try and send us feedback!
- To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 3 lines ...
To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc

real	97m20.616s
user	6m17.826s
sys	1m25.154s
make: *** [Makefile:183: e2e] Error 1
Releasing IP claims
ipclaim.ipam.metal3.io "ip-claim-0ad0aee5c2e15fde16e754f42cf8ee7f9781afb8" deleted
ipclaim.ipam.metal3.io "workload-ip-claim-d4bcb4f81addc2e22bcfc0c1fdd825df4791b0a8" deleted
vpn
WARNING: [capv-prow@cluster-api-provider-vsphere.iam.gserviceaccount.com] appears to be a service account. Service account tokens cannot be revoked, but they will expire automatically. To prevent use of the service account token earlier than the expiration, delete or disable the parent service account. To explicitly delete the key associated with the service account use `gcloud iam service-accounts keys delete` instead.
Revoked credentials:
... skipping 13 lines ...
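Both Ginkgo banners above are informational and can be silenced in CI exactly as the notices suggest, for example:

    # Values taken verbatim from the notices in this log.
    export ACK_GINKGO_DEPRECATIONS=1.16.5   # silence v1 deprecation warnings
    export ACK_GINKGO_RC=true               # silence the 2.0 release-candidate banner
    # or, equivalently for the RC banner:
    touch "$HOME/.ack-ginkgo-rc"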