Result   | FAILURE
Tests    | 1 failed / 13 succeeded
Started  |
Elapsed  | 1h41m
Revision | release-1.5
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capv\-e2e\sCluster\screation\swith\sstorage\spolicy\sshould\screate\sa\scluster\ssuccessfully$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/storage_policy_test.go:57
Timed out after 600.001s.
No Control Plane machines came into existence.
Expected
    <bool>: false
to be true
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.2/framework/controlplane_helpers.go:153
from junit.e2e_suite.1.xml
STEP: Creating a namespace for hosting the "capv-e2e" test spec
INFO: Creating namespace capv-e2e-tfciy0
INFO: Creating event watcher for namespace "capv-e2e-tfciy0"
STEP: creating a workload cluster
INFO: Creating the workload cluster with name "storage-policy-eaemtt" using the "storage-policy" template (Kubernetes v1.23.5, 1 control-plane machines, 0 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster storage-policy-eaemtt --infrastructure (default) --kubernetes-version v1.23.5 --control-plane-machine-count 1 --worker-machine-count 0 --flavor storage-policy
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by capv-e2e-tfciy0/storage-policy-eaemtt to be provisioned
STEP: Waiting for one control plane node to exist
STEP: Dumping all the Cluster API resources in the "capv-e2e-tfciy0" namespace
STEP: cleaning up namespace: capv-e2e-tfciy0
STEP: Deleting cluster storage-policy-eaemtt
INFO: Waiting for the Cluster capv-e2e-tfciy0/storage-policy-eaemtt to be deleted
STEP: Waiting for cluster storage-policy-eaemtt to be deleted
STEP: Deleting namespace used for hosting test spec
INFO: Deleting namespace capv-e2e-tfciy0
capv-e2e Cluster Creation using Cluster API quick-start test [PR-Blocking] Should create a workload cluster
capv-e2e Cluster creation with [Ignition] bootstrap [PR-Blocking] Should create a workload cluster
capv-e2e Cluster creation with anti affined nodes should create a cluster with anti-affined nodes
capv-e2e ClusterAPI Machine Deployment Tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capv-e2e ClusterClass Creation using Cluster API quick-start test [PR-Blocking] [ClusterClass] Should create a workload cluster
capv-e2e DHCPOverrides configuration test when Creating a cluster with DHCPOverrides configured Only configures the network with the provided nameservers
capv-e2e Hardware version upgrade creates a cluster with VM hardware versions upgraded
capv-e2e Label nodes with ESXi host info creates a workload cluster whose nodes have the ESXi host info
capv-e2e When testing ClusterClass changes [ClusterClass] Should successfully rollout the managed topology upon changes to the ClusterClass
capv-e2e When testing MachineDeployment scale out/in Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capv-e2e When testing node drain timeout A node should be forcefully removed if it cannot be drained in time
capv-e2e When testing unhealthy machines remediation Should successfully trigger KCP remediation
capv-e2e When testing unhealthy machines remediation Should successfully trigger machine deployment remediation
capv-e2e Cluster creation with GPU devices as PCI passthrough [specialized-infra] should create the cluster with worker nodes having GPU cards added as PCI passthrough devices
capv-e2e ClusterAPI Upgrade Tests [clusterctl-Upgrade] Upgrading cluster from v1alpha4 to v1beta1 using clusterctl Should create a management cluster and then upgrade all the providers
capv-e2e When testing K8S conformance [Conformance] Should create a workload cluster and run kubetest
... skipping 656 lines ...
INFO: Waiting for correct number of replicas to exist
STEP: Scaling the MachineDeployment down to 1
INFO: Scaling machine deployment md-scale-hv7o4l/md-scale-omkpeb-md-0 from 3 to 1 replicas
INFO: Waiting for correct number of replicas to exist
STEP: PASSED!
STEP: Dumping logs from the "md-scale-omkpeb" workload cluster
Failed to get logs for machine md-scale-omkpeb-9f4wl, cluster md-scale-hv7o4l/md-scale-omkpeb: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine md-scale-omkpeb-md-0-5644577f5d-txz5b, cluster md-scale-hv7o4l/md-scale-omkpeb: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "md-scale-hv7o4l" namespace
STEP: Deleting cluster md-scale-hv7o4l/md-scale-omkpeb
STEP: Deleting cluster md-scale-omkpeb
INFO: Waiting for the Cluster md-scale-hv7o4l/md-scale-omkpeb to be deleted
STEP: Waiting for cluster md-scale-omkpeb to be deleted
STEP: Deleting namespace used for hosting the "md-scale" test spec
... skipping 184 lines ...
INFO: Waiting for MachineDeployment rollout for MachineDeploymentTopology "md-0" (class "quick-start-worker") to complete.
STEP: Rebasing the Cluster to a ClusterClass with a modified label for MachineDeployments and wait for changes to be applied to the MachineDeployment objects
INFO: Waiting for MachineDeployment rollout to complete.
INFO: Waiting for MachineDeployment rollout for MachineDeploymentTopology "md-0" (class "quick-start-worker") to complete.
STEP: PASSED!
STEP: Dumping logs from the "clusterclass-changes-21786g" workload cluster
Failed to get logs for machine clusterclass-changes-21786g-kxh2m-zc9h6, cluster clusterclass-changes-vm3z1m/clusterclass-changes-21786g: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine clusterclass-changes-21786g-md-0-9d6gx-5fc6b8bfb8-9fk2t, cluster clusterclass-changes-vm3z1m/clusterclass-changes-21786g: dialing host IP address at : dial tcp :22: connect: connection refused
Failed to get logs for machine clusterclass-changes-21786g-md-0-9d6gx-69776b6dcc-r4mms, cluster clusterclass-changes-vm3z1m/clusterclass-changes-21786g: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "clusterclass-changes-vm3z1m" namespace
STEP: Deleting cluster clusterclass-changes-vm3z1m/clusterclass-changes-21786g
STEP: Deleting cluster clusterclass-changes-21786g
INFO: Waiting for the Cluster clusterclass-changes-vm3z1m/clusterclass-changes-21786g to be deleted
STEP: Waiting for cluster clusterclass-changes-21786g to be deleted
STEP: Deleting namespace used for hosting the "clusterclass-changes" test spec
... skipping 50 lines ...
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Checking all the machines controlled by quick-start-fd22wd-md-0 are in the "<None>" failure domain
INFO: Waiting for the machine pools to be provisioned
STEP: PASSED!
STEP: Dumping logs from the "quick-start-fd22wd" workload cluster
Failed to get logs for machine quick-start-fd22wd-2h2jb, cluster quick-start-lq0yyq/quick-start-fd22wd: dialing host IP address at 192.168.6.18: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Failed to get logs for machine quick-start-fd22wd-md-0-5f7b469ff-5fddt, cluster quick-start-lq0yyq/quick-start-fd22wd: dialing host IP address at 192.168.6.93: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
STEP: Dumping all the Cluster API resources in the "quick-start-lq0yyq" namespace
STEP: Deleting cluster quick-start-lq0yyq/quick-start-fd22wd
STEP: Deleting cluster quick-start-fd22wd
INFO: Waiting for the Cluster quick-start-lq0yyq/quick-start-fd22wd to be deleted
STEP: Waiting for cluster quick-start-fd22wd to be deleted
STEP: Deleting namespace used for hosting the "quick-start" test spec
... skipping 50 lines ...
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Checking all the machines controlled by quick-start-8k09yf-md-0-ljmvl are in the "<None>" failure domain
INFO: Waiting for the machine pools to be provisioned
STEP: PASSED!
STEP: Dumping logs from the "quick-start-8k09yf" workload cluster
Failed to get logs for machine quick-start-8k09yf-md-0-ljmvl-d57494588-stmhc, cluster quick-start-v4mzxm/quick-start-8k09yf: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine quick-start-8k09yf-z8tqj-pmrx6, cluster quick-start-v4mzxm/quick-start-8k09yf: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "quick-start-v4mzxm" namespace
STEP: Deleting cluster quick-start-v4mzxm/quick-start-8k09yf
STEP: Deleting cluster quick-start-8k09yf
INFO: Waiting for the Cluster quick-start-v4mzxm/quick-start-8k09yf to be deleted
STEP: Waiting for cluster quick-start-8k09yf to be deleted
STEP: Deleting namespace used for hosting the "quick-start" test spec
... skipping 60 lines ...
Patching MachineHealthCheck unhealthy condition to one of the nodes
INFO: Patching the node condition to the node
Waiting for remediation
Waiting until the node with unhealthy node condition is remediated
STEP: PASSED!
STEP: Dumping logs from the "mhc-remediation-kq7sga" workload cluster
Failed to get logs for machine mhc-remediation-kq7sga-72mnj, cluster mhc-remediation-qqejez/mhc-remediation-kq7sga: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine mhc-remediation-kq7sga-md-0-8584b95d89-c7s5s, cluster mhc-remediation-qqejez/mhc-remediation-kq7sga: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "mhc-remediation-qqejez" namespace
STEP: Deleting cluster mhc-remediation-qqejez/mhc-remediation-kq7sga
STEP: Deleting cluster mhc-remediation-kq7sga
INFO: Waiting for the Cluster mhc-remediation-qqejez/mhc-remediation-kq7sga to be deleted
STEP: Waiting for cluster mhc-remediation-kq7sga to be deleted
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
... skipping 60 lines ...
Patching MachineHealthCheck unhealthy condition to one of the nodes
INFO: Patching the node condition to the node
Waiting for remediation
Waiting until the node with unhealthy node condition is remediated
STEP: PASSED!
STEP: Dumping logs from the "mhc-remediation-krub5m" workload cluster
Failed to get logs for machine mhc-remediation-krub5m-md-0-6bf7565d7d-2trpm, cluster mhc-remediation-amj9ox/mhc-remediation-krub5m: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine mhc-remediation-krub5m-rfvxp, cluster mhc-remediation-amj9ox/mhc-remediation-krub5m: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine mhc-remediation-krub5m-swhd2, cluster mhc-remediation-amj9ox/mhc-remediation-krub5m: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine mhc-remediation-krub5m-txnnr, cluster mhc-remediation-amj9ox/mhc-remediation-krub5m: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "mhc-remediation-amj9ox" namespace
STEP: Deleting cluster mhc-remediation-amj9ox/mhc-remediation-krub5m
STEP: Deleting cluster mhc-remediation-krub5m
INFO: Waiting for the Cluster mhc-remediation-amj9ox/mhc-remediation-krub5m to be deleted
STEP: Waiting for cluster mhc-remediation-krub5m to be deleted
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
... skipping 64 lines ...
STEP: Waiting for deployment node-drain-7a4eoo-unevictable-workload/unevictable-pod-i2j to be available
STEP: Scale down the controlplane of the workload cluster and make sure that nodes running workload can be deleted even the draining process is blocked.
INFO: Scaling controlplane node-drain-7a4eoo/node-drain-btl3ju from 3 to 1 replicas
INFO: Waiting for correct number of replicas to exist
STEP: PASSED!
STEP: Dumping logs from the "node-drain-btl3ju" workload cluster
Failed to get logs for machine node-drain-btl3ju-srqwm, cluster node-drain-7a4eoo/node-drain-btl3ju: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "node-drain-7a4eoo" namespace
STEP: Deleting cluster node-drain-7a4eoo/node-drain-btl3ju
STEP: Deleting cluster node-drain-btl3ju
INFO: Waiting for the Cluster node-drain-7a4eoo/node-drain-btl3ju to be deleted
STEP: Waiting for cluster node-drain-btl3ju to be deleted
STEP: Deleting namespace used for hosting the "node-drain" test spec
... skipping 56 lines ...
INFO: Waiting for rolling upgrade to start.
INFO: Waiting for MachineDeployment rolling upgrade to start
INFO: Waiting for rolling upgrade to complete.
INFO: Waiting for MachineDeployment rolling upgrade to complete
STEP: PASSED!
STEP: Dumping logs from the "md-rollout-p924fq" workload cluster
Failed to get logs for machine md-rollout-p924fq-md-0-c7b5bb9c4-5gs58, cluster md-rollout-vwjk0a/md-rollout-p924fq: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine md-rollout-p924fq-zzbl6, cluster md-rollout-vwjk0a/md-rollout-p924fq: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "md-rollout-vwjk0a" namespace
STEP: Deleting cluster md-rollout-vwjk0a/md-rollout-p924fq
STEP: Deleting cluster md-rollout-p924fq
INFO: Waiting for the Cluster md-rollout-vwjk0a/md-rollout-p924fq to be deleted
STEP: Waiting for cluster md-rollout-p924fq to be deleted
STEP: Deleting namespace used for hosting the "md-rollout" test spec
... skipping 52 lines ...
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Checking all the machines controlled by quick-start-10k7ly-md-0 are in the "<None>" failure domain
INFO: Waiting for the machine pools to be provisioned
STEP: PASSED!
STEP: Dumping logs from the "quick-start-10k7ly" workload cluster
Failed to get logs for machine quick-start-10k7ly-md-0-5c46cf7b7d-bg7w2, cluster quick-start-6pvagi/quick-start-10k7ly: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine quick-start-10k7ly-qph8c, cluster quick-start-6pvagi/quick-start-10k7ly: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "quick-start-6pvagi" namespace
STEP: Deleting cluster quick-start-6pvagi/quick-start-10k7ly
STEP: Deleting cluster quick-start-10k7ly
INFO: Waiting for the Cluster quick-start-6pvagi/quick-start-10k7ly to be deleted
STEP: Waiting for cluster quick-start-10k7ly to be deleted
STEP: Deleting namespace used for hosting the "quick-start" test spec
... skipping 145 lines ...
JUnit report was created: /logs/artifacts/junit.e2e_suite.1.xml

Summarizing 1 Failure:

[Fail] Cluster creation with storage policy [It] should create a cluster successfully
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.2/framework/controlplane_helpers.go:153

Ran 14 of 17 Specs in 5774.134 seconds
FAIL! -- 13 Passed | 1 Failed | 0 Pending | 3 Skipped
--- FAIL: TestE2E (5774.15s)
FAIL

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021. Please give the RC a try and send us feedback!
- To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 7 lines ...
To silence deprecations that can be silenced set the following environment variable:
ACK_GINKGO_DEPRECATIONS=1.16.5

Ginkgo ran 1 suite in 1h37m17.671753391s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021. Please give the RC a try and send us feedback!
- To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 3 lines ...
To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc

real	97m17.680s
user	6m8.445s
sys	1m18.984s
make: *** [Makefile:183: e2e] Error 1
Releasing IP claims
ipclaim.ipam.metal3.io "ip-claim-92f7b9f6d7e769b6e741ef003acc6074db7b486c" deleted
ipclaim.ipam.metal3.io "workload-ip-claim-900069800bbb587f3ae826b6c47f3c2323b44384" deleted
vpn
WARNING: [capv-prow@cluster-api-provider-vsphere.iam.gserviceaccount.com] appears to be a service account. Service account tokens cannot be revoked, but they will expire automatically. To prevent use of the service account token earlier than the expiration, delete or disable the parent service account. To explicitly delete the key associated with the service account use `gcloud iam service-accounts keys delete` instead.
Revoked credentials:
... skipping 13 lines ...