Result   | FAILURE
Tests    | 1 failed / 13 succeeded
Started  |
Elapsed  | 1h37m
Revision | release-1.5
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capv\-e2e\sCluster\screation\swith\sstorage\spolicy\sshould\screate\sa\scluster\ssuccessfully$'
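For readability, the escaped --ginkgo.focus regex above (\s matches a space, \- a literal hyphen, $ anchors the end of the name) selects exactly this spec:

capv-e2e Cluster creation with storage policy should create a cluster successfully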
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/storage_policy_test.go:57
Timed out after 600.001s.
No Control Plane machines came into existence.
Expected
    <bool>: false
to be true
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.2/framework/controlplane_helpers.go:153
(from junit.e2e_suite.1.xml)
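The failing assertion lives in the upstream Cluster API test framework (controlplane_helpers.go:153), which polls the management cluster until a Machine carrying the control-plane label exists for the workload cluster. A minimal Go sketch of that wait pattern — not the framework's exact code; mgmtClient and cluster are assumed to come from the test setup, and the 10m timeout mirrors the 600s budget seen above:

package e2e

import (
	"context"

	. "github.com/onsi/gomega"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// waitForOneControlPlaneMachine polls until at least one control-plane
// Machine exists for the given workload cluster, or the timeout expires
// with the "No Control Plane machines came into existence." failure.
func waitForOneControlPlaneMachine(ctx context.Context, mgmtClient client.Client, cluster *clusterv1.Cluster) {
	Eventually(func() (bool, error) {
		machines := &clusterv1.MachineList{}
		err := mgmtClient.List(ctx, machines,
			client.InNamespace(cluster.Namespace),
			client.MatchingLabels{
				clusterv1.ClusterLabelName:             cluster.Name, // "cluster.x-k8s.io/cluster-name"
				clusterv1.MachineControlPlaneLabelName: "",           // "cluster.x-k8s.io/control-plane"
			})
		return len(machines.Items) > 0, err
	}, "10m", "10s").Should(BeTrue(), "No Control Plane machines came into existence.")
}

In this run no such Machine appeared within the budget, so the poll kept returning false until Eventually gave up.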
STEP: Creating a namespace for hosting the "capv-e2e" test spec
INFO: Creating namespace capv-e2e-ecgswx
INFO: Creating event watcher for namespace "capv-e2e-ecgswx"
STEP: creating a workload cluster
INFO: Creating the workload cluster with name "storage-policy-l8rrt8" using the "storage-policy" template (Kubernetes v1.23.5, 1 control-plane machines, 0 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster storage-policy-l8rrt8 --infrastructure (default) --kubernetes-version v1.23.5 --control-plane-machine-count 1 --worker-machine-count 0 --flavor storage-policy
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by capv-e2e-ecgswx/storage-policy-l8rrt8 to be provisioned
STEP: Waiting for one control plane node to exist
STEP: Dumping all the Cluster API resources in the "capv-e2e-ecgswx" namespace
STEP: cleaning up namespace: capv-e2e-ecgswx
STEP: Deleting cluster storage-policy-l8rrt8
INFO: Waiting for the Cluster capv-e2e-ecgswx/storage-policy-l8rrt8 to be deleted
STEP: Waiting for cluster storage-policy-l8rrt8 to be deleted
STEP: Deleting namespace used for hosting test spec
INFO: Deleting namespace capv-e2e-ecgswx
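To inspect the template this spec applied, the clusterctl invocation logged above can be rerun locally. A sketch, assuming clusterctl is configured for the vSphere provider (the log's --infrastructure (default) resolves to whichever provider the management cluster has installed):

clusterctl config cluster storage-policy-l8rrt8 \
  --infrastructure vsphere \
  --kubernetes-version v1.23.5 \
  --control-plane-machine-count 1 \
  --worker-machine-count 0 \
  --flavor storage-policy > storage-policy-cluster.yaml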
capv-e2e Cluster Creation using Cluster API quick-start test [PR-Blocking] Should create a workload cluster
capv-e2e Cluster creation with [Ignition] bootstrap [PR-Blocking] Should create a workload cluster
capv-e2e Cluster creation with anti affined nodes should create a cluster with anti-affined nodes
capv-e2e ClusterAPI Machine Deployment Tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capv-e2e ClusterClass Creation using Cluster API quick-start test [PR-Blocking] [ClusterClass] Should create a workload cluster
capv-e2e DHCPOverrides configuration test when Creating a cluster with DHCPOverrides configured Only configures the network with the provided nameservers
capv-e2e Hardware version upgrade creates a cluster with VM hardware versions upgraded
capv-e2e Label nodes with ESXi host info creates a workload cluster whose nodes have the ESXi host info
capv-e2e When testing ClusterClass changes [ClusterClass] Should successfully rollout the managed topology upon changes to the ClusterClass
capv-e2e When testing MachineDeployment scale out/in Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capv-e2e When testing node drain timeout A node should be forcefully removed if it cannot be drained in time
capv-e2e When testing unhealthy machines remediation Should successfully trigger KCP remediation
capv-e2e When testing unhealthy machines remediation Should successfully trigger machine deployment remediation
capv-e2e Cluster creation with GPU devices as PCI passthrough [specialized-infra] should create the cluster with worker nodes having GPU cards added as PCI passthrough devices
capv-e2e ClusterAPI Upgrade Tests [clusterctl-Upgrade] Upgrading cluster from v1alpha4 to v1beta1 using clusterctl Should create a management cluster and then upgrade all the providers
capv-e2e When testing K8S conformance [Conformance] Should create a workload cluster and run kubetest
... skipping 625 lines ...
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Checking all the machines controlled by quick-start-lm49vd-md-0-h5pkz are in the "<None>" failure domain
INFO: Waiting for the machine pools to be provisioned
STEP: PASSED!
STEP: Dumping logs from the "quick-start-lm49vd" workload cluster
Failed to get logs for machine quick-start-lm49vd-9tbqt-b79n5, cluster quick-start-rlvpc4/quick-start-lm49vd: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine quick-start-lm49vd-md-0-h5pkz-5cc9c85b58-7d94s, cluster quick-start-rlvpc4/quick-start-lm49vd: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "quick-start-rlvpc4" namespace
STEP: Deleting cluster quick-start-rlvpc4/quick-start-lm49vd
STEP: Deleting cluster quick-start-lm49vd
INFO: Waiting for the Cluster quick-start-rlvpc4/quick-start-lm49vd to be deleted
STEP: Waiting for cluster quick-start-lm49vd to be deleted
STEP: Deleting namespace used for hosting the "quick-start" test spec
... skipping 119 lines ...
INFO: Waiting for correct number of replicas to exist
STEP: Scaling the MachineDeployment down to 1
INFO: Scaling machine deployment md-scale-9bxhqa/md-scale-zdo4ox-md-0 from 3 to 1 replicas
INFO: Waiting for correct number of replicas to exist
STEP: PASSED!
STEP: Dumping logs from the "md-scale-zdo4ox" workload cluster
Failed to get logs for machine md-scale-zdo4ox-5blm8, cluster md-scale-9bxhqa/md-scale-zdo4ox: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine md-scale-zdo4ox-md-0-6b69b8776-ttqqd, cluster md-scale-9bxhqa/md-scale-zdo4ox: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "md-scale-9bxhqa" namespace
STEP: Deleting cluster md-scale-9bxhqa/md-scale-zdo4ox
STEP: Deleting cluster md-scale-zdo4ox
INFO: Waiting for the Cluster md-scale-9bxhqa/md-scale-zdo4ox to be deleted
STEP: Waiting for cluster md-scale-zdo4ox to be deleted
STEP: Deleting namespace used for hosting the "md-scale" test spec
... skipping 136 lines ...
INFO: Waiting for MachineDeployment rollout for MachineDeploymentTopology "md-0" (class "quick-start-worker") to complete.
STEP: Rebasing the Cluster to a ClusterClass with a modified label for MachineDeployments and wait for changes to be applied to the MachineDeployment objects
INFO: Waiting for MachineDeployment rollout to complete.
INFO: Waiting for MachineDeployment rollout for MachineDeploymentTopology "md-0" (class "quick-start-worker") to complete.
STEP: PASSED!
STEP: Dumping logs from the "clusterclass-changes-m9tg6j" workload cluster
Failed to get logs for machine clusterclass-changes-m9tg6j-2bpc4-zc8bb, cluster clusterclass-changes-lpnxux/clusterclass-changes-m9tg6j: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine clusterclass-changes-m9tg6j-md-0-p2x4j-574c444ff4-rzchp, cluster clusterclass-changes-lpnxux/clusterclass-changes-m9tg6j: dialing host IP address at : dial tcp :22: connect: connection refused
Failed to get logs for machine clusterclass-changes-m9tg6j-md-0-p2x4j-849757568c-4cgbf, cluster clusterclass-changes-lpnxux/clusterclass-changes-m9tg6j: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "clusterclass-changes-lpnxux" namespace
STEP: Deleting cluster clusterclass-changes-lpnxux/clusterclass-changes-m9tg6j
STEP: Deleting cluster clusterclass-changes-m9tg6j
INFO: Waiting for the Cluster clusterclass-changes-lpnxux/clusterclass-changes-m9tg6j to be deleted
STEP: Waiting for cluster clusterclass-changes-m9tg6j to be deleted
STEP: Deleting namespace used for hosting the "clusterclass-changes" test spec
... skipping 64 lines ...
STEP: Waiting for deployment node-drain-h96wfm-unevictable-workload/unevictable-pod-exz to be available
STEP: Scale down the controlplane of the workload cluster and make sure that nodes running workload can be deleted even the draining process is blocked.
INFO: Scaling controlplane node-drain-h96wfm/node-drain-oashyl from 3 to 1 replicas
INFO: Waiting for correct number of replicas to exist
STEP: PASSED!
STEP: Dumping logs from the "node-drain-oashyl" workload cluster
Failed to get logs for machine node-drain-oashyl-8bs28, cluster node-drain-h96wfm/node-drain-oashyl: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "node-drain-h96wfm" namespace
STEP: Deleting cluster node-drain-h96wfm/node-drain-oashyl
STEP: Deleting cluster node-drain-oashyl
INFO: Waiting for the Cluster node-drain-h96wfm/node-drain-oashyl to be deleted
STEP: Waiting for cluster node-drain-oashyl to be deleted
STEP: Deleting namespace used for hosting the "node-drain" test spec
... skipping 122 lines ...
Patching MachineHealthCheck unhealthy condition to one of the nodes
INFO: Patching the node condition to the node
Waiting for remediation
Waiting until the node with unhealthy node condition is remediated
STEP: PASSED!
STEP: Dumping logs from the "mhc-remediation-kumix4" workload cluster
Failed to get logs for machine mhc-remediation-kumix4-md-0-7dfc954b77-875rp, cluster mhc-remediation-tum54h/mhc-remediation-kumix4: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine mhc-remediation-kumix4-qb99l, cluster mhc-remediation-tum54h/mhc-remediation-kumix4: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "mhc-remediation-tum54h" namespace
STEP: Deleting cluster mhc-remediation-tum54h/mhc-remediation-kumix4
STEP: Deleting cluster mhc-remediation-kumix4
INFO: Waiting for the Cluster mhc-remediation-tum54h/mhc-remediation-kumix4 to be deleted
STEP: Waiting for cluster mhc-remediation-kumix4 to be deleted
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
... skipping 60 lines ...
Patching MachineHealthCheck unhealthy condition to one of the nodes
INFO: Patching the node condition to the node
Waiting for remediation
Waiting until the node with unhealthy node condition is remediated
STEP: PASSED!
STEP: Dumping logs from the "mhc-remediation-n8o1uc" workload cluster
Failed to get logs for machine mhc-remediation-n8o1uc-csqv8, cluster mhc-remediation-2fx9f0/mhc-remediation-n8o1uc: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine mhc-remediation-n8o1uc-md-0-64d679fb9b-gf9vq, cluster mhc-remediation-2fx9f0/mhc-remediation-n8o1uc: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine mhc-remediation-n8o1uc-t2kfn, cluster mhc-remediation-2fx9f0/mhc-remediation-n8o1uc: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine mhc-remediation-n8o1uc-zzjph, cluster mhc-remediation-2fx9f0/mhc-remediation-n8o1uc: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "mhc-remediation-2fx9f0" namespace
STEP: Deleting cluster mhc-remediation-2fx9f0/mhc-remediation-n8o1uc
STEP: Deleting cluster mhc-remediation-n8o1uc
INFO: Waiting for the Cluster mhc-remediation-2fx9f0/mhc-remediation-n8o1uc to be deleted
STEP: Waiting for cluster mhc-remediation-n8o1uc to be deleted
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
... skipping 56 lines ...
INFO: Waiting for rolling upgrade to start.
INFO: Waiting for MachineDeployment rolling upgrade to start
INFO: Waiting for rolling upgrade to complete.
INFO: Waiting for MachineDeployment rolling upgrade to complete
STEP: PASSED!
STEP: Dumping logs from the "md-rollout-n2ycfr" workload cluster
Failed to get logs for machine md-rollout-n2ycfr-67hpf, cluster md-rollout-7pznfo/md-rollout-n2ycfr: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine md-rollout-n2ycfr-md-0-8494889fd5-vjwms, cluster md-rollout-7pznfo/md-rollout-n2ycfr: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "md-rollout-7pznfo" namespace
STEP: Deleting cluster md-rollout-7pznfo/md-rollout-n2ycfr
STEP: Deleting cluster md-rollout-n2ycfr
INFO: Waiting for the Cluster md-rollout-7pznfo/md-rollout-n2ycfr to be deleted
STEP: Waiting for cluster md-rollout-n2ycfr to be deleted
STEP: Deleting namespace used for hosting the "md-rollout" test spec
... skipping 52 lines ...
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Checking all the machines controlled by quick-start-k52q1w-md-0 are in the "<None>" failure domain
INFO: Waiting for the machine pools to be provisioned
STEP: PASSED!
STEP: Dumping logs from the "quick-start-k52q1w" workload cluster
Failed to get logs for machine quick-start-k52q1w-md-0-5d55cb9795-r7qfq, cluster quick-start-34yyic/quick-start-k52q1w: dialing host IP address at 192.168.6.42: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Failed to get logs for machine quick-start-k52q1w-sxhm5, cluster quick-start-34yyic/quick-start-k52q1w: dialing host IP address at 192.168.6.29: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
STEP: Dumping all the Cluster API resources in the "quick-start-34yyic" namespace
STEP: Deleting cluster quick-start-34yyic/quick-start-k52q1w
STEP: Deleting cluster quick-start-k52q1w
INFO: Waiting for the Cluster quick-start-34yyic/quick-start-k52q1w to be deleted
STEP: Waiting for cluster quick-start-k52q1w to be deleted
STEP: Deleting namespace used for hosting the "quick-start" test spec
... skipping 110 lines ...
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Checking all the machines controlled by quick-start-j31ish-md-0 are in the "<None>" failure domain
INFO: Waiting for the machine pools to be provisioned
STEP: PASSED!
STEP: Dumping logs from the "quick-start-j31ish" workload cluster
Failed to get logs for machine quick-start-j31ish-lpbm6, cluster quick-start-rv6hvz/quick-start-j31ish: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine quick-start-j31ish-md-0-586657dc5b-jvv46, cluster quick-start-rv6hvz/quick-start-j31ish: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "quick-start-rv6hvz" namespace
STEP: Deleting cluster quick-start-rv6hvz/quick-start-j31ish
STEP: Deleting cluster quick-start-j31ish
INFO: Waiting for the Cluster quick-start-rv6hvz/quick-start-j31ish to be deleted
STEP: Waiting for cluster quick-start-j31ish to be deleted
STEP: Deleting namespace used for hosting the "quick-start" test spec
... skipping 10 lines ...
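The recurring "Failed to get logs for machine ..." lines above all come from the artifact-collection step after each spec has already reported PASSED, not from the specs themselves. The collector dials each node on port 22 and runs cat /var/log/cloud-init-output.log; a non-zero remote exit surfaces as "Process exited with status 1", while unreachable or key-mismatched nodes produce the dial and handshake errors. A hedged Go sketch of that collection pattern (the user name and key handling are assumptions, not CAPV's exact collector):

package e2e

import (
	"fmt"

	"golang.org/x/crypto/ssh"
)

// fetchCloudInitLog mimics the log collector: SSH to a node and cat the
// cloud-init output log, wrapping errors the way they appear above.
func fetchCloudInitLog(addr string, signer ssh.Signer) ([]byte, error) {
	cfg := &ssh.ClientConfig{
		User:            "capv", // assumption; the real user comes from the cluster template
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only shortcut
	}
	// Unreachable nodes or rejected keys fail here, yielding the
	// "dial tcp :22: connect: connection refused" and
	// "ssh: handshake failed ... publickey" errors seen in the dumps.
	client, err := ssh.Dial("tcp", addr+":22", cfg)
	if err != nil {
		return nil, fmt.Errorf("dialing host IP address at %s: %w", addr, err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		return nil, err
	}
	defer sess.Close()

	// A missing file or any other non-zero remote exit code surfaces as
	// `running command "cat /var/log/cloud-init-output.log": Process exited with status 1`.
	out, err := sess.Output("cat /var/log/cloud-init-output.log")
	if err != nil {
		return nil, fmt.Errorf("running command %q: %w", "cat /var/log/cloud-init-output.log", err)
	}
	return out, nil
}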
JUnit report was created: /logs/artifacts/junit.e2e_suite.1.xml

Summarizing 1 Failure:

[Fail] Cluster creation with storage policy [It] should create a cluster successfully
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.2/framework/controlplane_helpers.go:153

Ran 14 of 17 Specs in 5512.278 seconds
FAIL! -- 13 Passed | 1 Failed | 0 Pending | 3 Skipped
--- FAIL: TestE2E (5512.29s)
FAIL

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes. A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021. Please give the RC a try and send us feedback!
- To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 7 lines ...
To silence deprecations that can be silenced set the following environment variable:
ACK_GINKGO_DEPRECATIONS=1.16.5

Ginkgo ran 1 suite in 1h32m46.274662628s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes. A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021. Please give the RC a try and send us feedback!
- To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 3 lines ...
To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc

real	92m46.284s
user	5m51.109s
sys	1m14.045s
make: *** [Makefile:183: e2e] Error 1
Releasing IP claims
ipclaim.ipam.metal3.io "ip-claim-54c7304e2e54ca2c9f6b10b822d08910324b1bdc" deleted
ipclaim.ipam.metal3.io "workload-ip-claim-81f4a9e778e7e4ada7f3ad0f6ec30b959d8889cc" deleted
vpn
WARNING: [capv-prow@cluster-api-provider-vsphere.iam.gserviceaccount.com] appears to be a service account. Service account tokens cannot be revoked, but they will expire automatically. To prevent use of the service account token earlier than the expiration, delete or disable the parent service account. To explicitly delete the key associated with the service account, use `gcloud iam service-accounts keys delete` instead.
Revoked credentials:
... skipping 13 lines ...