Result   | FAILURE
Tests    | 2 failed / 12 succeeded
Started  |
Elapsed  | 1h42m
Revision | release-1.5
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capv\-e2e\sCluster\screation\swith\santi\saffined\snodes\sshould\screate\sa\scluster\swith\santi\-affined\snodes$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/anti_affinity_test.go:61
Expected <bool>: true to equal <bool>: false
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/anti_affinity_test.go:224
from junit.e2e_suite.1.xml
STEP: Creating a namespace for hosting the "anti-affinity-e2e" test spec
INFO: Creating namespace anti-affinity-e2e-lgk4ew
INFO: Creating event watcher for namespace "anti-affinity-e2e-lgk4ew"
STEP: creating a workload cluster with
INFO: Creating the workload cluster with name "anti-affinity-0vbu2k" using the "(default)" template (Kubernetes v1.23.5, 1 control-plane machines, 5 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster anti-affinity-0vbu2k --infrastructure (default) --kubernetes-version v1.23.5 --control-plane-machine-count 1 --worker-machine-count 5 --flavor (default)
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by anti-affinity-e2e-lgk4ew/anti-affinity-0vbu2k to be provisioned
STEP: Waiting for one control plane node to exist
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane anti-affinity-e2e-lgk4ew/anti-affinity-0vbu2k to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
STEP: Checking all the control plane machines are in the expected failure domains
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Checking all the machines controlled by anti-affinity-0vbu2k-md-0 are in the "<None>" failure domain
INFO: Waiting for the machine pools to be provisioned
STEP: checking for cluster module info on VSphereCluster object
STEP: verifying presence of cluster modules
STEP: verifying node anti-affinity for worker nodes
STEP: Scaling the MachineDeployment out to > 5 nodes
INFO: Scaling machine deployment anti-affinity-e2e-lgk4ew/anti-affinity-0vbu2k-md-0 from 5 to 7 replicas
INFO: Waiting for correct number of replicas to exist
STEP: Scaling the MachineDeployment down to 5 nodes
INFO: Scaling machine deployment anti-affinity-e2e-lgk4ew/anti-affinity-0vbu2k-md-0 from 7 to 5 replicas
INFO: Waiting for correct number of replicas to exist
STEP: worker nodes should be anti-affined again since enough hosts are available
STEP: Deleting the cluster anti-affinity-0vbu2k in namespace anti-affinity-e2e-lgk4ew
STEP: Deleting cluster anti-affinity-0vbu2k
INFO: Waiting for the Cluster anti-affinity-e2e-lgk4ew/anti-affinity-0vbu2k to be deleted
STEP: Waiting for cluster anti-affinity-0vbu2k to be deleted
STEP: confirming deletion of cluster module constructs
STEP: Dumping all the Cluster API resources in the "anti-affinity-e2e-lgk4ew" namespace
STEP: cleaning up namespace: anti-affinity-e2e-lgk4ew
STEP: Deleting namespace used for hosting test spec
INFO: Deleting namespace anti-affinity-e2e-lgk4ew
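The assertion at anti_affinity_test.go:224 reports a boolean that was expected to be false but was true, which matches a check of the form "no two worker VMs may share an ESXi host". A minimal sketch of that kind of anti-affinity check — the helper name and the placement map are hypothetical, not CAPV's actual test code:

```go
package main

import "fmt"

// vmsShareHost reports whether any two VMs in the placement map landed on
// the same ESXi host. An anti-affinity test would expect this to be false.
func vmsShareHost(vmHosts map[string]string) bool {
	seen := make(map[string]bool)
	for _, host := range vmHosts {
		if seen[host] {
			return true // two VMs share a host: anti-affinity violated
		}
		seen[host] = true
	}
	return false
}

func main() {
	// Hypothetical placement resembling the failed run: two workers on one host.
	placements := map[string]string{
		"worker-1": "esxi-host-a",
		"worker-2": "esxi-host-b",
		"worker-3": "esxi-host-a",
	}
	fmt.Println(vmsShareHost(placements)) // → true, so Expect(...).To(BeFalse()) fails
}
```

When the check returns true, a Gomega-style `Expect(shared).To(Equal(false))` produces exactly the "Expected true to equal false" message seen above.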
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capv\-e2e\sCluster\screation\swith\sstorage\spolicy\sshould\screate\sa\scluster\ssuccessfully$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/storage_policy_test.go:57
Timed out after 600.000s. No Control Plane machines came into existence.
Expected <bool>: false to be true
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.2/framework/controlplane_helpers.go:153
from junit.e2e_suite.1.xml
STEP: Creating a namespace for hosting the "capv-e2e" test spec
INFO: Creating namespace capv-e2e-sdcgq0
INFO: Creating event watcher for namespace "capv-e2e-sdcgq0"
STEP: creating a workload cluster
INFO: Creating the workload cluster with name "storage-policy-l5lzbg" using the "storage-policy" template (Kubernetes v1.23.5, 1 control-plane machines, 0 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster storage-policy-l5lzbg --infrastructure (default) --kubernetes-version v1.23.5 --control-plane-machine-count 1 --worker-machine-count 0 --flavor storage-policy
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by capv-e2e-sdcgq0/storage-policy-l5lzbg to be provisioned
STEP: Waiting for one control plane node to exist
STEP: Dumping all the Cluster API resources in the "capv-e2e-sdcgq0" namespace
STEP: cleaning up namespace: capv-e2e-sdcgq0
STEP: Deleting cluster storage-policy-l5lzbg
INFO: Waiting for the Cluster capv-e2e-sdcgq0/storage-policy-l5lzbg to be deleted
STEP: Waiting for cluster storage-policy-l5lzbg to be deleted
STEP: Deleting namespace used for hosting test spec
INFO: Deleting namespace capv-e2e-sdcgq0
Succeeded tests (12):

capv-e2e Cluster Creation using Cluster API quick-start test [PR-Blocking] Should create a workload cluster
capv-e2e Cluster creation with [Ignition] bootstrap [PR-Blocking] Should create a workload cluster
capv-e2e ClusterAPI Machine Deployment Tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capv-e2e ClusterClass Creation using Cluster API quick-start test [PR-Blocking] [ClusterClass] Should create a workload cluster
capv-e2e DHCPOverrides configuration test when Creating a cluster with DHCPOverrides configured Only configures the network with the provided nameservers
capv-e2e Hardware version upgrade creates a cluster with VM hardware versions upgraded
capv-e2e Label nodes with ESXi host info creates a workload cluster whose nodes have the ESXi host info
capv-e2e When testing ClusterClass changes [ClusterClass] Should successfully rollout the managed topology upon changes to the ClusterClass
capv-e2e When testing MachineDeployment scale out/in Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capv-e2e When testing node drain timeout A node should be forcefully removed if it cannot be drained in time
capv-e2e When testing unhealthy machines remediation Should successfully trigger KCP remediation
capv-e2e When testing unhealthy machines remediation Should successfully trigger machine deployment remediation
Skipped tests:

capv-e2e Cluster creation with GPU devices as PCI passthrough [specialized-infra] should create the cluster with worker nodes having GPU cards added as PCI passthrough devices
capv-e2e ClusterAPI Upgrade Tests [clusterctl-Upgrade] Upgrading cluster from v1alpha4 to v1beta1 using clusterctl Should create a management cluster and then upgrade all the providers
capv-e2e When testing K8S conformance [Conformance] Should create a workload cluster and run kubetest