Result   | FAILURE
Tests    | 1 failed / 0 succeeded
Started  |
Elapsed  | 28m12s
Revision | release-0.5
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sConformance\sTests\sconformance\-tests$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:100
Timed out after 1200.001s.
Expected
    <bool>: false
to be true
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.8-0.20220215165403-0234afe87ffe/framework/controlplane_helpers.go:145
from junit.e2e_suite.1.xml
[BeforeEach] Conformance Tests
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:56
INFO: Cluster name is capz-conf-l4i6qm
STEP: Creating namespace "capz-conf-l4i6qm" for hosting the cluster
Jan 6 17:46:10.674: INFO: starting to create namespace for hosting the "capz-conf-l4i6qm" test spec
INFO: Creating namespace capz-conf-l4i6qm
INFO: Creating event watcher for namespace "capz-conf-l4i6qm"
[Measure] conformance-tests
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:100
INFO: Creating the workload cluster with name "capz-conf-l4i6qm" using the "(default)" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster capz-conf-l4i6qm --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 1 --worker-machine-count 2 --flavor (default)
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by capz-conf-l4i6qm/capz-conf-l4i6qm-control-plane to be provisioned
STEP: Waiting for one control plane node to exist
[AfterEach] Conformance Tests
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:198
STEP: Dumping logs from the "capz-conf-l4i6qm" workload cluster
STEP: Dumping workload cluster capz-conf-l4i6qm/capz-conf-l4i6qm logs
Jan 6 18:07:12.406: INFO: INFO: Collecting logs for node capz-conf-l4i6qm-control-plane-7729h in cluster capz-conf-l4i6qm in namespace capz-conf-l4i6qm
Jan 6 18:07:22.481: INFO: INFO: Collecting boot logs for AzureMachine capz-conf-l4i6qm-control-plane-7729h
Jan 6 18:07:23.537: INFO: INFO: Collecting logs for node capz-conf-l4i6qm-md-0-dqjf8 in cluster capz-conf-l4i6qm in namespace capz-conf-l4i6qm
Jan 6 18:07:26.948: INFO: INFO: Collecting boot logs for AzureMachine capz-conf-l4i6qm-md-0-dqjf8
STEP: Redacting sensitive information from logs
capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec in a HA cluster Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd
capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec in a HA cluster using scale in rollout Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd
capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec in a single control plane cluster Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd
capz-e2e Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capz-e2e Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e Running the Cluster API E2E tests Should adopt up-to-date control plane Machines without modification Should adopt up-to-date control plane Machines without modification
capz-e2e Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e Workload cluster creation Creating a GPU-enabled cluster with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating a VMSS cluster with a single control plane node and an AzureMachinePool with 2 nodes
capz-e2e Workload cluster creation Creating a Windows Enabled cluster With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
capz-e2e Workload cluster creation Creating a Windows enabled VMSS cluster with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
capz-e2e Workload cluster creation Creating a cluster that uses the external cloud provider with a 1 control plane nodes and 2 worker nodes
capz-e2e Workload cluster creation Creating a ipv6 control-plane cluster With ipv6 worker node
capz-e2e Workload cluster creation Creating an AKS cluster with a single control plane node and 1 node
capz-e2e Workload cluster creation With 3 control-plane nodes and 2 worker nodes
... skipping 387 lines ...
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:100
[BeforeEach] Conformance Tests
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:56
INFO: Cluster name is capz-conf-l4i6qm
STEP: Creating namespace "capz-conf-l4i6qm" for hosting the cluster
Jan 6 17:46:10.674: INFO: starting to create namespace for hosting the "capz-conf-l4i6qm" test spec
2023/01/06 17:46:10 failed trying to get namespace (capz-conf-l4i6qm): namespaces "capz-conf-l4i6qm" not found
INFO: Creating namespace capz-conf-l4i6qm
INFO: Creating event watcher for namespace "capz-conf-l4i6qm"
[Measure] conformance-tests
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:100
INFO: Creating the workload cluster with name "capz-conf-l4i6qm" using the "(default)" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 97 lines ...
JUnit report was created: /logs/artifacts/junit.e2e_suite.1.xml

Summarizing 1 Failure:

[Fail] Conformance Tests [Measurement] conformance-tests
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.8-0.20220215165403-0234afe87ffe/framework/controlplane_helpers.go:145

Ran 1 of 20 Specs in 1397.840 seconds
FAIL! -- 0 Passed | 1 Failed | 0 Pending | 19 Skipped
--- FAIL: TestE2E (1397.85s)
FAIL

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce (a small number of) breaking changes. To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/v2/docs/MIGRATING_TO_V2.md
To comment, chime in at https://github.com/onsi/ginkgo/issues/711
... skipping 7 lines ...
To silence deprecations that can be silenced set the following environment variable: ACK_GINKGO_DEPRECATIONS=1.16.4

Ginkgo ran 1 suite in 24m45.348420771s
Test Suite Failed
make[2]: *** [Makefile:173: test-e2e-run] Error 1
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[1]: *** [Makefile:189: test-e2e-local] Error 2
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:198: test-conformance] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...