PR | smarterclayton: wait: Introduce new methods that allow detection of context cancellation |
Result | FAILURE |
Tests | 1 failed / 0 succeeded |
Started | |
Elapsed | 1h7m |
Revision | 310e8e1a64416e2590f7b243f985c04c666cfcb1 |
Refs | 107826 |
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sConformance\sTests\sconformance\-tests$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:99
Timed out after 1200.004s.
{
  "metadata": { "creationTimestamp": null },
  "spec": {
    "version": "",
    "machineTemplate": { "metadata": {}, "infrastructureRef": {} },
    "kubeadmConfigSpec": {}
  },
  "status": {
    "replicas": 0,
    "updatedReplicas": 0,
    "readyReplicas": 0,
    "unavailableReplicas": 0,
    "initialized": false,
    "ready": false
  }
}
Expected <string>: KubeadmControlPlane to match fields: {
  .Status.Ready:
    Expected <bool>: false to be true
}
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.9/framework/controlplane_helpers.go:175
from junit.e2e_suite.1.xml
[BeforeEach] Conformance Tests
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:55
INFO: Cluster name is capz-conf-8xo8g4
STEP: Creating namespace "capz-conf-8xo8g4" for hosting the cluster
Jan 23 18:02:21.335: INFO: starting to create namespace for hosting the "capz-conf-8xo8g4" test spec
INFO: Creating namespace capz-conf-8xo8g4
INFO: Creating event watcher for namespace "capz-conf-8xo8g4"
[Measure] conformance-tests
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:99
INFO: Creating the workload cluster with name "capz-conf-8xo8g4" using the "conformance-presubmit-artifacts" template (Kubernetes v1.27.0-alpha.0.1206+a2785a496085e4, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster capz-conf-8xo8g4 --infrastructure (default) --kubernetes-version v1.27.0-alpha.0.1206+a2785a496085e4 --control-plane-machine-count 1 --worker-machine-count 2 --flavor conformance-presubmit-artifacts
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by capz-conf-8xo8g4/capz-conf-8xo8g4-control-plane to be provisioned
STEP: Waiting for one control plane node to exist
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane capz-conf-8xo8g4/capz-conf-8xo8g4-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
[AfterEach] Conformance Tests
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:237
Jan 23 18:28:25.697: INFO: FAILED!
Jan 23 18:28:25.697: INFO: Cleaning up after "Conformance Tests conformance-tests" spec
STEP: Dumping logs from the "capz-conf-8xo8g4" workload cluster
STEP: Dumping workload cluster capz-conf-8xo8g4/capz-conf-8xo8g4 logs
Jan 23 18:28:25.781: INFO: Collecting logs for Linux node capz-conf-8xo8g4-control-plane-smz9m in cluster capz-conf-8xo8g4 in namespace capz-conf-8xo8g4
Jan 23 18:28:36.554: INFO: Collecting boot logs for AzureMachine capz-conf-8xo8g4-control-plane-smz9m
Jan 23 18:28:37.695: INFO: Collecting logs for Linux node capz-conf-8xo8g4-md-0-rl2cl in cluster capz-conf-8xo8g4 in namespace capz-conf-8xo8g4
Jan 23 18:28:52.396: INFO: Collecting boot logs for AzureMachine capz-conf-8xo8g4-md-0-rl2cl
Jan 23 18:28:52.850: INFO: Collecting logs for Linux node capz-conf-8xo8g4-md-0-g6ln8 in cluster capz-conf-8xo8g4 in namespace capz-conf-8xo8g4
Jan 23 18:29:02.877: INFO: Collecting boot logs for AzureMachine capz-conf-8xo8g4-md-0-g6ln8
STEP: Dumping workload cluster capz-conf-8xo8g4/capz-conf-8xo8g4 kube-system pod logs
STEP: Fetching kube-system pod logs took 932.180241ms
STEP: Dumping workload cluster capz-conf-8xo8g4/capz-conf-8xo8g4 Azure activity log
STEP: Collecting events for Pod kube-system/kube-controller-manager-capz-conf-8xo8g4-control-plane-smz9m
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-conf-8xo8g4-control-plane-smz9m, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-conf-8xo8g4-control-plane-smz9m, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-apiserver-capz-conf-8xo8g4-control-plane-smz9m
STEP: Creating log watcher for controller kube-system/etcd-capz-conf-8xo8g4-control-plane-smz9m, container etcd
STEP: Collecting events for Pod kube-system/kube-scheduler-capz-conf-8xo8g4-control-plane-smz9m
STEP: Collecting events for Pod kube-system/etcd-capz-conf-8xo8g4-control-plane-smz9m
STEP: failed to find events of Pod "etcd-capz-conf-8xo8g4-control-plane-smz9m"
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-conf-8xo8g4-control-plane-smz9m, container kube-controller-manager
STEP: Error starting logs stream for pod kube-system/kube-controller-manager-capz-conf-8xo8g4-control-plane-smz9m, container kube-controller-manager: container "kube-controller-manager" in pod "kube-controller-manager-capz-conf-8xo8g4-control-plane-smz9m" is not available
STEP: Fetching activity logs took 4.621733012s
Jan 23 18:29:08.877: INFO: Dumping all the Cluster API resources in the "capz-conf-8xo8g4" namespace
Jan 23 18:29:09.865: INFO: Deleting all clusters in the capz-conf-8xo8g4 namespace
STEP: Deleting cluster capz-conf-8xo8g4
INFO: Waiting for the Cluster capz-conf-8xo8g4/capz-conf-8xo8g4 to be deleted
STEP: Waiting for cluster capz-conf-8xo8g4 to be deleted
Jan 23 18:37:00.395: INFO: Deleting namespace used for hosting the "conformance-tests" test spec
INFO: Deleting namespace capz-conf-8xo8g4
Jan 23 18:37:00.413: INFO: Checking if any resources are left over in Azure for spec "conformance-tests"
STEP: Redacting sensitive information from logs
capz-e2e Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capz-e2e Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating a VMSS cluster [REQUIRED] with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
capz-e2e Workload cluster creation Creating a cluster that uses the external cloud provider and external azurediskcsi driver [OPTIONAL] with a 1 control plane nodes and 2 worker nodes
capz-e2e Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e Workload cluster creation Creating a highly available cluster [REQUIRED] With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
capz-e2e Workload cluster creation Creating a ipv6 control-plane cluster [REQUIRED] With ipv6 worker node
capz-e2e Workload cluster creation Creating a private cluster [OPTIONAL] Creates a public management cluster in a custom vnet
capz-e2e Workload cluster creation Creating an AKS cluster [EXPERIMENTAL][Managed Kubernetes] with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node
capz-e2e [K8s-Upgrade] Running the workload cluster upgrade tests [CSI Migration] Running CSI migration test CSI=external CCM=external AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with intree cloud provider
capz-e2e [K8s-Upgrade] Running the workload cluster upgrade tests [CSI Migration] Running CSI migration test CSI=external CCM=internal AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with intree cloud provider
capz-e2e [K8s-Upgrade] Running the workload cluster upgrade tests [CSI Migration] Running CSI migration test CSI=internal CCM=internal AzureDiskCSIMigration=false: upgrade to v1.23 should create volumes dynamically with intree cloud provider
... skipping 119 lines ...
using CI_VERSION=v1.27.0-alpha.0.1206+a2785a496085e4
using KUBERNETES_VERSION=v1.27.0-alpha.0.1206+a2785a496085e4
using IMAGE_TAG=v1.27.0-alpha.0.1214_175b7765b8241c
Error response from daemon: manifest for capzci.azurecr.io/kube-apiserver:v1.27.0-alpha.0.1214_175b7765b8241c not found: manifest unknown: manifest tagged by "v1.27.0-alpha.0.1214_175b7765b8241c" is not found
Building Kubernetes
make: Entering directory '/home/prow/go/src/k8s.io/kubernetes'
+++ [0123 17:31:26] Verifying Prerequisites....
+++ [0123 17:31:27] Building Docker image kube-build:build-a3046abd25-5-v1.26.0-go1.19.5-bullseye.0
+++ [0123 17:34:47] Creating data container kube-build-data-a3046abd25-5-v1.26.0-go1.19.5-bullseye.0
+++ [0123 17:35:05] Syncing sources to container
... skipping 662 lines ...
JUnit report was created: /logs/artifacts/junit.e2e_suite.1.xml

Summarizing 1 Failure:

[Fail] Conformance Tests [Measurement] conformance-tests
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.9/framework/controlplane_helpers.go:175

Ran 1 of 25 Specs in 2251.769 seconds
FAIL! -- 0 Passed | 1 Failed | 0 Pending | 24 Skipped
--- FAIL: TestE2E (2251.80s)
FAIL

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes. A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021. Please give the RC a try and send us feedback!
- To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 13 lines ...
To silence deprecations that can be silenced set the following environment variable:
ACK_GINKGO_DEPRECATIONS=1.16.5

Ginkgo ran 1 suite in 40m35.002357882s
Test Suite Failed

make[2]: *** [Makefile:652: test-e2e-run] Error 1
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[1]: *** [Makefile:669: test-e2e-local] Error 2
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:679: test-conformance] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 6 lines ...