PR | smarterclayton: wait: Introduce new methods that allow detection of context cancellation
Result | FAILURE
Tests | 1 failed / 2 succeeded
Started |
Elapsed | 1h12m
Revision | eaecd4c50be480e422f9e45e71a031e9edb05e1c
Refs | 107826
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sConformance\sTests\sconformance\-tests$'
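The PR under test reworks the polling helpers in k8s.io/apimachinery/pkg/util/wait so that callers can tell a cancelled context apart from an ordinary condition failure. A minimal sketch of that usage, assuming apimachinery v0.27+ where wait.PollUntilContextTimeout is available (illustration only, not the PR's own code):

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Simulate a caller (for example a test framework) giving up early by
	// cancelling the surrounding context after two seconds.
	ctx, cancel := context.WithCancel(context.Background())
	go func() {
		time.Sleep(2 * time.Second)
		cancel()
	}()

	// The condition never succeeds, so the poll can only end because the
	// context is cancelled or the one-minute budget runs out.
	err := wait.PollUntilContextTimeout(ctx, 250*time.Millisecond, time.Minute, true,
		func(ctx context.Context) (bool, error) {
			return false, nil // keep polling
		})

	// The returned error should let the caller distinguish cancellation
	// from a poll that simply exhausted its budget.
	fmt.Println("cancelled:", errors.Is(err, context.Canceled), "err:", err)
}
```

The errors.Is check at the end is the point of the exercise: the caller can inspect the returned error instead of seeing only a generic timeout.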
[FAILED] Timed out after 900.000s. Deployment tigera-operator/tigera-operator failed Deployment: { "metadata": { "name": "tigera-operator", "namespace": "tigera-operator", "uid": "f617dff2-36f0-4716-b7b3-a3a1dafa86be", "resourceVersion": "394", "generation": 1, "creationTimestamp": "2023-01-17T01:30:42Z", "labels": { "app.kubernetes.io/managed-by": "Helm", "k8s-app": "tigera-operator" }, "annotations": { "meta.helm.sh/release-name": "projectcalico", "meta.helm.sh/release-namespace": "tigera-operator" }, "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "apps/v1", "time": "2023-01-17T01:30:42Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:annotations": { ".": {}, "f:meta.helm.sh/release-name": {}, "f:meta.helm.sh/release-namespace": {} }, "f:labels": { ".": {}, "f:app.kubernetes.io/managed-by": {}, "f:k8s-app": {} } }, "f:spec": { "f:progressDeadlineSeconds": {}, "f:replicas": {}, "f:revisionHistoryLimit": {}, "f:selector": {}, "f:strategy": { "f:rollingUpdate": { ".": {}, "f:maxSurge": {}, "f:maxUnavailable": {} }, "f:type": {} }, "f:template": { "f:metadata": { "f:labels": { ".": {}, "f:k8s-app": {}, "f:name": {} } }, "f:spec": { "f:containers": { "k:{\"name\":\"tigera-operator\"}": { ".": {}, "f:command": {}, "f:env": { ".": {}, "k:{\"name\":\"OPERATOR_NAME\"}": { ".": {}, "f:name": {}, "f:value": {} }, "k:{\"name\":\"POD_NAME\"}": { ".": {}, "f:name": {}, "f:valueFrom": { ".": {}, "f:fieldRef": {} } }, "k:{\"name\":\"TIGERA_OPERATOR_INIT_IMAGE_VERSION\"}": { ".": {}, "f:name": {}, "f:value": {} }, "k:{\"name\":\"WATCH_NAMESPACE\"}": { ".": {}, "f:name": {} } }, "f:envFrom": {}, "f:image": {}, "f:imagePullPolicy": {}, "f:name": {}, "f:resources": {}, "f:terminationMessagePath": {}, "f:terminationMessagePolicy": {}, "f:volumeMounts": { ".": {}, "k:{\"mountPath\":\"/var/lib/calico\"}": { ".": {}, "f:mountPath": {}, "f:name": {}, "f:readOnly": {} } } } }, "f:dnsPolicy": {}, "f:hostNetwork": {}, "f:nodeSelector": {}, "f:restartPolicy": {}, "f:schedulerName": {}, "f:securityContext": {}, "f:serviceAccount": {}, "f:serviceAccountName": {}, "f:terminationGracePeriodSeconds": {}, "f:tolerations": {}, "f:volumes": { ".": {}, "k:{\"name\":\"var-lib-calico\"}": { ".": {}, "f:hostPath": { ".": {}, "f:path": {}, "f:type": {} }, "f:name": {} } } } } } } } ] }, "spec": { "replicas": 1, "selector": { "matchLabels": { "name": "tigera-operator" } }, "template": { "metadata": { "creationTimestamp": null, "labels": { "k8s-app": "tigera-operator", "name": "tigera-operator" } }, "spec": { "volumes": [ { "name": "var-lib-calico", "hostPath": { "path": "/var/lib/calico", "type": "" } } ], "containers": [ { "name": "tigera-operator", "image": "quay.io/tigera/operator:v1.29.0", "command": [ "operator" ], "envFrom": [ { "configMapRef": { "name": "kubernetes-services-endpoint", "optional": true } } ], "env": [ { "name": "WATCH_NAMESPACE" }, { "name": "POD_NAME", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "metadata.name" } } }, { "name": "OPERATOR_NAME", "value": "tigera-operator" }, { "name": "TIGERA_OPERATOR_INIT_IMAGE_VERSION", "value": "v1.29.0" } ], "resources": {}, "volumeMounts": [ { "name": "var-lib-calico", "readOnly": true, "mountPath": "/var/lib/calico" } ], "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "imagePullPolicy": "IfNotPresent" } ], "restartPolicy": "Always", "terminationGracePeriodSeconds": 30, "dnsPolicy": "ClusterFirstWithHostNet", "nodeSelector": { "kubernetes.io/os": "linux" }, 
"serviceAccountName": "tigera-operator", "serviceAccount": "tigera-operator", "hostNetwork": true, "securityContext": {}, "schedulerName": "default-scheduler", "tolerations": [ { "operator": "Exists", "effect": "NoExecute" }, { "operator": "Exists", "effect": "NoSchedule" } ] } }, "strategy": { "type": "RollingUpdate", "rollingUpdate": { "maxUnavailable": "25%", "maxSurge": "25%" } }, "revisionHistoryLimit": 10, "progressDeadlineSeconds": 600 }, "status": {} } LAST SEEN TYPE REASON OBJECT MESSAGE Expected <bool>: false to be true In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:122 @ 01/17/23 01:45:45.866from junit.e2e_suite.1.xml
> Enter [BeforeEach] Conformance Tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:56 @ 01/17/23 01:24:21.124 INFO: Cluster name is capz-conf-ob30hj STEP: Creating namespace "capz-conf-ob30hj" for hosting the cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/17/23 01:24:21.124 Jan 17 01:24:21.124: INFO: starting to create namespace for hosting the "capz-conf-ob30hj" test spec INFO: Creating namespace capz-conf-ob30hj INFO: Creating event watcher for namespace "capz-conf-ob30hj" < Exit [BeforeEach] Conformance Tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:56 @ 01/17/23 01:24:21.172 (48ms) > Enter [It] conformance-tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:100 @ 01/17/23 01:24:21.172 conformance-tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:102 @ 01/17/23 01:24:21.172 conformance-tests Name | N | Min | Median | Mean | StdDev | Max INFO: Creating the workload cluster with name "capz-conf-ob30hj" using the "conformance-presubmit-artifacts" template (Kubernetes v1.27.0-alpha.0.1003+7b01daba714514, 1 control-plane machines, 2 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster capz-conf-ob30hj --infrastructure (default) --kubernetes-version v1.27.0-alpha.0.1003+7b01daba714514 --control-plane-machine-count 1 --worker-machine-count 2 --flavor conformance-presubmit-artifacts INFO: Applying the cluster template yaml to the cluster INFO: Waiting for the cluster infrastructure to be provisioned STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_helpers.go:134 @ 01/17/23 01:24:24.322 INFO: Waiting for control plane to be initialized STEP: Installing Calico CNI via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:51 @ 01/17/23 01:26:34.433 STEP: Configuring calico CNI helm chart for IPv4 configuration - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:131 @ 01/17/23 01:26:34.434 Jan 17 01:30:30.625: INFO: getting history for release projectcalico Jan 17 01:30:30.737: INFO: Release projectcalico does not exist, installing it Jan 17 01:30:32.063: INFO: creating 1 resource(s) Jan 17 01:30:32.206: INFO: creating 1 resource(s) Jan 17 01:30:32.345: INFO: creating 1 resource(s) Jan 17 01:30:32.468: INFO: creating 1 resource(s) Jan 17 01:30:32.609: INFO: creating 1 resource(s) Jan 17 01:30:32.748: INFO: creating 1 resource(s) Jan 17 01:30:33.063: INFO: creating 1 resource(s) Jan 17 01:30:33.250: INFO: creating 1 resource(s) Jan 17 01:30:33.375: INFO: creating 1 resource(s) Jan 17 01:30:33.511: INFO: creating 1 resource(s) Jan 17 01:30:33.643: INFO: creating 1 resource(s) Jan 17 01:30:33.769: INFO: creating 1 resource(s) Jan 17 01:30:33.896: INFO: creating 1 resource(s) Jan 17 01:30:34.028: INFO: creating 1 resource(s) Jan 17 01:30:34.161: INFO: creating 1 resource(s) Jan 17 01:30:34.308: INFO: creating 1 resource(s) Jan 17 01:30:34.490: INFO: creating 1 resource(s) Jan 17 01:30:34.628: INFO: creating 1 resource(s) Jan 17 01:30:34.821: INFO: creating 1 resource(s) Jan 17 01:30:35.060: INFO: creating 1 resource(s) Jan 17 01:30:35.673: INFO: creating 1 resource(s) Jan 17 01:30:35.793: INFO: Clearing discovery cache Jan 17 01:30:35.793: INFO: beginning wait for 21 resources with timeout of 1m0s Jan 17 
01:30:41.333: INFO: creating 1 resource(s) Jan 17 01:30:42.393: INFO: creating 6 resource(s) Jan 17 01:30:43.778: INFO: Install complete STEP: Waiting for Ready tigera-operator deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:60 @ 01/17/23 01:30:44.93 STEP: waiting for deployment tigera-operator/tigera-operator to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/17/23 01:30:45.749 Jan 17 01:30:45.750: INFO: starting to wait for deployment to become available [FAILED] Timed out after 900.000s. Deployment tigera-operator/tigera-operator failed Deployment: { "metadata": { "name": "tigera-operator", "namespace": "tigera-operator", "uid": "f617dff2-36f0-4716-b7b3-a3a1dafa86be", "resourceVersion": "394", "generation": 1, "creationTimestamp": "2023-01-17T01:30:42Z", "labels": { "app.kubernetes.io/managed-by": "Helm", "k8s-app": "tigera-operator" }, "annotations": { "meta.helm.sh/release-name": "projectcalico", "meta.helm.sh/release-namespace": "tigera-operator" }, "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "apps/v1", "time": "2023-01-17T01:30:42Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:annotations": { ".": {}, "f:meta.helm.sh/release-name": {}, "f:meta.helm.sh/release-namespace": {} }, "f:labels": { ".": {}, "f:app.kubernetes.io/managed-by": {}, "f:k8s-app": {} } }, "f:spec": { "f:progressDeadlineSeconds": {}, "f:replicas": {}, "f:revisionHistoryLimit": {}, "f:selector": {}, "f:strategy": { "f:rollingUpdate": { ".": {}, "f:maxSurge": {}, "f:maxUnavailable": {} }, "f:type": {} }, "f:template": { "f:metadata": { "f:labels": { ".": {}, "f:k8s-app": {}, "f:name": {} } }, "f:spec": { "f:containers": { "k:{\"name\":\"tigera-operator\"}": { ".": {}, "f:command": {}, "f:env": { ".": {}, "k:{\"name\":\"OPERATOR_NAME\"}": { ".": {}, "f:name": {}, "f:value": {} }, "k:{\"name\":\"POD_NAME\"}": { ".": {}, "f:name": {}, "f:valueFrom": { ".": {}, "f:fieldRef": {} } }, "k:{\"name\":\"TIGERA_OPERATOR_INIT_IMAGE_VERSION\"}": { ".": {}, "f:name": {}, "f:value": {} }, "k:{\"name\":\"WATCH_NAMESPACE\"}": { ".": {}, "f:name": {} } }, "f:envFrom": {}, "f:image": {}, "f:imagePullPolicy": {}, "f:name": {}, "f:resources": {}, "f:terminationMessagePath": {}, "f:terminationMessagePolicy": {}, "f:volumeMounts": { ".": {}, "k:{\"mountPath\":\"/var/lib/calico\"}": { ".": {}, "f:mountPath": {}, "f:name": {}, "f:readOnly": {} } } } }, "f:dnsPolicy": {}, "f:hostNetwork": {}, "f:nodeSelector": {}, "f:restartPolicy": {}, "f:schedulerName": {}, "f:securityContext": {}, "f:serviceAccount": {}, "f:serviceAccountName": {}, "f:terminationGracePeriodSeconds": {}, "f:tolerations": {}, "f:volumes": { ".": {}, "k:{\"name\":\"var-lib-calico\"}": { ".": {}, "f:hostPath": { ".": {}, "f:path": {}, "f:type": {} }, "f:name": {} } } } } } } } ] }, "spec": { "replicas": 1, "selector": { "matchLabels": { "name": "tigera-operator" } }, "template": { "metadata": { "creationTimestamp": null, "labels": { "k8s-app": "tigera-operator", "name": "tigera-operator" } }, "spec": { "volumes": [ { "name": "var-lib-calico", "hostPath": { "path": "/var/lib/calico", "type": "" } } ], "containers": [ { "name": "tigera-operator", "image": "quay.io/tigera/operator:v1.29.0", "command": [ "operator" ], "envFrom": [ { "configMapRef": { "name": "kubernetes-services-endpoint", "optional": true } } ], "env": [ { "name": "WATCH_NAMESPACE" }, { "name": "POD_NAME", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": 
"metadata.name" } } }, { "name": "OPERATOR_NAME", "value": "tigera-operator" }, { "name": "TIGERA_OPERATOR_INIT_IMAGE_VERSION", "value": "v1.29.0" } ], "resources": {}, "volumeMounts": [ { "name": "var-lib-calico", "readOnly": true, "mountPath": "/var/lib/calico" } ], "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "imagePullPolicy": "IfNotPresent" } ], "restartPolicy": "Always", "terminationGracePeriodSeconds": 30, "dnsPolicy": "ClusterFirstWithHostNet", "nodeSelector": { "kubernetes.io/os": "linux" }, "serviceAccountName": "tigera-operator", "serviceAccount": "tigera-operator", "hostNetwork": true, "securityContext": {}, "schedulerName": "default-scheduler", "tolerations": [ { "operator": "Exists", "effect": "NoExecute" }, { "operator": "Exists", "effect": "NoSchedule" } ] } }, "strategy": { "type": "RollingUpdate", "rollingUpdate": { "maxUnavailable": "25%", "maxSurge": "25%" } }, "revisionHistoryLimit": 10, "progressDeadlineSeconds": 600 }, "status": {} } LAST SEEN TYPE REASON OBJECT MESSAGE Expected <bool>: false to be true In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:122 @ 01/17/23 01:45:45.866 < Exit [It] conformance-tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:100 @ 01/17/23 01:45:45.866 (21m24.694s) > Enter [AfterEach] Conformance Tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:242 @ 01/17/23 01:45:45.866 Jan 17 01:45:45.866: INFO: FAILED! Jan 17 01:45:45.866: INFO: Cleaning up after "Conformance Tests conformance-tests" spec STEP: Dumping logs from the "capz-conf-ob30hj" workload cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/17/23 01:45:45.867 Jan 17 01:45:45.867: INFO: Dumping workload cluster capz-conf-ob30hj/capz-conf-ob30hj logs Jan 17 01:45:45.986: INFO: Collecting logs for Linux node capz-conf-ob30hj-control-plane-vq2z2 in cluster capz-conf-ob30hj in namespace capz-conf-ob30hj Jan 17 01:45:55.558: INFO: Collecting boot logs for AzureMachine capz-conf-ob30hj-control-plane-vq2z2 Jan 17 01:45:57.407: INFO: Collecting logs for Linux node capz-conf-ob30hj-md-0-mz2bb in cluster capz-conf-ob30hj in namespace capz-conf-ob30hj Jan 17 01:46:04.883: INFO: Collecting boot logs for AzureMachine capz-conf-ob30hj-md-0-mz2bb Jan 17 01:46:05.732: INFO: Collecting logs for Linux node capz-conf-ob30hj-md-0-q4vdp in cluster capz-conf-ob30hj in namespace capz-conf-ob30hj Jan 17 01:46:13.952: INFO: Collecting boot logs for AzureMachine capz-conf-ob30hj-md-0-q4vdp Jan 17 01:46:14.657: INFO: Dumping workload cluster capz-conf-ob30hj/capz-conf-ob30hj kube-system pod logs Jan 17 01:46:15.812: INFO: Fetching kube-system pod logs took 1.155545604s Jan 17 01:46:15.813: INFO: Dumping workload cluster capz-conf-ob30hj/capz-conf-ob30hj Azure activity log Jan 17 01:46:15.812: INFO: Creating log watcher for controller kube-system/etcd-capz-conf-ob30hj-control-plane-vq2z2, container etcd Jan 17 01:46:15.812: INFO: Collecting events for Pod kube-system/kube-apiserver-capz-conf-ob30hj-control-plane-vq2z2 Jan 17 01:46:15.812: INFO: Collecting events for Pod kube-system/etcd-capz-conf-ob30hj-control-plane-vq2z2 Jan 17 01:46:15.812: INFO: Creating log watcher for controller kube-system/kube-apiserver-capz-conf-ob30hj-control-plane-vq2z2, container kube-apiserver Jan 17 01:46:15.812: INFO: Collecting events for Pod kube-system/kube-controller-manager-capz-conf-ob30hj-control-plane-vq2z2 Jan 17 
01:46:15.812: INFO: Creating log watcher for controller kube-system/kube-controller-manager-capz-conf-ob30hj-control-plane-vq2z2, container kube-controller-manager Jan 17 01:46:15.812: INFO: Creating log watcher for controller kube-system/kube-scheduler-capz-conf-ob30hj-control-plane-vq2z2, container kube-scheduler Jan 17 01:46:15.813: INFO: Collecting events for Pod kube-system/kube-scheduler-capz-conf-ob30hj-control-plane-vq2z2 Jan 17 01:46:15.962: INFO: Error starting logs stream for pod kube-system/kube-controller-manager-capz-conf-ob30hj-control-plane-vq2z2, container kube-controller-manager: container "kube-controller-manager" in pod "kube-controller-manager-capz-conf-ob30hj-control-plane-vq2z2" is not available Jan 17 01:46:20.632: INFO: Fetching activity logs took 4.81920581s Jan 17 01:46:20.632: INFO: Dumping all the Cluster API resources in the "capz-conf-ob30hj" namespace Jan 17 01:46:21.430: INFO: Deleting all clusters in the capz-conf-ob30hj namespace STEP: Deleting cluster capz-conf-ob30hj - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/17/23 01:46:21.489 INFO: Waiting for the Cluster capz-conf-ob30hj/capz-conf-ob30hj to be deleted STEP: Waiting for cluster capz-conf-ob30hj to be deleted - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/17/23 01:46:21.528 Jan 17 01:54:11.808: INFO: Deleting namespace used for hosting the "conformance-tests" test spec INFO: Deleting namespace capz-conf-ob30hj Jan 17 01:54:11.832: INFO: Checking if any resources are left over in Azure for spec "conformance-tests" STEP: Redacting sensitive information from logs - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:212 @ 01/17/23 01:54:12.469 < Exit [AfterEach] Conformance Tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:242 @ 01/17/23 01:54:17.486 (8m31.62s)
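The "Creating log watcher for controller ..." steps in the cleanup phase above are typically backed by client-go's pod log streaming. A minimal sketch under that assumption (pod and container names taken from the dump; the error path corresponds to the "container ... is not available" message seen when a stream cannot be opened):

```go
package main

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// streamContainerLogs opens a follow-mode log stream for one container and
// copies it to the given writer until the stream ends or ctx is cancelled.
func streamContainerLogs(ctx context.Context, c kubernetes.Interface, ns, pod, container string, out io.Writer) error {
	req := c.CoreV1().Pods(ns).GetLogs(pod, &corev1.PodLogOptions{
		Container: container,
		Follow:    true, // behave like a watcher rather than a one-shot dump
	})
	stream, err := req.Stream(ctx)
	if err != nil {
		return err // e.g. the container is not (yet) available
	}
	defer stream.Close()
	_, err = io.Copy(out, stream)
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	c := kubernetes.NewForConfigOrDie(cfg)
	_ = streamContainerLogs(context.Background(), c, "kube-system",
		"kube-controller-manager-capz-conf-ob30hj-control-plane-vq2z2",
		"kube-controller-manager", os.Stdout)
}
```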
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [It] Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capz-e2e [It] Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e [It] Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e [It] Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e [It] Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e [It] Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node
capz-e2e [It] Workload cluster creation Creating a VMSS cluster [REQUIRED] with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a cluster that uses the external cloud provider and external azurediskcsi driver [OPTIONAL] with a 1 control plane nodes and 2 worker nodes
capz-e2e [It] Workload cluster creation Creating a cluster that uses the external cloud provider and machinepools [OPTIONAL] with 1 control plane node and 1 machinepool
capz-e2e [It] Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e [It] Workload cluster creation Creating a highly available cluster [REQUIRED] With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a ipv6 control-plane cluster [REQUIRED] With ipv6 worker node
capz-e2e [It] Workload cluster creation Creating a private cluster [OPTIONAL] Creates a public management cluster in a custom vnet
capz-e2e [It] Workload cluster creation Creating an AKS cluster [EXPERIMENTAL][Managed Kubernetes] with a single control plane node and 1 node
capz-e2e [It] Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node
capz-e2e [It] [K8s-Upgrade] Running the CSI migration tests [CSI Migration] Running CSI migration test CSI=external CCM=external AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with out-of-tree cloud provider
capz-e2e [It] [K8s-Upgrade] Running the CSI migration tests [CSI Migration] Running CSI migration test CSI=external CCM=internal AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with intree cloud provider
capz-e2e [It] [K8s-Upgrade] Running the CSI migration tests [CSI Migration] Running CSI migration test CSI=internal CCM=internal AzureDiskCSIMigration=false: upgrade to v1.23 should create volumes dynamically with intree cloud provider
... skipping 121 lines ...
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 138 100 138 0 0 5307 0 --:--:-- --:--:-- --:--:-- 5307
100 35 100 35 0 0 448 0 --:--:-- --:--:-- --:--:-- 448
using CI_VERSION=v1.27.0-alpha.0.1003+7b01daba714514
using KUBERNETES_VERSION=v1.27.0-alpha.0.1003+7b01daba714514
using IMAGE_TAG=v1.27.0-alpha.0.1012_f25f7ce90b6bbc
Error response from daemon: manifest for capzci.azurecr.io/kube-apiserver:v1.27.0-alpha.0.1012_f25f7ce90b6bbc not found: manifest unknown: manifest tagged by "v1.27.0-alpha.0.1012_f25f7ce90b6bbc" is not found
Building Kubernetes
make: Entering directory '/home/prow/go/src/k8s.io/kubernetes'
+++ [0117 00:43:15] Verifying Prerequisites....
+++ [0117 00:43:15] Building Docker image kube-build:build-0901a3b61c-5-v1.26.0-go1.19.5-bullseye.0
+++ [0117 00:46:50] Creating data container kube-build-data-0901a3b61c-5-v1.26.0-go1.19.5-bullseye.0
+++ [0117 00:47:19] Syncing sources to container
... skipping 665 lines ...
------------------------------
Conformance Tests conformance-tests
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:100
INFO: Cluster name is capz-conf-ob30hj
STEP: Creating namespace "capz-conf-ob30hj" for hosting the cluster @ 01/17/23 01:24:21.124
Jan 17 01:24:21.124: INFO: starting to create namespace for hosting the "capz-conf-ob30hj" test spec
2023/01/17 01:24:21 failed trying to get namespace (capz-conf-ob30hj):namespaces "capz-conf-ob30hj" not found
INFO: Creating namespace capz-conf-ob30hj
INFO: Creating event watcher for namespace "capz-conf-ob30hj"
conformance-tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:102 @ 01/17/23 01:24:21.172
conformance-tests Name | N | Min | Median | Mean | StdDev | Max
INFO: Creating the workload cluster with name "capz-conf-ob30hj" using the "conformance-presubmit-artifacts" template (Kubernetes v1.27.0-alpha.0.1003+7b01daba714514, 1 control-plane machines, 2 worker machines)
... skipping 54 lines ...
Jan 17 01:30:41.333: INFO: creating 1 resource(s)
Jan 17 01:30:42.393: INFO: creating 6 resource(s)
Jan 17 01:30:43.778: INFO: Install complete
STEP: Waiting for Ready tigera-operator deployment pods @ 01/17/23 01:30:44.93
STEP: waiting for deployment tigera-operator/tigera-operator to be available @ 01/17/23 01:30:45.749
Jan 17 01:30:45.750: INFO: starting to wait for deployment to become available
[FAILED] in [It] - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:122 @ 01/17/23 01:45:45.866
Jan 17 01:45:45.866: INFO: FAILED!
Jan 17 01:45:45.866: INFO: Cleaning up after "Conformance Tests conformance-tests" spec
STEP: Dumping logs from the "capz-conf-ob30hj" workload cluster @ 01/17/23 01:45:45.867
Jan 17 01:45:45.867: INFO: Dumping workload cluster capz-conf-ob30hj/capz-conf-ob30hj logs
Jan 17 01:45:45.986: INFO: Collecting logs for Linux node capz-conf-ob30hj-control-plane-vq2z2 in cluster capz-conf-ob30hj in namespace capz-conf-ob30hj
Jan 17 01:45:55.558: INFO: Collecting boot logs for AzureMachine capz-conf-ob30hj-control-plane-vq2z2
... skipping 14 lines ...
Jan 17 01:46:15.812: INFO: Collecting events for Pod kube-system/etcd-capz-conf-ob30hj-control-plane-vq2z2
Jan 17 01:46:15.812: INFO: Creating log watcher for controller kube-system/kube-apiserver-capz-conf-ob30hj-control-plane-vq2z2, container kube-apiserver
Jan 17 01:46:15.812: INFO: Collecting events for Pod kube-system/kube-controller-manager-capz-conf-ob30hj-control-plane-vq2z2
Jan 17 01:46:15.812: INFO: Creating log watcher for controller kube-system/kube-controller-manager-capz-conf-ob30hj-control-plane-vq2z2, container kube-controller-manager
Jan 17 01:46:15.812: INFO: Creating log watcher for controller kube-system/kube-scheduler-capz-conf-ob30hj-control-plane-vq2z2, container kube-scheduler
Jan 17 01:46:15.813: INFO: Collecting events for Pod kube-system/kube-scheduler-capz-conf-ob30hj-control-plane-vq2z2
Jan 17 01:46:15.962: INFO: Error starting logs stream for pod kube-system/kube-controller-manager-capz-conf-ob30hj-control-plane-vq2z2, container kube-controller-manager: container "kube-controller-manager" in pod "kube-controller-manager-capz-conf-ob30hj-control-plane-vq2z2" is not available
Jan 17 01:46:20.632: INFO: Fetching activity logs took 4.81920581s
Jan 17 01:46:20.632: INFO: Dumping all the Cluster API resources in the "capz-conf-ob30hj" namespace
Jan 17 01:46:21.430: INFO: Deleting all clusters in the capz-conf-ob30hj namespace
STEP: Deleting cluster capz-conf-ob30hj @ 01/17/23 01:46:21.489
INFO: Waiting for the Cluster capz-conf-ob30hj/capz-conf-ob30hj to be deleted
STEP: Waiting for cluster capz-conf-ob30hj to be deleted @ 01/17/23 01:46:21.528
Jan 17 01:54:11.808: INFO: Deleting namespace used for hosting the "conformance-tests" test spec
INFO: Deleting namespace capz-conf-ob30hj
Jan 17 01:54:11.832: INFO: Checking if any resources are left over in Azure for spec "conformance-tests"
STEP: Redacting sensitive information from logs @ 01/17/23 01:54:12.469
• [FAILED] [1796.362 seconds]
Conformance Tests [It] conformance-tests
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:100
[FAILED] Timed out after 900.000s.
Deployment tigera-operator/tigera-operator failed
Deployment:
{ "metadata": { "name": "tigera-operator", "namespace": "tigera-operator", "uid": "f617dff2-36f0-4716-b7b3-a3a1dafa86be",
... skipping 265 lines ...
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
[ReportAfterSuite] PASSED [0.003 seconds]
------------------------------
Summarizing 1 Failure:
[FAIL] Conformance Tests [It] conformance-tests
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:122
Ran 1 of 26 Specs in 1993.337 seconds
FAIL! -- 0 Passed | 1 Failed | 0 Pending | 25 Skipped
--- FAIL: TestE2E (1993.34s)
FAIL
You're using deprecated Ginkgo functionality:
=============================================
CurrentGinkgoTestDescription() is deprecated in Ginkgo V2. Use CurrentSpecReport() instead.
Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-currentginkgotestdescription
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:278
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:281
To silence deprecations that can be silenced set the following environment variable:
ACK_GINKGO_DEPRECATIONS=2.6.0
Ginkgo ran 1 suite in 35m36.959458339s
Test Suite Failed
make[2]: *** [Makefile:655: test-e2e-run] Error 1
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[1]: *** [Makefile:670: test-e2e-skip-push] Error 2
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:686: test-conformance] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 6 lines ...
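The deprecation warning in the output above comes from the e2e suite still calling Ginkgo v1's CurrentGinkgoTestDescription(). A short sketch of the v2 replacement the warning suggests; method names follow the Ginkgo v2 migration guide and should be verified against the pinned ginkgo version:

```go
package e2e_test

import (
	"fmt"

	. "github.com/onsi/ginkgo/v2"
)

// In Ginkgo v2 the per-spec metadata is exposed through CurrentSpecReport()
// instead of CurrentGinkgoTestDescription().
var _ = AfterEach(func() {
	report := CurrentSpecReport()
	if report.Failed() {
		// Roughly equivalent to the v1 fields FullTestText and Duration.
		fmt.Printf("spec %q failed after %s\n", report.FullText(), report.RunTime)
	}
})
```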