Result   | FAILURE
Tests    | 1 failed / 2 succeeded
Started  |
Elapsed  | 1h21m
Revision | release-1.7
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sConformance\sTests\sconformance\-tests$'
[FAILED] Timed out after 1500.000s.
Timed out waiting for 2 nodes to be created for MachineDeployment capz-conf-ttqvdh/capz-conf-ttqvdh-md-win
Expected
    <int>: 0
to equal
    <int>: 2
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/machinedeployment_helpers.go:131 @ 01/31/23 16:31:40.085
There were additional failures detected after the initial failure. These are visible in the timeline from junit.e2e_suite.1.xml.
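For orientation, this timeout comes from the CAPI test framework polling until the MachineDeployment's Machines have registered as nodes. The Go sketch below approximates that wait; the helper name `waitForMachineDeploymentNodes`, the intervals, and the exact counting logic are illustrative assumptions, not the actual code at machinedeployment_helpers.go:131.

```go
package e2e

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// waitForMachineDeploymentNodes polls until `want` Machines owned by the
// MachineDeployment have a NodeRef, i.e. their VMs have joined as nodes.
// A 25-minute timeout matches the 1500s seen in the failure above.
func waitForMachineDeploymentNodes(ctx context.Context, c client.Client, md *clusterv1.MachineDeployment, want int) {
	Eventually(func() (int, error) {
		machines := &clusterv1.MachineList{}
		// Machines created by a MachineDeployment carry its name in the
		// "cluster.x-k8s.io/deployment-name" label.
		if err := c.List(ctx, machines,
			client.InNamespace(md.Namespace),
			client.MatchingLabels{"cluster.x-k8s.io/deployment-name": md.Name},
		); err != nil {
			return 0, err
		}
		withNode := 0
		for i := range machines.Items {
			// A Machine only counts once its backing VM registered as a node.
			if machines.Items[i].Status.NodeRef != nil {
				withNode++
			}
		}
		return withNode, nil
	}, 25*time.Minute, 10*time.Second).Should(Equal(want),
		"timed out waiting for %d nodes for MachineDeployment %s/%s", want, md.Namespace, md.Name)
}
```

In this run the count never left 0 for capz-conf-ttqvdh-md-win, which is consistent with the Windows nodes never joining the cluster (note the 0 running calico-node-windows and kube-proxy-windows daemonset pods in the timeline below).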
> Enter [BeforeEach] Conformance Tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:56 @ 01/31/23 15:59:53.39
INFO: Cluster name is capz-conf-ttqvdh
STEP: Creating namespace "capz-conf-ttqvdh" for hosting the cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/31/23 15:59:53.39
Jan 31 15:59:53.390: INFO: starting to create namespace for hosting the "capz-conf-ttqvdh" test spec
INFO: Creating namespace capz-conf-ttqvdh
INFO: Creating event watcher for namespace "capz-conf-ttqvdh"
< Exit [BeforeEach] Conformance Tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:56 @ 01/31/23 15:59:53.429 (39ms)
> Enter [It] conformance-tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:100 @ 01/31/23 15:59:53.429
conformance-tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:102 @ 01/31/23 15:59:53.429
conformance-tests
Name | N | Min | Median | Mean | StdDev | Max
INFO: Creating the workload cluster with name "capz-conf-ttqvdh" using the "conformance-ci-artifacts-windows-containerd" template (Kubernetes v1.25.7-rc.0.14+f6fb2e8bdbec52, 1 control-plane machines, 0 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster capz-conf-ttqvdh --infrastructure (default) --kubernetes-version v1.25.7-rc.0.14+f6fb2e8bdbec52 --control-plane-machine-count 1 --worker-machine-count 0 --flavor conformance-ci-artifacts-windows-containerd
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_helpers.go:134 @ 01/31/23 15:59:56.502
INFO: Waiting for control plane to be initialized
STEP: Installing Calico CNI via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:51 @ 01/31/23 16:01:46.581
STEP: Configuring calico CNI helm chart for IPv4 configuration - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:131 @ 01/31/23 16:01:46.581
Jan 31 16:04:29.797: INFO: getting history for release projectcalico
Jan 31 16:04:29.831: INFO: Release projectcalico does not exist, installing it
Jan 31 16:04:30.772: INFO: creating 1 resource(s)
Jan 31 16:04:30.822: INFO: creating 1 resource(s)
Jan 31 16:04:30.869: INFO: creating 1 resource(s)
Jan 31 16:04:30.914: INFO: creating 1 resource(s)
Jan 31 16:04:30.970: INFO: creating 1 resource(s)
Jan 31 16:04:31.022: INFO: creating 1 resource(s)
Jan 31 16:04:31.150: INFO: creating 1 resource(s)
Jan 31 16:04:31.221: INFO: creating 1 resource(s)
Jan 31 16:04:31.267: INFO: creating 1 resource(s)
Jan 31 16:04:31.316: INFO: creating 1 resource(s)
Jan 31 16:04:31.367: INFO: creating 1 resource(s)
Jan 31 16:04:31.413: INFO: creating 1 resource(s)
Jan 31 16:04:31.460: INFO: creating 1 resource(s)
Jan 31 16:04:31.509: INFO: creating 1 resource(s)
Jan 31 16:04:31.552: INFO: creating 1 resource(s)
Jan 31 16:04:31.602: INFO: creating 1 resource(s)
Jan 31 16:04:31.663: INFO: creating 1 resource(s)
Jan 31 16:04:31.724: INFO: creating 1 resource(s)
Jan 31 16:04:31.798: INFO: creating 1 resource(s)
Jan 31 16:04:31.911: INFO: creating 1 resource(s)
Jan 31 16:04:32.281: INFO: creating 1 resource(s)
Jan 31 16:04:32.324: INFO: Clearing discovery cache
Jan 31 16:04:32.324: INFO: beginning wait for 21 resources with timeout of 1m0s
Jan 31 16:04:35.674: INFO: creating 1 resource(s)
Jan 31 16:04:36.063: INFO: creating 6 resource(s)
Jan 31 16:04:36.595: INFO: Install complete
STEP: Waiting for Ready tigera-operator deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:60 @ 01/31/23 16:04:36.968
STEP: waiting for deployment tigera-operator/tigera-operator to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/31/23 16:04:37.206
Jan 31 16:04:37.206: INFO: starting to wait for deployment to become available
Jan 31 16:04:47.292: INFO: Deployment tigera-operator/tigera-operator is now available, took 10.085625368s
STEP: Waiting for Ready calico-system deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:74 @ 01/31/23 16:04:47.898
STEP: waiting for deployment calico-system/calico-kube-controllers to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/31/23 16:04:48.198
Jan 31 16:04:48.198: INFO: starting to wait for deployment to become available
Jan 31 16:05:51.302: INFO: Deployment calico-system/calico-kube-controllers is now available, took 1m3.104029022s
STEP: waiting for deployment calico-system/calico-typha to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/31/23 16:05:51.593
Jan 31 16:05:51.593: INFO: starting to wait for deployment to become available
Jan 31 16:05:51.626: INFO: Deployment calico-system/calico-typha is now available, took 33.335099ms
STEP: Waiting for Ready calico-apiserver deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:79 @ 01/31/23 16:05:51.627
STEP: waiting for deployment calico-apiserver/calico-apiserver to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/31/23 16:05:51.863
Jan 31 16:05:51.863: INFO: starting to wait for deployment to become available
Jan 31 16:06:01.928: INFO: Deployment calico-apiserver/calico-apiserver is now available, took 10.065039466s
STEP: Waiting for Ready calico-node daemonset pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:84 @ 01/31/23 16:06:01.928
STEP: waiting for daemonset calico-system/calico-node to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/31/23 16:06:02.333
Jan 31 16:06:02.333: INFO: waiting for daemonset calico-system/calico-node to be complete
Jan 31 16:06:02.365: INFO: 1 daemonset calico-system/calico-node pods are running, took 32.839976ms
STEP: Waiting for Ready calico windows pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:91 @ 01/31/23 16:06:02.366
STEP: waiting for daemonset calico-system/calico-node-windows to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/31/23 16:06:02.603
Jan 31 16:06:02.603: INFO: waiting for daemonset calico-system/calico-node-windows to be complete
Jan 31 16:06:02.635: INFO: 0 daemonset calico-system/calico-node-windows pods are running, took 31.755556ms
STEP: Waiting for Ready calico windows pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:97 @ 01/31/23 16:06:02.635
STEP: waiting for daemonset kube-system/kube-proxy-windows to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/31/23 16:06:02.866
Jan 31 16:06:02.867: INFO: waiting for daemonset kube-system/kube-proxy-windows to be complete
Jan 31 16:06:02.899: INFO: 0 daemonset kube-system/kube-proxy-windows pods are running, took 32.38134ms
INFO: Waiting for the first control plane machine managed by capz-conf-ttqvdh/capz-conf-ttqvdh-control-plane to be provisioned
STEP: Waiting for one control plane node to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 @ 01/31/23 16:06:02.919
STEP: Installing azure-disk CSI driver components via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:71 @ 01/31/23 16:06:02.925
Jan 31 16:06:02.977: INFO: getting history for release azuredisk-csi-driver-oot
Jan 31 16:06:03.052: INFO: Release azuredisk-csi-driver-oot does not exist, installing it
Jan 31 16:06:05.934: INFO: creating 1 resource(s)
Jan 31 16:06:06.050: INFO: creating 18 resource(s)
Jan 31 16:06:06.386: INFO: Install complete
STEP: Waiting for Ready csi-azuredisk-controller deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:81 @ 01/31/23 16:06:06.406
STEP: waiting for deployment kube-system/csi-azuredisk-controller to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/31/23 16:06:06.66
Jan 31 16:06:06.660: INFO: starting to wait for deployment to become available
Jan 31 16:06:37.241: INFO: Deployment kube-system/csi-azuredisk-controller is now available, took 30.580476004s
STEP: Waiting for Running azure-disk-csi node pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:86 @ 01/31/23 16:06:37.241
STEP: waiting for daemonset kube-system/csi-azuredisk-node to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/31/23 16:06:38.554
Jan 31 16:06:38.554: INFO: waiting for daemonset kube-system/csi-azuredisk-node to be complete
Jan 31 16:06:39.818: INFO: 1 daemonset kube-system/csi-azuredisk-node pods are running, took 1.263555207s
STEP: waiting for daemonset kube-system/csi-azuredisk-node-win to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/31/23 16:06:39.982
Jan 31 16:06:39.982: INFO: waiting for daemonset kube-system/csi-azuredisk-node-win to be complete
Jan 31 16:06:40.014: INFO: 0 daemonset kube-system/csi-azuredisk-node-win pods are running, took 31.73533ms
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane capz-conf-ttqvdh/capz-conf-ttqvdh-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:165 @ 01/31/23 16:06:40.029
STEP: Checking all the control plane machines are in the expected failure domains - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:196 @ 01/31/23 16:06:40.037
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/machinedeployment_helpers.go:102 @ 01/31/23 16:06:40.063
STEP: Checking all the machines controlled by capz-conf-ttqvdh-md-0 are in the "<None>" failure domain - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/31/23 16:06:40.074
STEP: Waiting for the workload nodes to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/machinedeployment_helpers.go:102 @ 01/31/23 16:06:40.084
[FAILED] Timed out after 1500.000s.
Timed out waiting for 2 nodes to be created for MachineDeployment capz-conf-ttqvdh/capz-conf-ttqvdh-md-win
Expected
    <int>: 0
to equal
    <int>: 2
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/machinedeployment_helpers.go:131 @ 01/31/23 16:31:40.085
< Exit [It] conformance-tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:100 @ 01/31/23 16:31:40.085 (31m46.656s)
> Enter [AfterEach] Conformance Tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:242 @ 01/31/23 16:31:40.085
Jan 31 16:31:40.085: INFO: FAILED!
Jan 31 16:31:40.085: INFO: Cleaning up after "Conformance Tests conformance-tests" spec
STEP: Dumping logs from the "capz-conf-ttqvdh" workload cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/31/23 16:31:40.085
Jan 31 16:31:40.085: INFO: Dumping workload cluster capz-conf-ttqvdh/capz-conf-ttqvdh logs
Jan 31 16:31:40.128: INFO: Collecting logs for Linux node capz-conf-ttqvdh-control-plane-hqqpr in cluster capz-conf-ttqvdh in namespace capz-conf-ttqvdh
Jan 31 16:32:00.737: INFO: Collecting boot logs for AzureMachine capz-conf-ttqvdh-control-plane-hqqpr
Jan 31 16:32:01.688: INFO: Unable to collect logs as node doesn't have addresses
Jan 31 16:32:01.688: INFO: Collecting logs for Windows node capz-conf-ttqvdh-md-win-f68px in cluster capz-conf-ttqvdh in namespace capz-conf-ttqvdh
Jan 31 16:36:17.645: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-conf-ttqvdh-md-win-f68px to /logs/artifacts/clusters/capz-conf-ttqvdh/machines/capz-conf-ttqvdh-md-win-8bd8475b5-6sxlt/crashdumps.tar
Jan 31 16:36:18.170: INFO: Collecting boot logs for AzureMachine capz-conf-ttqvdh-md-win-f68px
Jan 31 16:36:18.201: INFO: Unable to collect logs as node doesn't have addresses
Jan 31 16:36:18.201: INFO: Collecting logs for Windows node capz-conf-ttqvdh-md-win-rg8qc in cluster capz-conf-ttqvdh in namespace capz-conf-ttqvdh
Jan 31 16:40:28.369: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-conf-ttqvdh-md-win-rg8qc to /logs/artifacts/clusters/capz-conf-ttqvdh/machines/capz-conf-ttqvdh-md-win-8bd8475b5-jbfws/crashdumps.tar
Jan 31 16:40:28.897: INFO: Collecting boot logs for AzureMachine capz-conf-ttqvdh-md-win-rg8qc
Jan 31 16:40:28.914: INFO: Dumping workload cluster capz-conf-ttqvdh/capz-conf-ttqvdh kube-system pod logs
Jan 31 16:40:29.380: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-546454464f-nhl98, container calico-apiserver
Jan 31 16:40:29.380: INFO: Describing Pod calico-apiserver/calico-apiserver-546454464f-nhl98
Jan 31 16:40:29.454: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-546454464f-z2gbf, container calico-apiserver
Jan 31 16:40:29.454: INFO: Describing Pod calico-apiserver/calico-apiserver-546454464f-z2gbf
Jan 31 16:40:29.537: INFO: Creating log watcher for controller calico-system/calico-kube-controllers-5f9dc85578-9tzhw, container calico-kube-controllers
Jan 31 16:40:29.537: INFO: Describing Pod calico-system/calico-kube-controllers-5f9dc85578-9tzhw
Jan 31 16:40:29.616: INFO: Creating log watcher for controller calico-system/calico-node-wddtq, container calico-node
Jan 31 16:40:29.616: INFO: Describing Pod calico-system/calico-node-wddtq
Jan 31 16:40:29.710: INFO: Creating log watcher for controller calico-system/calico-typha-6f6c8589d7-wnpxb, container calico-typha
Jan 31 16:40:29.711: INFO: Describing Pod calico-system/calico-typha-6f6c8589d7-wnpxb
Jan 31 16:40:29.786: INFO: Describing Pod calico-system/csi-node-driver-6hwx5
Jan 31 16:40:29.786: INFO: Creating log watcher for controller calico-system/csi-node-driver-6hwx5, container csi-node-driver-registrar
Jan 31 16:40:29.786: INFO: Creating log watcher for controller calico-system/csi-node-driver-6hwx5, container calico-csi
Jan 31 16:40:30.140: INFO: Describing Pod kube-system/coredns-565d847f94-w6ssp
Jan 31 16:40:30.140: INFO: Creating log watcher for controller kube-system/coredns-565d847f94-w6ssp, container coredns
Jan 31 16:40:30.533: INFO: Creating log watcher for controller kube-system/coredns-565d847f94-xjvwt, container coredns
Jan 31 16:40:30.533: INFO: Describing Pod kube-system/coredns-565d847f94-xjvwt
Jan 31 16:40:30.935: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-6b9657f4f7-nppsl, container csi-provisioner
Jan 31 16:40:30.935: INFO: Describing Pod kube-system/csi-azuredisk-controller-6b9657f4f7-nppsl
Jan 31 16:40:30.935: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-6b9657f4f7-nppsl, container csi-resizer
Jan 31 16:40:30.935: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-6b9657f4f7-nppsl, container csi-attacher
Jan 31 16:40:30.935: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-6b9657f4f7-nppsl, container csi-snapshotter
Jan 31 16:40:30.935: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-6b9657f4f7-nppsl, container liveness-probe
Jan 31 16:40:30.935: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-6b9657f4f7-nppsl, container azuredisk
Jan 31 16:40:31.331: INFO: Describing Pod kube-system/csi-azuredisk-node-qwkdm
Jan 31 16:40:31.331: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-qwkdm, container azuredisk
Jan 31 16:40:31.331: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-qwkdm, container node-driver-registrar
Jan 31 16:40:31.331: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-qwkdm, container liveness-probe
Jan 31 16:40:31.729: INFO: Creating log watcher for controller kube-system/etcd-capz-conf-ttqvdh-control-plane-hqqpr, container etcd
Jan 31 16:40:31.729: INFO: Describing Pod kube-system/etcd-capz-conf-ttqvdh-control-plane-hqqpr
Jan 31 16:40:32.130: INFO: Creating log watcher for controller kube-system/kube-apiserver-capz-conf-ttqvdh-control-plane-hqqpr, container kube-apiserver
Jan 31 16:40:32.130: INFO: Describing Pod kube-system/kube-apiserver-capz-conf-ttqvdh-control-plane-hqqpr
Jan 31 16:40:32.529: INFO: Creating log watcher for controller kube-system/kube-controller-manager-capz-conf-ttqvdh-control-plane-hqqpr, container kube-controller-manager
Jan 31 16:40:32.529: INFO: Describing Pod kube-system/kube-controller-manager-capz-conf-ttqvdh-control-plane-hqqpr
Jan 31 16:40:32.930: INFO: Describing Pod kube-system/kube-proxy-fb8qc
Jan 31 16:40:32.930: INFO: Creating log watcher for controller kube-system/kube-proxy-fb8qc, container kube-proxy
Jan 31 16:40:33.330: INFO: Creating log watcher for controller kube-system/kube-scheduler-capz-conf-ttqvdh-control-plane-hqqpr, container kube-scheduler
Jan 31 16:40:33.330: INFO: Describing Pod kube-system/kube-scheduler-capz-conf-ttqvdh-control-plane-hqqpr
Jan 31 16:40:33.731: INFO: Describing Pod kube-system/metrics-server-76f7667fbf-z4s5x
Jan 31 16:40:33.731: INFO: Creating log watcher for controller kube-system/metrics-server-76f7667fbf-z4s5x, container metrics-server
Jan 31 16:40:34.132: INFO: Fetching kube-system pod logs took 5.217655531s
Jan 31 16:40:34.132: INFO: Dumping workload cluster capz-conf-ttqvdh/capz-conf-ttqvdh Azure activity log
Jan 31 16:40:34.132: INFO: Describing Pod tigera-operator/tigera-operator-64db64cb98-68vlp
Jan 31 16:40:34.132: INFO: Creating log watcher for controller tigera-operator/tigera-operator-64db64cb98-68vlp, container tigera-operator
Jan 31 16:40:38.506: INFO: Fetching activity logs took 4.374536268s
Jan 31 16:40:38.506: INFO: Dumping all the Cluster API resources in the "capz-conf-ttqvdh" namespace
Jan 31 16:40:38.850: INFO: Deleting all clusters in the capz-conf-ttqvdh namespace
STEP: Deleting cluster capz-conf-ttqvdh - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/31/23 16:40:38.867
INFO: Waiting for the Cluster capz-conf-ttqvdh/capz-conf-ttqvdh to be deleted
STEP: Waiting for cluster capz-conf-ttqvdh to be deleted - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/31/23 16:40:38.882
STEP: Redacting sensitive information from logs - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:212 @ 01/31/23 17:10:38.883
[FAILED] Timed out after 1800.001s.
Expected
    <bool>: false
to be true
In [AfterEach] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_helpers.go:176 @ 01/31/23 17:10:51.612
< Exit [AfterEach] Conformance Tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:242 @ 01/31/23 17:10:51.612 (39m11.527s)
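The AfterEach failure above ("Expected <bool>: false to be true" at cluster_helpers.go:176) is the shape a Gomega boolean poll produces when cluster deletion never completes within its 1800s window. Below is a minimal sketch of such a wait, assuming a controller-runtime client; the helper name `waitForClusterDeleted` and intervals are illustrative, not the framework's exact code.

```go
package e2e

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// waitForClusterDeleted polls until the Cluster object can no longer be
// found, which is when deletion (including finalizers) has fully completed.
// A 30-minute timeout matches the 1800s seen in the failure above.
func waitForClusterDeleted(ctx context.Context, c client.Client, namespace, name string) {
	Eventually(func() bool {
		cluster := &clusterv1.Cluster{}
		err := c.Get(ctx, client.ObjectKey{Namespace: namespace, Name: name}, cluster)
		// Any result other than NotFound (object still present, or a
		// transient API error) keeps the poll going until the timeout.
		return apierrors.IsNotFound(err)
	}, 30*time.Minute, 10*time.Second).Should(BeTrue(),
		"timed out waiting for Cluster %s/%s to be deleted", namespace, name)
}
```

A poll like this returning false for the full 30 minutes means the Cluster object still existed at timeout, typically because infrastructure teardown of the never-provisioned Windows machines was still blocking the Cluster's finalizer.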
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capz-e2e [It] Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e [It] Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e [It] Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e [It] Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node
capz-e2e [It] Workload cluster creation Creating a VMSS cluster [REQUIRED] with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a cluster that uses the external cloud provider and external azurediskcsi driver [OPTIONAL] with a 1 control plane nodes and 2 worker nodes
capz-e2e [It] Workload cluster creation Creating a cluster that uses the external cloud provider and machinepools [OPTIONAL] with 1 control plane node and 1 machinepool
capz-e2e [It] Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e [It] Workload cluster creation Creating a highly available cluster [REQUIRED] With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a ipv6 control-plane cluster [REQUIRED] With ipv6 worker node
capz-e2e [It] Workload cluster creation Creating an AKS cluster [EXPERIMENTAL][Managed Kubernetes] with a single control plane node and 1 node
capz-e2e [It] Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node
capz-e2e [It] [K8s-Upgrade] Running the CSI migration tests [CSI Migration] Running CSI migration test CSI=external CCM=external AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with out-of-tree cloud provider
capz-e2e [It] [K8s-Upgrade] Running the CSI migration tests [CSI Migration] Running CSI migration test CSI=external CCM=internal AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with intree cloud provider
capz-e2e [It] [K8s-Upgrade] Running the CSI migration tests [CSI Migration] Running CSI migration test CSI=internal CCM=internal AzureDiskCSIMigration=false: upgrade to v1.23 should create volumes dynamically with intree cloud provider
... skipping 486 lines ...
------------------------------
Conformance Tests conformance-tests
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:100
INFO: Cluster name is capz-conf-ttqvdh
STEP: Creating namespace "capz-conf-ttqvdh" for hosting the cluster @ 01/31/23 15:59:53.39
Jan 31 15:59:53.390: INFO: starting to create namespace for hosting the "capz-conf-ttqvdh" test spec
2023/01/31 15:59:53 failed trying to get namespace (capz-conf-ttqvdh):namespaces "capz-conf-ttqvdh" not found
INFO: Creating namespace capz-conf-ttqvdh
INFO: Creating event watcher for namespace "capz-conf-ttqvdh"
conformance-tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:102 @ 01/31/23 15:59:53.429
conformance-tests
Name | N | Min | Median | Mean | StdDev | Max
INFO: Creating the workload cluster with name "capz-conf-ttqvdh" using the "conformance-ci-artifacts-windows-containerd" template (Kubernetes v1.25.7-rc.0.14+f6fb2e8bdbec52, 1 control-plane machines, 0 worker machines)
... skipping 107 lines ...
STEP: Waiting for the control plane to be ready @ 01/31/23 16:06:40.029
STEP: Checking all the control plane machines are in the expected failure domains @ 01/31/23 16:06:40.037
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist @ 01/31/23 16:06:40.063
STEP: Checking all the machines controlled by capz-conf-ttqvdh-md-0 are in the "<None>" failure domain @ 01/31/23 16:06:40.074
STEP: Waiting for the workload nodes to exist @ 01/31/23 16:06:40.084
[FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/machinedeployment_helpers.go:131 @ 01/31/23 16:31:40.085
Jan 31 16:31:40.085: INFO: FAILED!
Jan 31 16:31:40.085: INFO: Cleaning up after "Conformance Tests conformance-tests" spec
STEP: Dumping logs from the "capz-conf-ttqvdh" workload cluster @ 01/31/23 16:31:40.085
Jan 31 16:31:40.085: INFO: Dumping workload cluster capz-conf-ttqvdh/capz-conf-ttqvdh logs
Jan 31 16:31:40.128: INFO: Collecting logs for Linux node capz-conf-ttqvdh-control-plane-hqqpr in cluster capz-conf-ttqvdh in namespace capz-conf-ttqvdh
Jan 31 16:32:00.737: INFO: Collecting boot logs for AzureMachine capz-conf-ttqvdh-control-plane-hqqpr
Jan 31 16:32:01.688: INFO: Unable to collect logs as node doesn't have addresses
Jan 31 16:32:01.688: INFO: Collecting logs for Windows node capz-conf-ttqvdh-md-win-f68px in cluster capz-conf-ttqvdh in namespace capz-conf-ttqvdh
Jan 31 16:36:17.645: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-conf-ttqvdh-md-win-f68px to /logs/artifacts/clusters/capz-conf-ttqvdh/machines/capz-conf-ttqvdh-md-win-8bd8475b5-6sxlt/crashdumps.tar
Jan 31 16:36:18.170: INFO: Collecting boot logs for AzureMachine capz-conf-ttqvdh-md-win-f68px
Failed to get logs for Machine capz-conf-ttqvdh-md-win-8bd8475b5-6sxlt, Cluster capz-conf-ttqvdh/capz-conf-ttqvdh: [dialing from control plane to target node at capz-conf-ttqvdh-md-win-f68px: ssh: rejected: connect failed (Temporary failure in name resolution), Unable to collect VM Boot Diagnostic logs: AzureMachine provider ID is nil]
Jan 31 16:36:18.201: INFO: Unable to collect logs as node doesn't have addresses
Jan 31 16:36:18.201: INFO: Collecting logs for Windows node capz-conf-ttqvdh-md-win-rg8qc in cluster capz-conf-ttqvdh in namespace capz-conf-ttqvdh
Jan 31 16:40:28.369: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-conf-ttqvdh-md-win-rg8qc to /logs/artifacts/clusters/capz-conf-ttqvdh/machines/capz-conf-ttqvdh-md-win-8bd8475b5-jbfws/crashdumps.tar
Jan 31 16:40:28.897: INFO: Collecting boot logs for AzureMachine capz-conf-ttqvdh-md-win-rg8qc
Failed to get logs for Machine capz-conf-ttqvdh-md-win-8bd8475b5-jbfws, Cluster capz-conf-ttqvdh/capz-conf-ttqvdh: [dialing from control plane to target node at capz-conf-ttqvdh-md-win-rg8qc: ssh: rejected: connect failed (Temporary failure in name resolution), Unable to collect VM Boot Diagnostic logs: AzureMachine provider ID is nil]
Jan 31 16:40:28.914: INFO: Dumping workload cluster capz-conf-ttqvdh/capz-conf-ttqvdh kube-system pod logs
Jan 31 16:40:29.380: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-546454464f-nhl98, container calico-apiserver
Jan 31 16:40:29.380: INFO: Describing Pod calico-apiserver/calico-apiserver-546454464f-nhl98
Jan 31 16:40:29.454: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-546454464f-z2gbf, container calico-apiserver
Jan 31 16:40:29.454: INFO: Describing Pod calico-apiserver/calico-apiserver-546454464f-z2gbf
Jan 31 16:40:29.537: INFO: Creating log watcher for controller calico-system/calico-kube-controllers-5f9dc85578-9tzhw, container calico-kube-controllers
... skipping 40 lines ...
Jan 31 16:40:38.506: INFO: Dumping all the Cluster API resources in the "capz-conf-ttqvdh" namespace
Jan 31 16:40:38.850: INFO: Deleting all clusters in the capz-conf-ttqvdh namespace
STEP: Deleting cluster capz-conf-ttqvdh @ 01/31/23 16:40:38.867
INFO: Waiting for the Cluster capz-conf-ttqvdh/capz-conf-ttqvdh to be deleted
STEP: Waiting for cluster capz-conf-ttqvdh to be deleted @ 01/31/23 16:40:38.882
STEP: Redacting sensitive information from logs @ 01/31/23 17:10:38.883
[FAILED] in [AfterEach] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_helpers.go:176 @ 01/31/23 17:10:51.612
• [FAILED] [4258.223 seconds]
Conformance Tests [It] conformance-tests
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:100
[FAILED] Timed out after 1500.000s.
Timed out waiting for 2 nodes to be created for MachineDeployment capz-conf-ttqvdh/capz-conf-ttqvdh-md-win
Expected
    <int>: 0
to equal
    <int>: 2
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/machinedeployment_helpers.go:131 @ 01/31/23 16:31:40.085
... skipping 20 lines ...
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
[ReportAfterSuite] PASSED [0.003 seconds]
------------------------------

Summarizing 1 Failure:
[FAIL] Conformance Tests [It] conformance-tests
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/machinedeployment_helpers.go:131

Ran 1 of 23 Specs in 4421.561 seconds
FAIL! -- 0 Passed | 1 Failed | 0 Pending | 22 Skipped
--- FAIL: TestE2E (4421.57s)
FAIL

You're using deprecated Ginkgo functionality:
=============================================
CurrentGinkgoTestDescription() is deprecated in Ginkgo V2. Use CurrentSpecReport() instead.
Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-currentginkgotestdescription
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:278
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:281
To silence deprecations that can be silenced set the following environment variable:
ACK_GINKGO_DEPRECATIONS=2.6.0

Ginkgo ran 1 suite in 1h16m32.852432374s
Test Suite Failed
make[3]: *** [Makefile:655: test-e2e-run] Error 1
make[3]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: *** [Makefile:670: test-e2e-skip-push] Error 2
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[1]: *** [Makefile:686: test-conformance] Error 2
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:696: test-windows-upstream] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...