PR | jackfrancis: helm gpu-operator instead of ClusterResourceSet
Result | FAILURE
Tests | 1 failed / 26 succeeded
Started |
Elapsed | 1h0m
Revision | cbc5593db41cf85e77f5bb760b8207cfcf56c80e
Refs | 3099
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sWorkload\scluster\screation\sCreating\sa\scluster\sthat\suses\sthe\sexternal\scloud\sprovider\sand\sexternal\sazurediskcsi\sdriver\s\[OPTIONAL\]\swith\sa\s1\scontrol\splane\snodes\sand\s2\sworker\snodes$'
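Note that `--ginkgo.focus` is matched as a regular expression, which is why every space in the spec name is escaped as `\s` in the command above. A minimal sketch of producing that escaping from a plain spec-name fragment (the `FOCUS` value here is illustrative, not part of the job config):

```shell
# Spec-name fragment as it appears in the report (illustrative):
FOCUS='Workload cluster creation Creating a cluster'
# ginkgo treats --ginkgo.focus as a regex, so escape spaces as \s:
ESCAPED=$(printf '%s' "$FOCUS" | sed 's/ /\\s/g')
echo "$ESCAPED"
```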
[FAILED] Timed out after 1500.000s.
Timed out waiting for 2 nodes to be created for MachineDeployment capz-e2e-5a6ef3/capz-e2e-5a6ef3-oot-md-0
Expected
    <int>: 0
to equal
    <int>: 2
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinedeployment_helpers.go:131 @ 01/26/23 03:56:08.552
from junit.e2e_suite.1.xml
2023/01/26 03:24:28 failed trying to get namespace (capz-e2e-5a6ef3):namespaces "capz-e2e-5a6ef3" not found
cluster.cluster.x-k8s.io/capz-e2e-5a6ef3-oot created
azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-5a6ef3-oot created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-5a6ef3-oot-control-plane created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-5a6ef3-oot-control-plane created
machinedeployment.cluster.x-k8s.io/capz-e2e-5a6ef3-oot-md-0 created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-5a6ef3-oot-md-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capz-e2e-5a6ef3-oot-md-0 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created
felixconfiguration.crd.projectcalico.org/default configured
> Enter [BeforeEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:56 @ 01/26/23 03:24:28.002
INFO: "" started at Thu, 26 Jan 2023 03:24:28 UTC on Ginkgo node 4 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
STEP: Creating namespace "capz-e2e-5a6ef3" for hosting the cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/26/23 03:24:28.002
Jan 26 03:24:28.002: INFO: starting to create namespace for hosting the "capz-e2e-5a6ef3" test spec
INFO: Creating namespace capz-e2e-5a6ef3
INFO: Creating event watcher for namespace "capz-e2e-5a6ef3"
Jan 26 03:24:28.118: INFO: Creating cluster identity secret "cluster-identity-secret"
< Exit [BeforeEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:56 @ 01/26/23 03:24:28.167 (165ms)
> Enter [It] with a 1 control plane nodes and 2 worker nodes - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:639 @ 01/26/23 03:24:28.167
STEP: using user-assigned identity - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:640 @ 01/26/23 03:24:28.167
INFO: Cluster name is capz-e2e-5a6ef3-oot
INFO: Creating the workload cluster with name "capz-e2e-5a6ef3-oot" using the "external-cloud-provider" template (Kubernetes v1.24.10, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster capz-e2e-5a6ef3-oot --infrastructure (default) --kubernetes-version v1.24.10 --control-plane-machine-count 1 --worker-machine-count 2 --flavor external-cloud-provider
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/cluster_helpers.go:134 @ 01/26/23 03:24:31.371
INFO: Waiting for control plane to be initialized
STEP: Installing cloud-provider-azure components via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:45 @ 01/26/23 03:26:51.498
Jan 26 03:28:38.237: INFO: getting history for release cloud-provider-azure-oot
Jan 26 03:28:38.347: INFO: Release cloud-provider-azure-oot does not exist, installing it
Jan 26 03:28:41.817: INFO: creating 1 resource(s)
Jan 26 03:28:42.070: INFO: creating 10 resource(s)
Jan 26 03:28:42.910: INFO: Install complete
STEP: Installing Calico CNI via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:51 @ 01/26/23 03:28:42.91
STEP: Configuring calico CNI helm chart for IPv4 configuration - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:112 @ 01/26/23 03:28:42.91
Jan 26 03:28:43.037: INFO: getting history for release projectcalico
Jan 26 03:28:43.147: INFO: Release projectcalico does not exist, installing it
Jan 26 03:28:43.853: INFO: creating 1 resource(s)
Jan 26 03:28:43.985: INFO: creating 1 resource(s)
Jan 26 03:28:44.116: INFO: creating 1 resource(s)
Jan 26 03:28:44.238: INFO: creating 1 resource(s)
Jan 26 03:28:44.372: INFO: creating 1 resource(s)
Jan 26 03:28:44.503: INFO: creating 1 resource(s)
Jan 26 03:28:44.647: INFO: creating 1 resource(s)
Jan 26 03:28:44.787: INFO: creating 1 resource(s)
Jan 26 03:28:44.921: INFO: creating 1 resource(s)
Jan 26 03:28:45.073: INFO: creating 1 resource(s)
Jan 26 03:28:45.204: INFO: creating 1 resource(s)
Jan 26 03:28:45.325: INFO: creating 1 resource(s)
Jan 26 03:28:45.445: INFO: creating 1 resource(s)
Jan 26 03:28:45.572: INFO: creating 1 resource(s)
Jan 26 03:28:45.695: INFO: creating 1 resource(s)
Jan 26 03:28:45.825: INFO: creating 1 resource(s)
Jan 26 03:28:45.967: INFO: creating 1 resource(s)
Jan 26 03:28:46.102: INFO: creating 1 resource(s)
Jan 26 03:28:46.244: INFO: creating 1 resource(s)
Jan 26 03:28:46.444: INFO: creating 1 resource(s)
Jan 26 03:28:47.014: INFO: creating 1 resource(s)
Jan 26 03:28:47.160: INFO: Clearing discovery cache
Jan 26 03:28:47.160: INFO: beginning wait for 21 resources with timeout of 1m0s
Jan 26 03:28:52.420: INFO: creating 1 resource(s)
Jan 26 03:28:53.146: INFO: creating 6 resource(s)
Jan 26 03:28:54.462: INFO: Install complete
STEP: Waiting for Ready tigera-operator deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:60 @ 01/26/23 03:28:55.262
STEP: waiting for deployment tigera-operator/tigera-operator to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/26/23 03:28:55.705
Jan 26 03:28:55.705: INFO: starting to wait for deployment to become available
Jan 26 03:29:05.924: INFO: Deployment tigera-operator/tigera-operator is now available, took 10.218142525s
STEP: Waiting for Ready calico-system deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:74 @ 01/26/23 03:29:07.147
STEP: waiting for deployment calico-system/calico-kube-controllers to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/26/23 03:29:07.709
Jan 26 03:29:07.709: INFO: starting to wait for deployment to become available
Jan 26 03:30:08.483: INFO: Deployment calico-system/calico-kube-controllers is now available, took 1m0.773197762s
STEP: waiting for deployment calico-system/calico-typha to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/26/23 03:30:09.473
Jan 26 03:30:09.473: INFO: starting to wait for deployment to become available
Jan 26 03:30:09.584: INFO: Deployment calico-system/calico-typha is now available, took 110.102729ms
STEP: Waiting for Ready calico-apiserver deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:79 @ 01/26/23 03:30:09.584
STEP: waiting for deployment calico-apiserver/calico-apiserver to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/26/23 03:30:10.374
Jan 26 03:30:10.374: INFO: starting to wait for deployment to become available
Jan 26 03:30:30.706: INFO: Deployment calico-apiserver/calico-apiserver is now available, took 20.331638909s
STEP: Waiting for Ready cloud-controller-manager deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:55 @ 01/26/23 03:30:30.706
STEP: waiting for deployment kube-system/cloud-controller-manager to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/26/23 03:30:31.263
Jan 26 03:30:31.263: INFO: starting to wait for deployment to become available
Jan 26 03:30:31.379: INFO: Deployment kube-system/cloud-controller-manager is now available, took 115.436264ms
INFO: Waiting for the first control plane machine managed by capz-e2e-5a6ef3/capz-e2e-5a6ef3-oot-control-plane to be provisioned
STEP: Waiting for one control plane node to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:133 @ 01/26/23 03:30:31.404
STEP: Installing azure-disk CSI driver components via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:65 @ 01/26/23 03:30:31.411
Jan 26 03:30:31.536: INFO: getting history for release azuredisk-csi-driver-oot
Jan 26 03:30:31.646: INFO: Release azuredisk-csi-driver-oot does not exist, installing it
Jan 26 03:30:36.050: INFO: creating 1 resource(s)
Jan 26 03:30:36.414: INFO: creating 18 resource(s)
Jan 26 03:30:37.300: INFO: Install complete
STEP: Waiting for Ready csi-azuredisk-controller deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:75 @ 01/26/23 03:30:37.3
STEP: waiting for deployment kube-system/csi-azuredisk-controller to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/26/23 03:30:37.769
Jan 26 03:30:37.769: INFO: starting to wait for deployment to become available
Jan 26 03:31:08.509: INFO: Deployment kube-system/csi-azuredisk-controller is now available, took 30.740379735s
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane capz-e2e-5a6ef3/capz-e2e-5a6ef3-oot-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:165 @ 01/26/23 03:31:08.524
STEP: Checking all the control plane machines are in the expected failure domains - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:196 @ 01/26/23 03:31:08.53
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinedeployment_helpers.go:102 @ 01/26/23 03:31:08.551
[FAILED] Timed out after 1500.000s.
Timed out waiting for 2 nodes to be created for MachineDeployment capz-e2e-5a6ef3/capz-e2e-5a6ef3-oot-md-0
Expected
    <int>: 0
to equal
    <int>: 2
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinedeployment_helpers.go:131 @ 01/26/23 03:56:08.552
< Exit [It] with a 1 control plane nodes and 2 worker nodes - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:639 @ 01/26/23 03:56:08.552 (31m40.385s)
> Enter [AfterEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:117 @ 01/26/23 03:56:08.552
Jan 26 03:56:08.552: INFO: FAILED!
Jan 26 03:56:08.552: INFO: Cleaning up after "Workload cluster creation Creating a cluster that uses the external cloud provider and external azurediskcsi driver [OPTIONAL] with a 1 control plane nodes and 2 worker nodes" spec
STEP: Dumping logs from the "capz-e2e-5a6ef3-oot" workload cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/26/23 03:56:08.552
Jan 26 03:56:08.552: INFO: Dumping workload cluster capz-e2e-5a6ef3/capz-e2e-5a6ef3-oot logs
Jan 26 03:56:08.597: INFO: Collecting logs for Linux node capz-e2e-5a6ef3-oot-control-plane-5rtfl in cluster capz-e2e-5a6ef3-oot in namespace capz-e2e-5a6ef3
Jan 26 03:56:25.545: INFO: Collecting boot logs for AzureMachine capz-e2e-5a6ef3-oot-control-plane-5rtfl
Jan 26 03:56:27.288: INFO: Collecting logs for Linux node capz-e2e-5a6ef3-oot-md-0-ngxlq in cluster capz-e2e-5a6ef3-oot in namespace capz-e2e-5a6ef3
Jan 26 03:56:36.982: INFO: Collecting boot logs for AzureMachine capz-e2e-5a6ef3-oot-md-0-ngxlq
Jan 26 03:56:37.627: INFO: Collecting logs for Linux node capz-e2e-5a6ef3-oot-md-0-s2h9h in cluster capz-e2e-5a6ef3-oot in namespace capz-e2e-5a6ef3
Jan 26 03:56:47.742: INFO: Collecting boot logs for AzureMachine capz-e2e-5a6ef3-oot-md-0-s2h9h
Jan 26 03:56:48.503: INFO: Dumping workload cluster capz-e2e-5a6ef3/capz-e2e-5a6ef3-oot kube-system pod logs
Jan 26 03:56:49.683: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-677c89d6d7-68jkf, container calico-apiserver
Jan 26 03:56:49.683: INFO: Collecting events for Pod calico-apiserver/calico-apiserver-677c89d6d7-68jkf
Jan 26 03:56:49.683: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-677c89d6d7-ktb5f, container calico-apiserver
Jan 26 03:56:49.684: INFO: Collecting events for Pod calico-apiserver/calico-apiserver-677c89d6d7-ktb5f
Jan 26 03:56:49.824: INFO: Collecting events for Pod calico-system/calico-kube-controllers-594d54f99-wzm59
Jan 26 03:56:49.824: INFO: Creating log watcher for controller calico-system/calico-kube-controllers-594d54f99-wzm59, container calico-kube-controllers
Jan 26 03:56:49.824: INFO: Creating log watcher for controller calico-system/calico-typha-5fc9c84f67-np5d8, container calico-typha
Jan 26 03:56:49.825: INFO: Collecting events for Pod calico-system/calico-typha-5fc9c84f67-np5d8
Jan 26 03:56:49.825: INFO: Creating log watcher for controller calico-system/csi-node-driver-9hv8m, container calico-csi
Jan 26 03:56:49.825: INFO: Creating log watcher for controller calico-system/calico-node-rkcsm, container calico-node
Jan 26 03:56:49.826: INFO: Creating log watcher for controller calico-system/csi-node-driver-9hv8m, container csi-node-driver-registrar
Jan 26 03:56:49.826: INFO: Collecting events for Pod calico-system/calico-node-rkcsm
Jan 26 03:56:49.826: INFO: Collecting events for Pod calico-system/csi-node-driver-9hv8m
Jan 26 03:56:49.994: INFO: Collecting events for Pod kube-system/cloud-controller-manager-5f5c8f4d78-zmp6n
Jan 26 03:56:49.995: INFO: Collecting events for Pod kube-system/csi-azuredisk-controller-545d478dbf-7kqc5
Jan 26 03:56:49.995: INFO: Creating log watcher for controller kube-system/cloud-node-manager-wd7vb, container cloud-node-manager
Jan 26 03:56:49.995: INFO: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-5a6ef3-oot-control-plane-5rtfl, container kube-apiserver
Jan 26 03:56:49.995: INFO: Creating log watcher for controller kube-system/cloud-controller-manager-5f5c8f4d78-zmp6n, container cloud-controller-manager
Jan 26 03:56:49.995: INFO: Collecting events for Pod kube-system/kube-apiserver-capz-e2e-5a6ef3-oot-control-plane-5rtfl
Jan 26 03:56:49.995: INFO: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-5a6ef3-oot-control-plane-5rtfl, container kube-controller-manager
Jan 26 03:56:49.996: INFO: Collecting events for Pod kube-system/cloud-node-manager-wd7vb
Jan 26 03:56:49.996: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-8zwlc, container liveness-probe
Jan 26 03:56:49.996: INFO: Creating log watcher for controller kube-system/coredns-57575c5f89-57d8l, container coredns
Jan 26 03:56:49.996: INFO: Collecting events for Pod kube-system/csi-azuredisk-node-8zwlc
Jan 26 03:56:49.996: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-8zwlc, container node-driver-registrar
Jan 26 03:56:49.996: INFO: Collecting events for Pod kube-system/kube-controller-manager-capz-e2e-5a6ef3-oot-control-plane-5rtfl
Jan 26 03:56:49.996: INFO: Creating log watcher for controller kube-system/kube-proxy-72pc7, container kube-proxy
Jan 26 03:56:49.997: INFO: Creating log watcher for controller kube-system/etcd-capz-e2e-5a6ef3-oot-control-plane-5rtfl, container etcd
Jan 26 03:56:49.997: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-8zwlc, container azuredisk
Jan 26 03:56:49.997: INFO: Collecting events for Pod kube-system/coredns-57575c5f89-57d8l
Jan 26 03:56:49.997: INFO: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-5a6ef3-oot-control-plane-5rtfl, container kube-scheduler
Jan 26 03:56:49.997: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-7kqc5, container csi-attacher
Jan 26 03:56:49.997: INFO: Collecting events for Pod kube-system/coredns-57575c5f89-5mwnx
Jan 26 03:56:49.997: INFO: Collecting events for Pod kube-system/kube-scheduler-capz-e2e-5a6ef3-oot-control-plane-5rtfl
Jan 26 03:56:49.997: INFO: Collecting events for Pod kube-system/etcd-capz-e2e-5a6ef3-oot-control-plane-5rtfl
Jan 26 03:56:49.997: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-7kqc5, container csi-snapshotter
Jan 26 03:56:49.997: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-7kqc5, container csi-provisioner
Jan 26 03:56:49.997: INFO: Creating log watcher for controller kube-system/coredns-57575c5f89-5mwnx, container coredns
Jan 26 03:56:49.997: INFO: Collecting events for Pod kube-system/kube-proxy-72pc7
Jan 26 03:56:49.998: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-7kqc5, container liveness-probe
Jan 26 03:56:49.998: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-7kqc5, container csi-resizer
Jan 26 03:56:49.998: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-7kqc5, container azuredisk
Jan 26 03:56:50.144: INFO: Fetching kube-system pod logs took 1.640803939s
Jan 26 03:56:50.144: INFO: Dumping workload cluster capz-e2e-5a6ef3/capz-e2e-5a6ef3-oot Azure activity log
Jan 26 03:56:50.144: INFO: Creating log watcher for controller tigera-operator/tigera-operator-65d6bf4d4f-6xbr9, container tigera-operator
Jan 26 03:56:50.144: INFO: Collecting events for Pod tigera-operator/tigera-operator-65d6bf4d4f-6xbr9
Jan 26 03:56:53.922: INFO: Fetching activity logs took 3.777578655s
Jan 26 03:56:53.922: INFO: Dumping all the Cluster API resources in the "capz-e2e-5a6ef3" namespace
Jan 26 03:56:54.229: INFO: Deleting all clusters in the capz-e2e-5a6ef3 namespace
STEP: Deleting cluster capz-e2e-5a6ef3-oot - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 01/26/23 03:56:54.245
INFO: Waiting for the Cluster capz-e2e-5a6ef3/capz-e2e-5a6ef3-oot to be deleted
STEP: Waiting for cluster capz-e2e-5a6ef3-oot to be deleted - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 01/26/23 03:56:54.256
Jan 26 04:02:44.431: INFO: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-5a6ef3
Jan 26 04:02:44.448: INFO: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:216 @ 01/26/23 04:02:45.06
INFO: "with a 1 control plane nodes and 2 worker nodes" started at Thu, 26 Jan 2023 04:04:07 UTC on Ginkgo node 4 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
< Exit [AfterEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:117 @ 01/26/23 04:04:07.722 (7m59.171s)
capz-e2e [It] Workload cluster creation Creating a Flatcar cluster [OPTIONAL] With Flatcar control-plane and worker nodes
capz-e2e [It] Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node
capz-e2e [It] Workload cluster creation Creating a cluster that uses the external cloud provider and machinepools [OPTIONAL] with 1 control plane node and 1 machinepool
capz-e2e [It] Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e [It] Workload cluster creation Creating a private cluster [OPTIONAL] Creates a public management cluster in a custom vnet
capz-e2e [It] Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [It] Conformance Tests conformance-tests
capz-e2e [It] Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capz-e2e [It] Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e [It] Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e [It] Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e [It] Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e [It] Workload cluster creation Creating a VMSS cluster [REQUIRED] with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a highly available cluster [REQUIRED] With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a ipv6 control-plane cluster [REQUIRED] With ipv6 worker node
capz-e2e [It] Workload cluster creation Creating an AKS cluster [Managed Kubernetes] with a single control plane node and 1 node
... skipping 628 lines ...
------------------------------
• [903.133 seconds]
Workload cluster creation Creating a Flatcar cluster [OPTIONAL] With Flatcar control-plane and worker nodes
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:321
Captured StdOut/StdErr Output >>
2023/01/26 03:24:28 failed trying to get namespace (capz-e2e-zs3uwq):namespaces "capz-e2e-zs3uwq" not found
cluster.cluster.x-k8s.io/capz-e2e-zs3uwq-flatcar created
azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-zs3uwq-flatcar created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-zs3uwq-flatcar-control-plane created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-zs3uwq-flatcar-control-plane created
machinedeployment.cluster.x-k8s.io/capz-e2e-zs3uwq-flatcar-md-0 created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-zs3uwq-flatcar-md-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capz-e2e-zs3uwq-flatcar-md-0 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created
felixconfiguration.crd.projectcalico.org/default configured
Failed to get logs for Machine capz-e2e-zs3uwq-flatcar-control-plane-6v47s, Cluster capz-e2e-zs3uwq/capz-e2e-zs3uwq-flatcar: [
  dialing public load balancer at capz-e2e-zs3uwq-flatcar-1ea2f215.westeurope.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.83.154:59462->20.13.97.25:22: read: connection reset by peer,
  dialing public load balancer at capz-e2e-zs3uwq-flatcar-1ea2f215.westeurope.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.83.154:59464->20.13.97.25:22: read: connection reset by peer]
Failed to get logs for Machine capz-e2e-zs3uwq-flatcar-md-0-568cfcb8b6-qlb6h, Cluster capz-e2e-zs3uwq/capz-e2e-zs3uwq-flatcar: [
  dialing public load balancer at capz-e2e-zs3uwq-flatcar-1ea2f215.westeurope.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.83.154:58098->20.13.97.25:22: read: connection reset by peer,
  dialing public load balancer at capz-e2e-zs3uwq-flatcar-1ea2f215.westeurope.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.83.154:58110->20.13.97.25:22: read: connection reset by peer,
  dialing public load balancer at capz-e2e-zs3uwq-flatcar-1ea2f215.westeurope.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.83.154:58106->20.13.97.25:22: read: connection reset by peer,
  dialing public load balancer at capz-e2e-zs3uwq-flatcar-1ea2f215.westeurope.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.83.154:58102->20.13.97.25:22: read: connection reset by peer,
  dialing public load balancer at capz-e2e-zs3uwq-flatcar-1ea2f215.westeurope.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.83.154:58096->20.13.97.25:22: read: connection reset by peer,
  dialing public load balancer at capz-e2e-zs3uwq-flatcar-1ea2f215.westeurope.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.83.154:58104->20.13.97.25:22: read: connection reset by peer,
  dialing public load balancer at capz-e2e-zs3uwq-flatcar-1ea2f215.westeurope.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.83.154:58112->20.13.97.25:22: read: connection reset by peer,
  dialing public load balancer at capz-e2e-zs3uwq-flatcar-1ea2f215.westeurope.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.83.154:58108->20.13.97.25:22: read: connection reset by peer,
  dialing public load balancer at capz-e2e-zs3uwq-flatcar-1ea2f215.westeurope.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.83.154:58100->20.13.97.25:22: read: connection reset by peer]
<< Captured StdOut/StdErr Output
Timeline >>
INFO: "" started at Thu, 26 Jan 2023 03:24:27 UTC on Ginkgo node 10 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
STEP: Creating namespace "capz-e2e-zs3uwq" for hosting the cluster @ 01/26/23 03:24:27.998
Jan 26 03:24:27.998: INFO: starting to create namespace for hosting the "capz-e2e-zs3uwq" test spec
... skipping 157 lines ...
------------------------------
• [1068.833 seconds]
Workload cluster creation Creating a cluster that uses the external cloud provider and machinepools [OPTIONAL] with 1 control plane node and 1 machinepool
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:575
Captured StdOut/StdErr Output >>
2023/01/26 03:24:28 failed trying to get namespace (capz-e2e-m7a47d):namespaces "capz-e2e-m7a47d" not found
cluster.cluster.x-k8s.io/capz-e2e-m7a47d-flex created
azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-m7a47d-flex created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-m7a47d-flex-control-plane created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-m7a47d-flex-control-plane created
machinepool.cluster.x-k8s.io/capz-e2e-m7a47d-flex-mp-0 created
azuremachinepool.infrastructure.cluster.x-k8s.io/capz-e2e-m7a47d-flex-mp-0 created
... skipping 2 lines ...
felixconfiguration.crd.projectcalico.org/default configured
W0126 03:34:44.050091 37418 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning
2023/01/26 03:35:35 [DEBUG] GET http://20.103.90.237
W0126 03:36:19.372228 37418 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning
Failed to get logs for MachinePool capz-e2e-m7a47d-flex-mp-0, Cluster capz-e2e-m7a47d/capz-e2e-m7a47d-flex: Unable to collect VMSS Boot Diagnostic logs: failed to parse resource id: parsing failed for /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-e2e-m7a47d-flex/providers/Microsoft.Compute. Invalid resource Id format
<< Captured StdOut/StdErr Output
Timeline >>
INFO: "" started at Thu, 26 Jan 2023 03:24:27 UTC on Ginkgo node 8 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
STEP: Creating namespace "capz-e2e-m7a47d" for hosting the cluster @ 01/26/23 03:24:27.999
Jan 26 03:24:27.999: INFO: starting to create namespace for hosting the "capz-e2e-m7a47d" test spec
... skipping 229 lines ...
------------------------------
• [1303.040 seconds]
Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:506
Captured StdOut/StdErr Output >>
2023/01/26 03:24:28 failed trying to get namespace (capz-e2e-ayniet):namespaces "capz-e2e-ayniet" not found
cluster.cluster.x-k8s.io/capz-e2e-ayniet-gpu created
azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-ayniet-gpu created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-ayniet-gpu-control-plane created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-ayniet-gpu-control-plane created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created
machinedeployment.cluster.x-k8s.io/capz-e2e-ayniet-gpu-md-0 created
... skipping 232 lines ...
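The `Unable to collect VMSS Boot Diagnostic logs` error above is caused by a truncated ARM resource ID: it ends at the provider namespace (`.../providers/Microsoft.Compute.`) with no resource type or name, so the resource-ID parser rejects it. A toy validator illustrating the structural rule being violated, based on the documented `/subscriptions/<sub>/resourceGroups/<rg>/providers/<namespace>/<type>/<name>` shape (this is an illustration, not the azure-sdk-for-go parser):

```go
package main

import (
	"fmt"
	"strings"
)

// validResourceID checks the minimal key/value segment structure of an
// ARM resource ID: segments come in pairs, and a resource needs at least
// subscriptions/<sub>/resourceGroups/<rg>/providers/<ns>/<type>/<name>.
func validResourceID(id string) bool {
	parts := strings.Split(strings.Trim(id, "/"), "/")
	if len(parts) < 8 || len(parts)%2 != 0 {
		return false // truncated or unpaired segments
	}
	return strings.EqualFold(parts[0], "subscriptions") &&
		strings.EqualFold(parts[2], "resourceGroups") &&
		strings.EqualFold(parts[4], "providers")
}

func main() {
	// The truncated ID from the log: it stops at the provider namespace.
	bad := "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-e2e-m7a47d-flex/providers/Microsoft.Compute."
	// A hypothetical complete form with resource type and name appended.
	good := "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-e2e-m7a47d-flex/providers/Microsoft.Compute/virtualMachineScaleSets/capz-e2e-m7a47d-flex-mp-0"
	fmt.Println(validResourceID(bad), validResourceID(good)) // false true
}
```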
[38;5;243m------------------------------[0m [38;5;10m• [1385.146 seconds][0m [0mWorkload cluster creation [38;5;243mCreating clusters using clusterclass [OPTIONAL] [38;5;10m[1mwith a single control plane node, one linux worker node, and one windows worker node[0m [38;5;243m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:908[0m [38;5;243mCaptured StdOut/StdErr Output >>[0m 2023/01/26 03:24:28 failed trying to get namespace (capz-e2e-09zm69):namespaces "capz-e2e-09zm69" not found clusterclass.cluster.x-k8s.io/ci-default created kubeadmcontrolplanetemplate.controlplane.cluster.x-k8s.io/ci-default-kubeadm-control-plane created azureclustertemplate.infrastructure.cluster.x-k8s.io/ci-default-azure-cluster created azuremachinetemplate.infrastructure.cluster.x-k8s.io/ci-default-control-plane created kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/ci-default-worker created azuremachinetemplate.infrastructure.cluster.x-k8s.io/ci-default-worker created ... skipping 5 lines ... 
clusterresourceset.addons.cluster.x-k8s.io/csi-proxy created
configmap/cni-capz-e2e-09zm69-cc-calico-windows created
configmap/csi-proxy-addon created
felixconfiguration.crd.projectcalico.org/default configured
Failed to get logs for Machine capz-e2e-09zm69-cc-jf7ww-v8w2d, Cluster capz-e2e-09zm69/capz-e2e-09zm69-cc: dialing public load balancer at capz-e2e-09zm69-cc-ab4a6fa1.westeurope.cloudapp.azure.com: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Failed to get logs for Machine capz-e2e-09zm69-cc-md-0-2cl7g-648d686b8-4wcj4, Cluster capz-e2e-09zm69/capz-e2e-09zm69-cc: dialing public load balancer at capz-e2e-09zm69-cc-ab4a6fa1.westeurope.cloudapp.azure.com: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Failed to get logs for Machine capz-e2e-09zm69-cc-md-win-pfz64-599746bc-dk6b2, Cluster capz-e2e-09zm69/capz-e2e-09zm69-cc: dialing public load balancer at capz-e2e-09zm69-cc-ab4a6fa1.westeurope.cloudapp.azure.com: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
<< Captured StdOut/StdErr Output
Timeline >>
INFO: "" started at Thu, 26 Jan 2023 03:24:28 UTC on Ginkgo node 3 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
STEP: Creating namespace "capz-e2e-09zm69" for hosting the cluster @ 01/26/23 03:24:28.002
Jan 26 03:24:28.002: INFO: starting to create namespace for hosting the "capz-e2e-09zm69" test spec
... skipping 186 lines ...
Jan 26 03:39:52.747: INFO: Collecting events for Pod kube-system/kube-proxy-wwz8m
Jan 26 03:39:52.747: INFO: Creating log watcher for controller kube-system/csi-proxy-c954c, container csi-proxy
Jan 26 03:39:53.019: INFO: Fetching kube-system pod logs took 1.732809569s
Jan 26 03:39:53.019: INFO: Dumping workload cluster capz-e2e-09zm69/capz-e2e-09zm69-cc Azure activity log
Jan 26 03:39:53.019: INFO: Creating log watcher for controller tigera-operator/tigera-operator-65d6bf4d4f-bkcgt, container tigera-operator
Jan 26 03:39:53.020: INFO: Collecting events for Pod tigera-operator/tigera-operator-65d6bf4d4f-bkcgt
Jan 26 03:39:53.040: INFO: Error fetching activity logs for cluster capz-e2e-09zm69-cc in namespace capz-e2e-09zm69. Not able to find the AzureManagedControlPlane on the management cluster: azuremanagedcontrolplanes.infrastructure.cluster.x-k8s.io "capz-e2e-09zm69-cc" not found
Jan 26 03:39:53.040: INFO: Fetching activity logs took 21.260736ms
Jan 26 03:39:53.040: INFO: Dumping all the Cluster API resources in the "capz-e2e-09zm69" namespace
Jan 26 03:39:53.406: INFO: Deleting all clusters in the capz-e2e-09zm69 namespace
STEP: Deleting cluster capz-e2e-09zm69-cc @ 01/26/23 03:39:53.423
INFO: Waiting for the Cluster capz-e2e-09zm69/capz-e2e-09zm69-cc to be deleted
STEP: Waiting for cluster capz-e2e-09zm69-cc to be deleted @ 01/26/23 03:39:53.434
... skipping 10 lines ...
------------------------------
• [1497.082 seconds]
Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:832
Captured StdOut/StdErr Output >>
2023/01/26 03:24:28 failed trying to get namespace (capz-e2e-2upwju):namespaces "capz-e2e-2upwju" not found
cluster.cluster.x-k8s.io/capz-e2e-2upwju-dual-stack created
azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-2upwju-dual-stack created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-2upwju-dual-stack-control-plane created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-2upwju-dual-stack-control-plane created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created
machinedeployment.cluster.x-k8s.io/capz-e2e-2upwju-dual-stack-md-0 created
... skipping 325 lines ...
<< Timeline
------------------------------
[SynchronizedAfterSuite] PASSED [0.000 seconds]
[SynchronizedAfterSuite]
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/e2e_suite_test.go:116
------------------------------
• [FAILED] [2379.721 seconds]
Workload cluster creation Creating a cluster that uses the external cloud provider and external azurediskcsi driver [OPTIONAL] [It] with a 1 control plane nodes and 2 worker nodes
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:639
Captured StdOut/StdErr Output >>
2023/01/26 03:24:28 failed trying to get namespace (capz-e2e-5a6ef3):namespaces "capz-e2e-5a6ef3" not found
cluster.cluster.x-k8s.io/capz-e2e-5a6ef3-oot created
azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-5a6ef3-oot created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-5a6ef3-oot-control-plane created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-5a6ef3-oot-control-plane created
machinedeployment.cluster.x-k8s.io/capz-e2e-5a6ef3-oot-md-0 created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-5a6ef3-oot-md-0 created
... skipping 90 lines ...
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane capz-e2e-5a6ef3/capz-e2e-5a6ef3-oot-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready @ 01/26/23 03:31:08.524
STEP: Checking all the control plane machines are in the expected failure domains @ 01/26/23 03:31:08.53
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist @ 01/26/23 03:31:08.551
[FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinedeployment_helpers.go:131 @ 01/26/23 03:56:08.552
Jan 26 03:56:08.552: INFO: FAILED!
Jan 26 03:56:08.552: INFO: Cleaning up after "Workload cluster creation Creating a cluster that uses the external cloud provider and external azurediskcsi driver [OPTIONAL] with a 1 control plane nodes and 2 worker nodes" spec
STEP: Dumping logs from the "capz-e2e-5a6ef3-oot" workload cluster @ 01/26/23 03:56:08.552
Jan 26 03:56:08.552: INFO: Dumping workload cluster capz-e2e-5a6ef3/capz-e2e-5a6ef3-oot logs
Jan 26 03:56:08.597: INFO: Collecting logs for Linux node capz-e2e-5a6ef3-oot-control-plane-5rtfl in cluster capz-e2e-5a6ef3-oot in namespace capz-e2e-5a6ef3
Jan 26 03:56:25.545: INFO: Collecting boot logs for AzureMachine capz-e2e-5a6ef3-oot-control-plane-5rtfl
... skipping 63 lines ...
INFO: Deleting namespace capz-e2e-5a6ef3
Jan 26 04:02:44.448: INFO: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs @ 01/26/23 04:02:45.06
INFO: "with a 1 control plane nodes and 2 worker nodes" started at Thu, 26 Jan 2023 04:04:07 UTC on Ginkgo node 4 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
<< Timeline
[FAILED] Timed out after 1500.000s.
Timed out waiting for 2 nodes to be created for MachineDeployment capz-e2e-5a6ef3/capz-e2e-5a6ef3-oot-md-0
Expected
    <int>: 0
to equal
    <int>: 2
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinedeployment_helpers.go:131 @ 01/26/23 03:56:08.552
... skipping 14 lines ...
------------------------------
• [2817.706 seconds]
Workload cluster creation Creating a private cluster [OPTIONAL] Creates a public management cluster in a custom vnet
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:156
Captured StdOut/StdErr Output >>
2023/01/26 03:24:27 failed trying to get namespace (capz-e2e-xwo60y):namespaces "capz-e2e-xwo60y" not found
cluster.cluster.x-k8s.io/capz-e2e-xwo60y-public-custom-vnet created
azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-xwo60y-public-custom-vnet created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-xwo60y-public-custom-vnet-control-plane created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-xwo60y-public-custom-vnet-control-plane created
machinedeployment.cluster.x-k8s.io/capz-e2e-xwo60y-public-custom-vnet-md-0 created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-xwo60y-public-custom-vnet-md-0 created
... skipping 247 lines ...
Jan 26 04:05:10.289: INFO: Collecting events for Pod kube-system/kube-scheduler-capz-e2e-xwo60y-public-custom-vnet-control-plane-4nsbp
Jan 26 04:05:10.290: INFO: Collecting events for Pod kube-system/kube-proxy-sppdw
Jan 26 04:05:10.408: INFO: Fetching kube-system pod logs took 1.667169409s
Jan 26 04:05:10.408: INFO: Dumping workload cluster capz-e2e-xwo60y/capz-e2e-xwo60y-public-custom-vnet Azure activity log
Jan 26 04:05:10.408: INFO: Creating log watcher for controller tigera-operator/tigera-operator-65d6bf4d4f-j8mgc, container tigera-operator
Jan 26 04:05:10.409: INFO: Collecting events for Pod tigera-operator/tigera-operator-65d6bf4d4f-j8mgc
Jan 26 04:05:18.466: INFO: Got error while iterating over activity logs for resource group capz-e2e-xwo60y-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure responding to next results request: StatusCode=404 -- Original Error: autorest/azure: error response cannot be parsed: {"<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Strict//EN\" \"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd\">\r\n<html xmlns=\"http://www.w3.org/1999/xhtml\">\r\n<head>\r\n<meta http-equiv=\"Content-Type\" content=\"text/html; charset=iso-8859-1\"/>\r\n<title>404 - File or directory not found.</title>\r\n<style type=\"text/css\">\r\n<!--\r\nbody{margin:0;font-size:.7em;font-family:Verdana, Arial, Helvetica, sans-serif;background:#EEEEEE;}\r\nfieldset{padding:0 15px 10px 15px;} \r\nh1{font-size:2.4em;margin:0;color:#FFF;}\r\nh2{font-si" '\x00' '\x00'} error: invalid character '<' looking for beginning of value
Jan 26 04:05:18.466: INFO: Fetching activity logs took 8.0577707s
Jan 26 04:05:18.466: INFO: Dumping all the Cluster API resources in the "capz-e2e-xwo60y" namespace
Jan 26 04:05:18.764: INFO: Deleting all clusters in the capz-e2e-xwo60y namespace
STEP: Deleting cluster capz-e2e-xwo60y-public-custom-vnet @ 01/26/23 04:05:18.785
INFO: Waiting for the Cluster capz-e2e-xwo60y/capz-e2e-xwo60y-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-xwo60y-public-custom-vnet to be deleted @ 01/26/23 04:05:18.797
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-66576dfdb7-jdzfk, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-768b7b88f9-hqmhn, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-54c5b7f555-d6p9h, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-8ddf45bf4-7pfqq, container manager: http2: client connection lost
Jan 26 04:07:58.882: INFO: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-xwo60y
Jan 26 04:07:58.900: INFO: Running additional cleanup for the "create-workload-cluster" test spec
Jan 26 04:07:58.900: INFO: deleting an existing virtual network "custom-vnet"
Jan 26 04:08:10.107: INFO: deleting an existing route table "node-routetable"
Jan 26 04:08:13.385: INFO: deleting an existing network security group "node-nsg"
... skipping 16 lines ...
[ReportAfterSuite] PASSED [0.017 seconds]
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
------------------------------
Summarizing 1 Failure:
[FAIL] Workload cluster creation Creating a cluster that uses the external cloud provider and external azurediskcsi driver [OPTIONAL] [It] with a 1 control plane nodes and 2 worker nodes
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinedeployment_helpers.go:131
Ran 7 of 24 Specs in 2999.136 seconds
FAIL! -- 6 Passed | 1 Failed | 0 Pending | 17 Skipped
You're using deprecated Ginkgo functionality:
=============================================
CurrentGinkgoTestDescription() is deprecated in Ginkgo V2. Use CurrentSpecReport() instead.
Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-currentginkgotestdescription
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:427
... skipping 43 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:285
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:427
To silence deprecations that can be silenced set the following environment variable:
ACK_GINKGO_DEPRECATIONS=2.7.0
--- FAIL: TestE2E (2559.78s)
FAIL
You're using deprecated Ginkgo functionality:
=============================================
CurrentGinkgoTestDescription() is deprecated in Ginkgo V2. Use CurrentSpecReport() instead.
Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-currentginkgotestdescription
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:427
... skipping 34 lines ...
PASS
Ginkgo ran 1 suite in 53m51.388867468s
Test Suite Failed
make[1]: *** [Makefile:654: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:663: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...