PR | jackfrancis: E2E: use a common cluster-identity-secret
Result | FAILURE
Tests | 1 failed / 26 succeeded
Started |
Elapsed | 1h18m
Revision | bc0386b8b11ed2532565fd131a83e7b0e167123a
Refs | 3075
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sWorkload\scluster\screation\sCreating\sa\sGPU\-enabled\scluster\s\[OPTIONAL\]\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
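The `--ginkgo.focus` value above is a regular expression (with spaces escaped as `\s` for the shell). As a small sketch, the pattern selects exactly the failing spec's full name as it appears in this report:

```python
import re

# Focus pattern from the job's test_args above, as a raw string.
pattern = r"capz\-e2e\s\[It\]\sWorkload\scluster\screation\sCreating\sa\sGPU\-enabled\scluster\s\[OPTIONAL\]\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$"

# Full name of the failing spec, as reported in this run's summary.
spec = "capz-e2e [It] Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node"

# The anchored pattern matches this spec name, so Ginkgo runs only this spec.
print(bool(re.search(pattern, spec)))  # True
```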
[FAILED] Timed out after 1800.001s.
Timed out waiting for 1 nodes to be created for MachineDeployment capz-e2e-1fstlz/capz-e2e-1fstlz-gpu-md-0
Expected
    <int>: 0
to equal
    <int>: 1
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinedeployment_helpers.go:131 @ 01/24/23 00:11:51.411
(from junit.e2e_suite.1.xml)
2023/01/23 23:34:29 failed trying to get namespace (capz-e2e-1fstlz):namespaces "capz-e2e-1fstlz" not found
cluster.cluster.x-k8s.io/capz-e2e-1fstlz-gpu serverside-applied
azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-1fstlz-gpu serverside-applied
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-1fstlz-gpu-control-plane serverside-applied
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-1fstlz-gpu-control-plane serverside-applied
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp serverside-applied
machinedeployment.cluster.x-k8s.io/capz-e2e-1fstlz-gpu-md-0 serverside-applied
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-1fstlz-gpu-md-0 serverside-applied
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capz-e2e-1fstlz-gpu-md-0 serverside-applied
clusterresourceset.addons.cluster.x-k8s.io/crs-gpu-operator serverside-applied
configmap/nvidia-clusterpolicy-crd serverside-applied
configmap/nvidia-gpu-operator-components serverside-applied
felixconfiguration.crd.projectcalico.org/default configured
Failed to get logs for Machine capz-e2e-1fstlz-gpu-md-0-66578c8d-9d4cb, Cluster capz-e2e-1fstlz/capz-e2e-1fstlz-gpu: dialing from control plane to target node at capz-e2e-1fstlz-gpu-md-0-kmt9d: ssh: rejected: connect failed (Temporary failure in name resolution)
> Enter [BeforeEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:56 @ 01/23/23 23:34:28.946
INFO: "" started at Mon, 23 Jan 2023 23:34:28 UTC on Ginkgo node 10 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
STEP: Creating namespace "capz-e2e-1fstlz" for hosting the cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/23/23 23:34:28.946
Jan 23 23:34:28.946: INFO: starting to create namespace for hosting the "capz-e2e-1fstlz" test spec
INFO: Creating namespace capz-e2e-1fstlz
INFO: Creating event watcher for namespace "capz-e2e-1fstlz"
Jan 23 23:34:29.298: INFO: Creating cluster identity secret "cluster-identity-secret"
< Exit [BeforeEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:56 @ 01/23/23 23:34:29.54 (594ms)
> Enter [It] with a single control plane node and 1 node - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:506 @ 01/23/23 23:34:29.54
INFO: Cluster name is capz-e2e-1fstlz-gpu
INFO: Creating the workload cluster with name "capz-e2e-1fstlz-gpu" using the "nvidia-gpu" template (Kubernetes v1.24.10, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster capz-e2e-1fstlz-gpu --infrastructure (default) --kubernetes-version v1.24.10 --control-plane-machine-count 1 --worker-machine-count 1 --flavor nvidia-gpu
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/cluster_helpers.go:134 @ 01/23/23 23:34:44.157
INFO: Waiting for control plane to be initialized
STEP: Installing Calico CNI via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:51 @ 01/23/23 23:36:44.483
STEP: Configuring calico CNI helm chart for IPv4 configuration - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:112 @ 01/23/23 23:36:44.483
Jan 23 23:38:47.856: INFO: getting history for release projectcalico
Jan 23 23:38:47.920: INFO: Release projectcalico does not exist, installing it
Jan 23 23:38:49.086: INFO: creating 1 resource(s)
Jan 23 23:38:49.224: INFO: creating 1 resource(s)
Jan 23 23:38:49.320: INFO: creating 1 resource(s)
Jan 23 23:38:49.405: INFO: creating 1 resource(s)
Jan 23 23:38:49.497: INFO: creating 1 resource(s)
Jan 23 23:38:49.632: INFO: creating 1 resource(s)
Jan 23 23:38:49.816: INFO: creating 1 resource(s)
Jan 23 23:38:49.939: INFO: creating 1 resource(s)
Jan 23 23:38:50.017: INFO: creating 1 resource(s)
Jan 23 23:38:50.142: INFO: creating 1 resource(s)
Jan 23 23:38:50.229: INFO: creating 1 resource(s)
Jan 23 23:38:50.316: INFO: creating 1 resource(s)
Jan 23 23:38:50.402: INFO: creating 1 resource(s)
Jan 23 23:38:50.493: INFO: creating 1 resource(s)
Jan 23 23:38:50.580: INFO: creating 1 resource(s)
Jan 23 23:38:50.722: INFO: creating 1 resource(s)
Jan 23 23:38:50.841: INFO: creating 1 resource(s)
Jan 23 23:38:50.937: INFO: creating 1 resource(s)
Jan 23 23:38:51.053: INFO: creating 1 resource(s)
Jan 23 23:38:51.330: INFO: creating 1 resource(s)
Jan 23 23:38:51.843: INFO: creating 1 resource(s)
Jan 23 23:38:51.912: INFO: Clearing discovery cache
Jan 23 23:38:51.912: INFO: beginning wait for 21 resources with timeout of 1m0s
Jan 23 23:38:56.043: INFO: creating 1 resource(s)
Jan 23 23:38:56.993: INFO: creating 6 resource(s)
Jan 23 23:38:58.008: INFO: Install complete
STEP: Waiting for Ready tigera-operator deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:60 @ 01/23/23 23:38:58.567
STEP: waiting for deployment tigera-operator/tigera-operator to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/23/23 23:38:58.825
Jan 23 23:38:58.825: INFO: starting to wait for deployment to become available
Jan 23 23:39:08.945: INFO: Deployment tigera-operator/tigera-operator is now available, took 10.119426183s
STEP: Waiting for Ready calico-system deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:74 @ 01/23/23 23:39:10.168
STEP: waiting for deployment calico-system/calico-kube-controllers to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/23/23 23:39:10.465
Jan 23 23:39:10.465: INFO: starting to wait for deployment to become available
Jan 23 23:40:43.410: INFO: Deployment calico-system/calico-kube-controllers is now available, took 1m32.945354894s
STEP: waiting for deployment calico-system/calico-typha to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/23/23 23:40:43.699
Jan 23 23:40:43.699: INFO: starting to wait for deployment to become available
Jan 23 23:40:43.760: INFO: Deployment calico-system/calico-typha is now available, took 61.459053ms
STEP: Waiting for Ready calico-apiserver deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:79 @ 01/23/23 23:40:43.76
STEP: waiting for deployment calico-apiserver/calico-apiserver to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/23/23 23:40:54.116
Jan 23 23:40:54.116: INFO: starting to wait for deployment to become available
Jan 23 23:41:04.232: INFO: Deployment calico-apiserver/calico-apiserver is now available, took 10.115942955s
INFO: Waiting for the first control plane machine managed by capz-e2e-1fstlz/capz-e2e-1fstlz-gpu-control-plane to be provisioned
STEP: Waiting for one control plane node to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:133 @ 01/23/23 23:41:04.331
STEP: Installing azure-disk CSI driver components via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:65 @ 01/23/23 23:41:04.345
Jan 23 23:41:04.477: INFO: getting history for release azuredisk-csi-driver-oot
Jan 23 23:41:04.535: INFO: Release azuredisk-csi-driver-oot does not exist, installing it
Jan 23 23:41:09.347: INFO: creating 1 resource(s)
Jan 23 23:41:09.622: INFO: creating 18 resource(s)
Jan 23 23:41:10.169: INFO: Install complete
STEP: Waiting for Ready csi-azuredisk-controller deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:75 @ 01/23/23 23:41:10.222
STEP: waiting for deployment kube-system/csi-azuredisk-controller to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/23/23 23:41:10.472
Jan 23 23:41:10.472: INFO: starting to wait for deployment to become available
Jan 23 23:41:51.190: INFO: Deployment kube-system/csi-azuredisk-controller is now available, took 40.717779799s
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane capz-e2e-1fstlz/capz-e2e-1fstlz-gpu-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:165 @ 01/23/23 23:41:51.263
STEP: Checking all the control plane machines are in the expected failure domains - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:196 @ 01/23/23 23:41:51.299
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinedeployment_helpers.go:102 @ 01/23/23 23:41:51.409
[FAILED] Timed out after 1800.001s.
Timed out waiting for 1 nodes to be created for MachineDeployment capz-e2e-1fstlz/capz-e2e-1fstlz-gpu-md-0
Expected
    <int>: 0
to equal
    <int>: 1
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinedeployment_helpers.go:131 @ 01/24/23 00:11:51.411
< Exit [It] with a single control plane node and 1 node - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:506 @ 01/24/23 00:11:51.411 (37m21.87s)
> Enter [AfterEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:117 @ 01/24/23 00:11:51.411
Jan 24 00:11:51.411: INFO: FAILED!
Jan 24 00:11:51.411: INFO: Cleaning up after "Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node" spec
STEP: Dumping logs from the "capz-e2e-1fstlz-gpu" workload cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/24/23 00:11:51.411
Jan 24 00:11:51.411: INFO: Dumping workload cluster capz-e2e-1fstlz/capz-e2e-1fstlz-gpu logs
Jan 24 00:11:51.476: INFO: Collecting logs for Linux node capz-e2e-1fstlz-gpu-control-plane-5mqhh in cluster capz-e2e-1fstlz-gpu in namespace capz-e2e-1fstlz
Jan 24 00:12:09.914: INFO: Collecting boot logs for AzureMachine capz-e2e-1fstlz-gpu-control-plane-5mqhh
Jan 24 00:12:11.263: INFO: Collecting logs for Linux node capz-e2e-1fstlz-gpu-md-0-kmt9d in cluster capz-e2e-1fstlz-gpu in namespace capz-e2e-1fstlz
Jan 24 00:13:14.646: INFO: Collecting boot logs for AzureMachine capz-e2e-1fstlz-gpu-md-0-kmt9d
Jan 24 00:13:15.269: INFO: Dumping workload cluster capz-e2e-1fstlz/capz-e2e-1fstlz-gpu kube-system pod logs
Jan 24 00:13:15.957: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-847ffbc5fb-d6wrl, container calico-apiserver
Jan 24 00:13:15.958: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-847ffbc5fb-jqgzq, container calico-apiserver
Jan 24 00:13:15.959: INFO: Collecting events for Pod calico-apiserver/calico-apiserver-847ffbc5fb-jqgzq
Jan 24 00:13:15.959: INFO: Collecting events for Pod calico-apiserver/calico-apiserver-847ffbc5fb-d6wrl
Jan 24 00:13:16.031: INFO: Collecting events for Pod calico-system/calico-node-89kvl
Jan 24 00:13:16.031: INFO: Collecting events for Pod calico-system/calico-kube-controllers-594d54f99-rh4sg
Jan 24 00:13:16.031: INFO: Collecting events for Pod calico-system/calico-typha-dcb68d5bd-ws64r
Jan 24 00:13:16.032: INFO: Creating log watcher for controller calico-system/calico-kube-controllers-594d54f99-rh4sg, container calico-kube-controllers
Jan 24 00:13:16.032: INFO: Creating log watcher for controller calico-system/calico-typha-dcb68d5bd-ws64r, container calico-typha
Jan 24 00:13:16.032: INFO: Creating log watcher for controller calico-system/csi-node-driver-qbhpq, container calico-csi
Jan 24 00:13:16.032: INFO: Creating log watcher for controller calico-system/csi-node-driver-qbhpq, container csi-node-driver-registrar
Jan 24 00:13:16.032: INFO: Creating log watcher for controller calico-system/calico-node-89kvl, container calico-node
Jan 24 00:13:16.032: INFO: Collecting events for Pod calico-system/csi-node-driver-qbhpq
Jan 24 00:13:16.103: INFO: Creating log watcher for controller gpu-operator-resources/gpu-operator-79c7dfc46c-bgqd7, container gpu-operator
Jan 24 00:13:16.105: INFO: Collecting events for Pod gpu-operator-resources/gpu-operator-79c7dfc46c-bgqd7
Jan 24 00:13:16.105: INFO: Creating log watcher for controller gpu-operator-resources/gpu-operator-node-feature-discovery-master-58fd98d466-qjflb, container master
Jan 24 00:13:16.106: INFO: Collecting events for Pod gpu-operator-resources/gpu-operator-node-feature-discovery-master-58fd98d466-qjflb
Jan 24 00:13:16.107: INFO: Creating log watcher for controller gpu-operator-resources/gpu-operator-node-feature-discovery-worker-w2wf7, container worker
Jan 24 00:13:16.108: INFO: Collecting events for Pod gpu-operator-resources/gpu-operator-node-feature-discovery-worker-w2wf7
Jan 24 00:13:16.208: INFO: Collecting events for Pod kube-system/coredns-57575c5f89-fhcsr
Jan 24 00:13:16.208: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-bqfdj, container liveness-probe
Jan 24 00:13:16.208: INFO: Collecting events for Pod kube-system/csi-azuredisk-node-bbp2p
Jan 24 00:13:16.208: INFO: Creating log watcher for controller kube-system/coredns-57575c5f89-fhcsr, container coredns
Jan 24 00:13:16.208: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-bqfdj, container azuredisk
Jan 24 00:13:16.209: INFO: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-1fstlz-gpu-control-plane-5mqhh, container kube-controller-manager
Jan 24 00:13:16.209: INFO: Collecting events for Pod kube-system/kube-controller-manager-capz-e2e-1fstlz-gpu-control-plane-5mqhh
Jan 24 00:13:16.209: INFO: Creating log watcher for controller kube-system/kube-proxy-5bbxr, container kube-proxy
Jan 24 00:13:16.210: INFO: Creating log watcher for controller kube-system/coredns-57575c5f89-qm6vg, container coredns
Jan 24 00:13:16.210: INFO: Collecting events for Pod kube-system/csi-azuredisk-controller-545d478dbf-bqfdj
Jan 24 00:13:16.210: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-bbp2p, container liveness-probe
Jan 24 00:13:16.210: INFO: Creating log watcher for controller kube-system/etcd-capz-e2e-1fstlz-gpu-control-plane-5mqhh, container etcd
Jan 24 00:13:16.210: INFO: Collecting events for Pod kube-system/kube-proxy-5bbxr
Jan 24 00:13:16.211: INFO: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-1fstlz-gpu-control-plane-5mqhh, container kube-scheduler
Jan 24 00:13:16.211: INFO: Collecting events for Pod kube-system/etcd-capz-e2e-1fstlz-gpu-control-plane-5mqhh
Jan 24 00:13:16.211: INFO: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-1fstlz-gpu-control-plane-5mqhh, container kube-apiserver
Jan 24 00:13:16.211: INFO: Collecting events for Pod kube-system/coredns-57575c5f89-qm6vg
Jan 24 00:13:16.211: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-bqfdj, container csi-provisioner
Jan 24 00:13:16.211: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-bbp2p, container node-driver-registrar
Jan 24 00:13:16.212: INFO: Collecting events for Pod kube-system/kube-scheduler-capz-e2e-1fstlz-gpu-control-plane-5mqhh
Jan 24 00:13:16.212: INFO: Collecting events for Pod kube-system/kube-apiserver-capz-e2e-1fstlz-gpu-control-plane-5mqhh
Jan 24 00:13:16.212: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-bqfdj, container csi-snapshotter
Jan 24 00:13:16.212: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-bbp2p, container azuredisk
Jan 24 00:13:16.213: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-bqfdj, container csi-resizer
Jan 24 00:13:16.213: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-bqfdj, container csi-attacher
Jan 24 00:13:16.268: INFO: Fetching kube-system pod logs took 998.275781ms
Jan 24 00:13:16.268: INFO: Dumping workload cluster capz-e2e-1fstlz/capz-e2e-1fstlz-gpu Azure activity log
Jan 24 00:13:16.269: INFO: Creating log watcher for controller tigera-operator/tigera-operator-65d6bf4d4f-ljff2, container tigera-operator
Jan 24 00:13:16.269: INFO: Collecting events for Pod tigera-operator/tigera-operator-65d6bf4d4f-ljff2
Jan 24 00:13:20.017: INFO: Fetching activity logs took 3.748886057s
Jan 24 00:13:20.017: INFO: Dumping all the Cluster API resources in the "capz-e2e-1fstlz" namespace
Jan 24 00:13:20.373: INFO: Deleting all clusters in the capz-e2e-1fstlz namespace
STEP: Deleting cluster capz-e2e-1fstlz-gpu - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 01/24/23 00:13:20.392
INFO: Waiting for the Cluster capz-e2e-1fstlz/capz-e2e-1fstlz-gpu to be deleted
STEP: Waiting for cluster capz-e2e-1fstlz-gpu to be deleted - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 01/24/23 00:13:20.409
Jan 24 00:30:01.429: INFO: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-1fstlz
Jan 24 00:30:01.475: INFO: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:216 @ 01/24/23 00:30:02.095
INFO: "with a single control plane node and 1 node" started at Tue, 24 Jan 2023 00:33:05 UTC on Ginkgo node 10 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
< Exit [AfterEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:117 @ 01/24/23 00:33:05.145 (21m13.734s)
capz-e2e [It] Workload cluster creation Creating a Flatcar cluster [OPTIONAL] With Flatcar control-plane and worker nodes
capz-e2e [It] Workload cluster creation Creating a cluster that uses the external cloud provider and external azurediskcsi driver [OPTIONAL] with a 1 control plane nodes and 2 worker nodes
capz-e2e [It] Workload cluster creation Creating a cluster that uses the external cloud provider and machinepools [OPTIONAL] with 1 control plane node and 1 machinepool
capz-e2e [It] Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e [It] Workload cluster creation Creating a private cluster [OPTIONAL] Creates a public management cluster in a custom vnet
capz-e2e [It] Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [It] Conformance Tests conformance-tests
capz-e2e [It] Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capz-e2e [It] Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e [It] Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e [It] Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e [It] Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e [It] Workload cluster creation Creating a VMSS cluster [REQUIRED] with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a highly available cluster [REQUIRED] With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a ipv6 control-plane cluster [REQUIRED] With ipv6 worker node
capz-e2e [It] Workload cluster creation Creating an AKS cluster [Managed Kubernetes] with a single control plane node and 1 node
capz-e2e [It] [K8s-Upgrade] Running the CSI migration tests [CSI Migration] Running CSI migration test CSI=external CCM=external AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with out-of-tree cloud provider
capz-e2e [It] [K8s-Upgrade] Running the CSI migration tests [CSI Migration] Running CSI migration test CSI=external CCM=internal AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with intree cloud provider
capz-e2e [It] [K8s-Upgrade] Running the CSI migration tests [CSI Migration] Running CSI migration test CSI=internal CCM=internal AzureDiskCSIMigration=false: upgrade to v1.23 should create volumes dynamically with intree cloud provider
... skipping 632 lines ... [38;5;243m------------------------------[0m [38;5;10m• [1116.808 seconds][0m [0mWorkload cluster creation [38;5;243mCreating a cluster that uses the external cloud provider and machinepools [OPTIONAL] [38;5;10m[1mwith 1 control plane node and 1 machinepool[0m [38;5;243m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:573[0m [38;5;243mCaptured StdOut/StdErr Output >>[0m 2023/01/23 23:34:29 failed trying to get namespace (capz-e2e-4s66pq):namespaces "capz-e2e-4s66pq" not found cluster.cluster.x-k8s.io/capz-e2e-4s66pq-flex created azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-4s66pq-flex created kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-4s66pq-flex-control-plane created azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-4s66pq-flex-control-plane created machinepool.cluster.x-k8s.io/capz-e2e-4s66pq-flex-mp-0 created azuremachinepool.infrastructure.cluster.x-k8s.io/capz-e2e-4s66pq-flex-mp-0 created ... skipping 2 lines ... felixconfiguration.crd.projectcalico.org/default configured W0123 23:46:15.293535 37439 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning 2023/01/23 23:46:46 [DEBUG] GET http://20.71.67.115 W0123 23:47:21.094112 37439 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning Failed to get logs for MachinePool capz-e2e-4s66pq-flex-mp-0, Cluster capz-e2e-4s66pq/capz-e2e-4s66pq-flex: Unable to collect VMSS Boot Diagnostic logs: failed to parse resource id: parsing failed for /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-e2e-4s66pq-flex/providers/Microsoft.Compute. 
Invalid resource Id format [38;5;243m<< Captured StdOut/StdErr Output[0m [38;5;243mTimeline >>[0m INFO: "" started at Mon, 23 Jan 2023 23:34:28 UTC on Ginkgo node 2 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml [1mSTEP:[0m Creating namespace "capz-e2e-4s66pq" for hosting the cluster [38;5;243m@ 01/23/23 23:34:28.952[0m Jan 23 23:34:28.952: INFO: starting to create namespace for hosting the "capz-e2e-4s66pq" test spec ... skipping 229 lines ... [38;5;243m------------------------------[0m [38;5;10m• [1156.580 seconds][0m [0mWorkload cluster creation [38;5;243mCreating a Flatcar cluster [OPTIONAL] [38;5;10m[1mWith Flatcar control-plane and worker nodes[0m [38;5;243m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:321[0m [38;5;243mCaptured StdOut/StdErr Output >>[0m 2023/01/23 23:34:28 failed trying to get namespace (capz-e2e-fiiwth):namespaces "capz-e2e-fiiwth" not found cluster.cluster.x-k8s.io/capz-e2e-fiiwth-flatcar created azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-fiiwth-flatcar created kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-fiiwth-flatcar-control-plane created azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-fiiwth-flatcar-control-plane created machinedeployment.cluster.x-k8s.io/capz-e2e-fiiwth-flatcar-md-0 created azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-fiiwth-flatcar-md-0 created kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capz-e2e-fiiwth-flatcar-md-0 created azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created felixconfiguration.crd.projectcalico.org/default configured Failed to get logs for Machine capz-e2e-fiiwth-flatcar-control-plane-b6gzr, Cluster capz-e2e-fiiwth/capz-e2e-fiiwth-flatcar: [dialing public load balancer at capz-e2e-fiiwth-flatcar-5d2cafc5.westeurope.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.44.12:53490->20.238.254.30:22: read: connection reset by peer, dialing 
public load balancer at capz-e2e-fiiwth-flatcar-5d2cafc5.westeurope.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.44.12:53506->20.238.254.30:22: read: connection reset by peer] Failed to get logs for Machine capz-e2e-fiiwth-flatcar-md-0-5975bb9776-2nx7q, Cluster capz-e2e-fiiwth/capz-e2e-fiiwth-flatcar: [dialing public load balancer at capz-e2e-fiiwth-flatcar-5d2cafc5.westeurope.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.44.12:48878->20.238.254.30:22: read: connection reset by peer, dialing public load balancer at capz-e2e-fiiwth-flatcar-5d2cafc5.westeurope.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.44.12:48890->20.238.254.30:22: read: connection reset by peer, dialing public load balancer at capz-e2e-fiiwth-flatcar-5d2cafc5.westeurope.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.44.12:48894->20.238.254.30:22: read: connection reset by peer, dialing public load balancer at capz-e2e-fiiwth-flatcar-5d2cafc5.westeurope.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.44.12:48888->20.238.254.30:22: read: connection reset by peer, dialing public load balancer at capz-e2e-fiiwth-flatcar-5d2cafc5.westeurope.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.44.12:48886->20.238.254.30:22: read: connection reset by peer, dialing public load balancer at capz-e2e-fiiwth-flatcar-5d2cafc5.westeurope.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.44.12:48880->20.238.254.30:22: read: connection reset by peer, dialing public load balancer at capz-e2e-fiiwth-flatcar-5d2cafc5.westeurope.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.44.12:48904->20.238.254.30:22: read: connection reset by peer, dialing public load balancer at capz-e2e-fiiwth-flatcar-5d2cafc5.westeurope.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.44.12:48902->20.238.254.30:22: read: connection reset by peer, dialing public load balancer at capz-e2e-fiiwth-flatcar-5d2cafc5.westeurope.cloudapp.azure.com: ssh: 
handshake failed: read tcp 10.60.44.12:48896->20.238.254.30:22: read: connection reset by peer] [38;5;243m<< Captured StdOut/StdErr Output[0m [38;5;243mTimeline >>[0m INFO: "" started at Mon, 23 Jan 2023 23:34:28 UTC on Ginkgo node 4 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml [1mSTEP:[0m Creating namespace "capz-e2e-fiiwth" for hosting the cluster [38;5;243m@ 01/23/23 23:34:28.937[0m Jan 23 23:34:28.937: INFO: starting to create namespace for hosting the "capz-e2e-fiiwth" test spec ... skipping 157 lines ... [38;5;243m------------------------------[0m [38;5;10m• [1300.373 seconds][0m [0mWorkload cluster creation [38;5;243mCreating a cluster that uses the external cloud provider and external azurediskcsi driver [OPTIONAL] [38;5;10m[1mwith a 1 control plane nodes and 2 worker nodes[0m [38;5;243m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:637[0m [38;5;243mCaptured StdOut/StdErr Output >>[0m 2023/01/23 23:34:29 failed trying to get namespace (capz-e2e-5rucxy):namespaces "capz-e2e-5rucxy" not found cluster.cluster.x-k8s.io/capz-e2e-5rucxy-oot created azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-5rucxy-oot created kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-5rucxy-oot-control-plane created azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-5rucxy-oot-control-plane created machinedeployment.cluster.x-k8s.io/capz-e2e-5rucxy-oot-md-0 created azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-5rucxy-oot-md-0 created kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capz-e2e-5rucxy-oot-md-0 created azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created felixconfiguration.crd.projectcalico.org/default configured W0123 23:46:04.667030 37443 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning 2023/01/23 
23:47:05 [DEBUG] GET http://20.71.69.23 2023/01/23 23:47:35 [ERR] GET http://20.71.69.23 request failed: Get "http://20.71.69.23": dial tcp 20.71.69.23:80: i/o timeout 2023/01/23 23:47:35 [DEBUG] GET http://20.71.69.23: retrying in 1s (4 left) 2023/01/23 23:48:06 [ERR] GET http://20.71.69.23 request failed: Get "http://20.71.69.23": dial tcp 20.71.69.23:80: i/o timeout 2023/01/23 23:48:06 [DEBUG] GET http://20.71.69.23: retrying in 2s (3 left) W0123 23:49:05.545598 37443 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning [38;5;243m<< Captured StdOut/StdErr Output[0m [38;5;243mTimeline >>[0m INFO: "" started at Mon, 23 Jan 2023 23:34:28 UTC on Ginkgo node 5 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml ... skipping 272 lines ... [38;5;243m------------------------------[0m [38;5;10m• [1439.604 seconds][0m [0mWorkload cluster creation [38;5;243mCreating clusters using clusterclass [OPTIONAL] [38;5;10m[1mwith a single control plane node, one linux worker node, and one windows worker node[0m [38;5;243m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:896[0m [38;5;243mCaptured StdOut/StdErr Output >>[0m 2023/01/23 23:34:29 failed trying to get namespace (capz-e2e-97qzmt):namespaces "capz-e2e-97qzmt" not found clusterclass.cluster.x-k8s.io/ci-default created kubeadmcontrolplanetemplate.controlplane.cluster.x-k8s.io/ci-default-kubeadm-control-plane created azureclustertemplate.infrastructure.cluster.x-k8s.io/ci-default-azure-cluster created azuremachinetemplate.infrastructure.cluster.x-k8s.io/ci-default-control-plane created kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/ci-default-worker created azuremachinetemplate.infrastructure.cluster.x-k8s.io/ci-default-worker created ... skipping 5 lines ... 
clusterresourceset.addons.cluster.x-k8s.io/csi-proxy created
configmap/cni-capz-e2e-97qzmt-cc-calico-windows created
configmap/csi-proxy-addon created
felixconfiguration.crd.projectcalico.org/default configured
Failed to get logs for Machine capz-e2e-97qzmt-cc-md-0-57n4k-5d8f57fc84-w6lx8, Cluster capz-e2e-97qzmt/capz-e2e-97qzmt-cc: dialing public load balancer at capz-e2e-97qzmt-cc-78236fb5.westeurope.cloudapp.azure.com: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Failed to get logs for Machine capz-e2e-97qzmt-cc-md-win-pqsh6-7d4f566d69-vs4mt, Cluster capz-e2e-97qzmt/capz-e2e-97qzmt-cc: dialing public load balancer at capz-e2e-97qzmt-cc-78236fb5.westeurope.cloudapp.azure.com: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Failed to get logs for Machine capz-e2e-97qzmt-cc-pj8m4-tdkvr, Cluster capz-e2e-97qzmt/capz-e2e-97qzmt-cc: dialing public load balancer at capz-e2e-97qzmt-cc-78236fb5.westeurope.cloudapp.azure.com: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
<< Captured StdOut/StdErr Output
Timeline >>
INFO: "" started at Mon, 23 Jan 2023 23:34:28 UTC on Ginkgo node 6 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
STEP: Creating namespace "capz-e2e-97qzmt" for hosting the cluster @ 01/23/23 23:34:28.967
Jan 23 23:34:28.967: INFO: starting to create namespace for hosting the "capz-e2e-97qzmt" test spec
... skipping 186 lines ...
Jan 23 23:52:01.970: INFO: Collecting events for Pod kube-system/kube-scheduler-capz-e2e-97qzmt-cc-control-plane-7l4hk-fsrcf
Jan 23 23:52:01.971: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-p9j4s, container csi-resizer
Jan 23 23:52:02.236: INFO: Fetching kube-system pod logs took 1.756793971s
Jan 23 23:52:02.236: INFO: Dumping workload cluster capz-e2e-97qzmt/capz-e2e-97qzmt-cc Azure activity log
Jan 23 23:52:02.236: INFO: Creating log watcher for controller tigera-operator/tigera-operator-65d6bf4d4f-nw6sm, container tigera-operator
Jan 23 23:52:02.236: INFO: Collecting events for Pod tigera-operator/tigera-operator-65d6bf4d4f-nw6sm
Jan 23 23:52:02.265: INFO: Error fetching activity logs for cluster capz-e2e-97qzmt-cc in namespace capz-e2e-97qzmt. Not able to find the AzureManagedControlPlane on the management cluster: azuremanagedcontrolplanes.infrastructure.cluster.x-k8s.io "capz-e2e-97qzmt-cc" not found
Jan 23 23:52:02.265: INFO: Fetching activity logs took 29.258405ms
Jan 23 23:52:02.265: INFO: Dumping all the Cluster API resources in the "capz-e2e-97qzmt" namespace
Jan 23 23:52:02.738: INFO: Deleting all clusters in the capz-e2e-97qzmt namespace
STEP: Deleting cluster capz-e2e-97qzmt-cc @ 01/23/23 23:52:02.768
INFO: Waiting for the Cluster capz-e2e-97qzmt/capz-e2e-97qzmt-cc to be deleted
STEP: Waiting for cluster capz-e2e-97qzmt-cc to be deleted @ 01/23/23 23:52:02.79
... skipping 10 lines ...
------------------------------
• [1705.674 seconds]
Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:820
Captured StdOut/StdErr Output >>
2023/01/23 23:34:29 failed trying to get namespace (capz-e2e-3s91gu):namespaces "capz-e2e-3s91gu" not found
cluster.cluster.x-k8s.io/capz-e2e-3s91gu-dual-stack created
azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-3s91gu-dual-stack created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-3s91gu-dual-stack-control-plane created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-3s91gu-dual-stack-control-plane created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created
machinedeployment.cluster.x-k8s.io/capz-e2e-3s91gu-dual-stack-md-0 created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-3s91gu-dual-stack-md-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capz-e2e-3s91gu-dual-stack-md-0 created
felixconfiguration.crd.projectcalico.org/default configured
W0123 23:48:45.286238 37440 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning
2023/01/23 23:49:26 [DEBUG] GET http://20.71.69.51
2023/01/23 23:49:56 [ERR] GET http://20.71.69.51 request failed: Get "http://20.71.69.51": dial tcp 20.71.69.51:80: i/o timeout
2023/01/23 23:49:56 [DEBUG] GET http://20.71.69.51: retrying in 1s (4 left)
W0123 23:50:41.232284 37440 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning
W0123 23:51:45.428318 37440 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning
W0123 23:52:54.918755 37440 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning
<< Captured StdOut/StdErr Output
... skipping 320 lines ...
------------------------------
• [3130.940 seconds]
Workload cluster creation Creating a private cluster [OPTIONAL] Creates a public management cluster in a custom vnet
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:156
Captured StdOut/StdErr Output >>
2023/01/23 23:34:28 failed trying to get namespace (capz-e2e-sqza4k):namespaces "capz-e2e-sqza4k" not found
cluster.cluster.x-k8s.io/capz-e2e-sqza4k-public-custom-vnet created
azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-sqza4k-public-custom-vnet created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-sqza4k-public-custom-vnet-control-plane created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-sqza4k-public-custom-vnet-control-plane created
machinedeployment.cluster.x-k8s.io/capz-e2e-sqza4k-public-custom-vnet-md-0 created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-sqza4k-public-custom-vnet-md-0 created
... skipping 255 lines ...
Jan 24 00:20:11.242: INFO: Fetching activity logs took 10.064639068s
Jan 24 00:20:11.242: INFO: Dumping all the Cluster API resources in the "capz-e2e-sqza4k" namespace
Jan 24 00:20:11.925: INFO: Deleting all clusters in the capz-e2e-sqza4k namespace
STEP: Deleting cluster capz-e2e-sqza4k-public-custom-vnet @ 01/24/23 00:20:11.967
INFO: Waiting for the Cluster capz-e2e-sqza4k/capz-e2e-sqza4k-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-sqza4k-public-custom-vnet to be deleted @ 01/24/23 00:20:11.995
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-cc4f4b875-gc4cs, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-8ddf45bf4-xtfmt, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-66576dfdb7-wtt68, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-768b7b88f9-64n7t, container manager: http2: client connection lost
Jan 24 00:23:12.133: INFO: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-sqza4k
Jan 24 00:23:12.170: INFO: Running additional cleanup for the "create-workload-cluster" test spec
Jan 24 00:23:12.170: INFO: deleting an existing virtual network "custom-vnet"
Jan 24 00:23:23.768: INFO: deleting an existing route table "node-routetable"
Jan 24 00:23:27.097: INFO: deleting an existing network security group "node-nsg"
... skipping 2 lines ...
Jan 24 00:23:49.995: INFO: deleting the existing resource group "capz-e2e-sqza4k-public-custom-vnet"
Jan 24 00:25:08.662: INFO: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs @ 01/24/23 00:25:09.032
INFO: "Creates a public management cluster in a custom vnet" started at Tue, 24 Jan 2023 00:26:39 UTC on Ginkgo node 1 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
<< Timeline
------------------------------
• [FAILED] [3516.199 seconds]
Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] [It] with a single control plane node and 1 node
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:506
Captured StdOut/StdErr Output >>
2023/01/23 23:34:29 failed trying to get namespace (capz-e2e-1fstlz):namespaces "capz-e2e-1fstlz" not found
cluster.cluster.x-k8s.io/capz-e2e-1fstlz-gpu serverside-applied
azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-1fstlz-gpu serverside-applied
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-1fstlz-gpu-control-plane serverside-applied
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-1fstlz-gpu-control-plane serverside-applied
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp serverside-applied
machinedeployment.cluster.x-k8s.io/capz-e2e-1fstlz-gpu-md-0 serverside-applied
... skipping 2 lines ...
clusterresourceset.addons.cluster.x-k8s.io/crs-gpu-operator serverside-applied
configmap/nvidia-clusterpolicy-crd serverside-applied
configmap/nvidia-gpu-operator-components serverside-applied
felixconfiguration.crd.projectcalico.org/default configured
Failed to get logs for Machine capz-e2e-1fstlz-gpu-md-0-66578c8d-9d4cb, Cluster capz-e2e-1fstlz/capz-e2e-1fstlz-gpu: dialing from control plane to target node at capz-e2e-1fstlz-gpu-md-0-kmt9d: ssh: rejected: connect failed (Temporary failure in name resolution)
<< Captured StdOut/StdErr Output
Timeline >>
INFO: "" started at Mon, 23 Jan 2023 23:34:28 UTC on Ginkgo node 10 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
STEP: Creating namespace "capz-e2e-1fstlz" for hosting the cluster @ 01/23/23 23:34:28.946
Jan 23 23:34:28.946: INFO: starting to create namespace for hosting the "capz-e2e-1fstlz" test spec
... skipping 68 lines ...
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane capz-e2e-1fstlz/capz-e2e-1fstlz-gpu-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready @ 01/23/23 23:41:51.263
STEP: Checking all the control plane machines are in the expected failure domains @ 01/23/23 23:41:51.299
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist @ 01/23/23 23:41:51.409
[FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinedeployment_helpers.go:131 @ 01/24/23 00:11:51.411
Jan 24 00:11:51.411: INFO: FAILED!
Jan 24 00:11:51.411: INFO: Cleaning up after "Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node" spec
STEP: Dumping logs from the "capz-e2e-1fstlz-gpu" workload cluster @ 01/24/23 00:11:51.411
Jan 24 00:11:51.411: INFO: Dumping workload cluster capz-e2e-1fstlz/capz-e2e-1fstlz-gpu logs
Jan 24 00:11:51.476: INFO: Collecting logs for Linux node capz-e2e-1fstlz-gpu-control-plane-5mqhh in cluster capz-e2e-1fstlz-gpu in namespace capz-e2e-1fstlz
Jan 24 00:12:09.914: INFO: Collecting boot logs for AzureMachine capz-e2e-1fstlz-gpu-control-plane-5mqhh
... skipping 61 lines ...
INFO: Deleting namespace capz-e2e-1fstlz
Jan 24 00:30:01.475: INFO: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs @ 01/24/23 00:30:02.095
INFO: "with a single control plane node and 1 node" started at Tue, 24 Jan 2023 00:33:05 UTC on Ginkgo node 10 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
<< Timeline
[FAILED] Timed out after 1800.001s.
Timed out waiting for 1 nodes to be created for MachineDeployment capz-e2e-1fstlz/capz-e2e-1fstlz-gpu-md-0
Expected
    <int>: 0
to equal
    <int>: 1
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinedeployment_helpers.go:131 @ 01/24/23 00:11:51.411
... skipping 23 lines ...
[ReportAfterSuite] PASSED [0.031 seconds]
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
------------------------------

Summarizing 1 Failure:
[FAIL] Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] [It] with a single control plane node and 1 node
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinedeployment_helpers.go:131

Ran 7 of 27 Specs in 3812.638 seconds
FAIL! -- 6 Passed | 1 Failed | 0 Pending | 20 Skipped

You're using deprecated Ginkgo functionality:
=============================================
CurrentGinkgoTestDescription() is deprecated in Ginkgo V2. Use CurrentSpecReport() instead.
Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-currentginkgotestdescription
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:429
... skipping 85 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:285
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:429

To silence deprecations that can be silenced set the following environment variable:
ACK_GINKGO_DEPRECATIONS=2.7.0

--- FAIL: TestE2E (3810.34s)
FAIL

Ginkgo ran 1 suite in 1h7m39.875033482s
Test Suite Failed
make[1]: *** [Makefile:654: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:663: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...