Result | FAILURE |
Tests | 1 failed / 27 succeeded |
Started | |
Elapsed | 46m0s |
Revision | release-1.7 |
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sRunning\sthe\sCluster\sAPI\sE2E\stests\sRunning\sthe\sMachineDeployment\srollout\sspec\sShould\ssuccessfully\supgrade\sMachines\supon\schanges\sin\srelevant\sMachineDeployment\sfields$'
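The focus regex above selects only the MachineDeployment rollout spec. To reproduce the failure outside of Prow, the same spec can be run on its own; the lines below are a minimal sketch, assuming the cluster-api-provider-azure checkout's `make test-e2e` target (the one that fails at the end of this log) honors a GINKGO_FOCUS variable and that Azure credentials are already exported.

# Sketch only, not taken from this log: re-run just the failed spec.
cd "$(go env GOPATH)/src/sigs.k8s.io/cluster-api-provider-azure"
GINKGO_FOCUS='Running the MachineDeployment rollout spec' make test-e2e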
[FAILED] Failed to get controller-runtime client
Unexpected error:
    <*url.Error | 0xc000f199b0>: {
        Op: "Get",
        URL: "https://md-rollout-ilisli-27b77228.uksouth.cloudapp.azure.com:6443/api?timeout=32s",
        Err: <http.tlsHandshakeTimeoutError>{},
    }
    Get "https://md-rollout-ilisli-27b77228.uksouth.cloudapp.azure.com:6443/api?timeout=32s": net/http: TLS handshake timeout occurred
In [AfterEach] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_proxy.go:193 @ 01/28/23 21:56:24.601
from junit.e2e_suite.1.xml
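The spec itself passed (see "STEP: PASSED!" in the timeline below); the failure happens roughly 18 minutes later in [AfterEach], when the framework rebuilds a client for the workload cluster to dump kube-system pod logs and the TLS handshake to the API server times out. A quick way to tell a dead endpoint from a slow handshake is to probe the host from the error separately for DNS, TCP, and TLS. The commands below are a diagnostic sketch, not part of the test run; an unauthenticated request may return 401/403, which still proves the handshake completes.

# Diagnostic sketch for the endpoint named in the error above.
HOST=md-rollout-ilisli-27b77228.uksouth.cloudapp.azure.com
nslookup "$HOST"                                          # does the public DNS label still resolve?
timeout 10 bash -c "exec 3<>/dev/tcp/$HOST/6443" && echo "tcp 6443 reachable"
openssl s_client -connect "$HOST:6443" -servername "$HOST" </dev/null | head -n 5   # does the handshake finish?
curl -k --max-time 32 "https://$HOST:6443/version"        # the test client timed out on a similar GET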
cluster.cluster.x-k8s.io/md-rollout-ilisli created
azurecluster.infrastructure.cluster.x-k8s.io/md-rollout-ilisli created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/md-rollout-ilisli-control-plane created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/md-rollout-ilisli-control-plane created
machinedeployment.cluster.x-k8s.io/md-rollout-ilisli-md-0 created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/md-rollout-ilisli-md-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/md-rollout-ilisli-md-0 created
machinedeployment.cluster.x-k8s.io/md-rollout-ilisli-md-win created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/md-rollout-ilisli-md-win created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/md-rollout-ilisli-md-win created
machinehealthcheck.cluster.x-k8s.io/md-rollout-ilisli-mhc-0 created
clusterresourceset.addons.cluster.x-k8s.io/md-rollout-ilisli-calico-windows created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created
clusterresourceset.addons.cluster.x-k8s.io/csi-proxy created
clusterresourceset.addons.cluster.x-k8s.io/containerd-logger-md-rollout-ilisli created
configmap/cni-md-rollout-ilisli-calico-windows created
configmap/csi-proxy-addon created
configmap/containerd-logger-md-rollout-ilisli created
felixconfiguration.crd.projectcalico.org/default configured
Failed to get logs for Machine md-rollout-ilisli-md-win-794dbb7cf-6zr95, Cluster md-rollout-tmdobj/md-rollout-ilisli: [dialing from control plane to target node at md-rollou-qgvcj: ssh: rejected: connect failed (Temporary failure in name resolution), Unable to collect VM Boot Diagnostic logs: failed to get boot diagnostics data: compute.VirtualMachinesClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ResourceNotFound" Message="The Resource 'Microsoft.Compute/virtualMachines/md-rollou-qgvcj' under resource group 'capz-e2e-onmgii' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"]
Failed to get logs for Machine md-rollout-ilisli-md-win-794dbb7cf-jwv2f, Cluster md-rollout-tmdobj/md-rollout-ilisli: [dialing from control plane to target node at md-rollou-mxf9p: ssh: rejected: connect failed (Temporary failure in name resolution), Unable to collect VM Boot Diagnostic logs: failed to get boot diagnostics data: compute.VirtualMachinesClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ResourceNotFound" Message="The Resource 'Microsoft.Compute/virtualMachines/md-rollou-mxf9p' under resource group 'capz-e2e-onmgii' was not found.
For more details please go to https://aka.ms/ARMResourceNotFoundFix"] Failed to get logs for Machine md-rollout-ilisli-md-win-c9d858f64-fzhnk, Cluster md-rollout-tmdobj/md-rollout-ilisli: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1] > Enter [BeforeEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:52 @ 01/28/23 21:23:04.603 INFO: "" started at Sat, 28 Jan 2023 21:23:04 UTC on Ginkgo node 10 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml < Exit [BeforeEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:52 @ 01/28/23 21:23:04.898 (295ms) > Enter [BeforeEach] Running the MachineDeployment rollout spec - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/md_rollout.go:56 @ 01/28/23 21:23:04.898 STEP: Creating a namespace for hosting the "md-rollout" test spec - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 01/28/23 21:23:04.898 INFO: Creating namespace md-rollout-tmdobj INFO: Creating event watcher for namespace "md-rollout-tmdobj" < Exit [BeforeEach] Running the MachineDeployment rollout spec - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/md_rollout.go:56 @ 01/28/23 21:23:05.05 (152ms) > Enter [It] Should successfully upgrade Machines upon changes in relevant MachineDeployment fields - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/md_rollout.go:71 @ 01/28/23 21:23:05.05 STEP: Creating a workload cluster - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/md_rollout.go:72 @ 01/28/23 21:23:05.05 INFO: Creating the workload cluster with name "md-rollout-ilisli" using the "(default)" template (Kubernetes v1.24.10, 1 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster md-rollout-ilisli --infrastructure (default) --kubernetes-version v1.24.10 --control-plane-machine-count 1 --worker-machine-count 1 --flavor (default) INFO: Applying the cluster template yaml to the cluster INFO: Waiting for the cluster infrastructure to be provisioned STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_helpers.go:134 @ 01/28/23 21:23:13.99 INFO: Waiting for control plane to be initialized STEP: Installing Calico CNI via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:51 @ 01/28/23 21:25:34.165 STEP: Configuring calico CNI helm chart for IPv4 configuration - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:131 @ 01/28/23 21:25:34.165 Jan 28 21:28:14.557: INFO: getting history for release projectcalico Jan 28 21:28:14.663: INFO: Release projectcalico does not exist, installing it Jan 28 21:28:15.993: INFO: creating 1 resource(s) Jan 28 21:28:16.128: INFO: creating 1 resource(s) Jan 28 21:28:16.359: INFO: creating 1 resource(s) Jan 28 21:28:16.599: INFO: creating 1 resource(s) Jan 28 21:28:16.737: INFO: creating 1 resource(s) Jan 28 21:28:16.866: INFO: creating 1 resource(s) Jan 28 21:28:17.175: INFO: creating 1 resource(s) Jan 28 21:28:17.469: INFO: creating 1 resource(s) Jan 28 21:28:17.756: INFO: creating 1 resource(s) Jan 28 
21:28:17.906: INFO: creating 1 resource(s) Jan 28 21:28:18.151: INFO: creating 1 resource(s) Jan 28 21:28:18.269: INFO: creating 1 resource(s) Jan 28 21:28:18.459: INFO: creating 1 resource(s) Jan 28 21:28:18.631: INFO: creating 1 resource(s) Jan 28 21:28:18.758: INFO: creating 1 resource(s) Jan 28 21:28:19.017: INFO: creating 1 resource(s) Jan 28 21:28:19.198: INFO: creating 1 resource(s) Jan 28 21:28:19.350: INFO: creating 1 resource(s) Jan 28 21:28:19.566: INFO: creating 1 resource(s) Jan 28 21:28:19.859: INFO: creating 1 resource(s) Jan 28 21:28:20.410: INFO: creating 1 resource(s) Jan 28 21:28:20.660: INFO: Clearing discovery cache Jan 28 21:28:20.660: INFO: beginning wait for 21 resources with timeout of 1m0s Jan 28 21:28:26.003: INFO: creating 1 resource(s) Jan 28 21:28:27.175: INFO: creating 6 resource(s) Jan 28 21:28:28.933: INFO: Install complete STEP: Waiting for Ready tigera-operator deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:60 @ 01/28/23 21:28:29.889 STEP: waiting for deployment tigera-operator/tigera-operator to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/28/23 21:28:30.312 Jan 28 21:28:30.312: INFO: starting to wait for deployment to become available Jan 28 21:28:40.551: INFO: Deployment tigera-operator/tigera-operator is now available, took 10.238705311s STEP: Waiting for Ready calico-system deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:74 @ 01/28/23 21:28:41.916 STEP: waiting for deployment calico-system/calico-kube-controllers to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/28/23 21:28:42.439 Jan 28 21:28:42.439: INFO: starting to wait for deployment to become available Jan 28 21:29:33.120: INFO: Deployment calico-system/calico-kube-controllers is now available, took 50.681141548s STEP: waiting for deployment calico-system/calico-typha to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/28/23 21:29:33.639 Jan 28 21:29:33.639: INFO: starting to wait for deployment to become available Jan 28 21:29:33.743: INFO: Deployment calico-system/calico-typha is now available, took 103.699012ms STEP: Waiting for Ready calico-apiserver deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:79 @ 01/28/23 21:29:33.743 STEP: waiting for deployment calico-apiserver/calico-apiserver to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/28/23 21:29:44.366 Jan 28 21:29:44.366: INFO: starting to wait for deployment to become available Jan 28 21:29:54.574: INFO: Deployment calico-apiserver/calico-apiserver is now available, took 10.208057818s STEP: Waiting for Ready calico-node daemonset pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:84 @ 01/28/23 21:29:54.574 STEP: waiting for daemonset calico-system/calico-node to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/28/23 21:29:55.095 Jan 28 21:29:55.095: INFO: waiting for daemonset calico-system/calico-node to be complete Jan 28 21:29:55.200: INFO: 1 daemonset calico-system/calico-node pods are running, took 104.481963ms STEP: Waiting for Ready calico windows pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:91 @ 01/28/23 21:29:55.2 STEP: waiting for daemonset calico-system/calico-node-windows to be 
complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/28/23 21:29:55.717 Jan 28 21:29:55.717: INFO: waiting for daemonset calico-system/calico-node-windows to be complete Jan 28 21:29:55.821: INFO: 0 daemonset calico-system/calico-node-windows pods are running, took 103.355704ms STEP: Waiting for Ready calico windows pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:97 @ 01/28/23 21:29:55.821 STEP: waiting for daemonset kube-system/kube-proxy-windows to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/28/23 21:29:56.243 Jan 28 21:29:56.243: INFO: waiting for daemonset kube-system/kube-proxy-windows to be complete Jan 28 21:29:56.346: INFO: 0 daemonset kube-system/kube-proxy-windows pods are running, took 103.033215ms INFO: Waiting for the first control plane machine managed by md-rollout-tmdobj/md-rollout-ilisli-control-plane to be provisioned STEP: Waiting for one control plane node to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 @ 01/28/23 21:29:56.365 STEP: Installing azure-disk CSI driver components via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:71 @ 01/28/23 21:29:56.372 Jan 28 21:29:56.493: INFO: getting history for release azuredisk-csi-driver-oot Jan 28 21:29:56.597: INFO: Release azuredisk-csi-driver-oot does not exist, installing it Jan 28 21:30:00.478: INFO: creating 1 resource(s) Jan 28 21:30:00.819: INFO: creating 18 resource(s) Jan 28 21:30:01.643: INFO: Install complete STEP: Waiting for Ready csi-azuredisk-controller deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:81 @ 01/28/23 21:30:01.661 STEP: waiting for deployment kube-system/csi-azuredisk-controller to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/28/23 21:30:02.088 Jan 28 21:30:02.088: INFO: starting to wait for deployment to become available Jan 28 21:30:42.956: INFO: Deployment kube-system/csi-azuredisk-controller is now available, took 40.86897352s STEP: Waiting for Running azure-disk-csi node pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:86 @ 01/28/23 21:30:42.957 STEP: waiting for daemonset kube-system/csi-azuredisk-node to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/28/23 21:30:43.477 Jan 28 21:30:43.477: INFO: waiting for daemonset kube-system/csi-azuredisk-node to be complete Jan 28 21:30:43.581: INFO: 2 daemonset kube-system/csi-azuredisk-node pods are running, took 103.695629ms STEP: waiting for daemonset kube-system/csi-azuredisk-node-win to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/28/23 21:30:44.097 Jan 28 21:30:44.097: INFO: waiting for daemonset kube-system/csi-azuredisk-node-win to be complete Jan 28 21:30:44.201: INFO: 0 daemonset kube-system/csi-azuredisk-node-win pods are running, took 103.601496ms INFO: Waiting for control plane to be ready INFO: Waiting for control plane md-rollout-tmdobj/md-rollout-ilisli-control-plane to be ready (implies underlying nodes to be ready as well) STEP: Waiting for the control plane to be ready - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:165 @ 01/28/23 21:30:44.217 STEP: Checking all the control plane machines are in 
the expected failure domains - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:196 @ 01/28/23 21:30:44.224 INFO: Waiting for the machine deployments to be provisioned STEP: Waiting for the workload nodes to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/machinedeployment_helpers.go:102 @ 01/28/23 21:30:44.249 STEP: Checking all the machines controlled by md-rollout-ilisli-md-0 are in the "<None>" failure domain - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/28/23 21:30:44.266 STEP: Waiting for the workload nodes to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/machinedeployment_helpers.go:102 @ 01/28/23 21:30:44.277 STEP: Checking all the machines controlled by md-rollout-ilisli-md-win are in the "<None>" failure domain - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/28/23 21:32:34.445 INFO: Waiting for the machine pools to be provisioned STEP: Upgrading MachineDeployment Infrastructure ref and wait for rolling upgrade - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/md_rollout.go:93 @ 01/28/23 21:32:34.503 INFO: Patching the new infrastructure ref to Machine Deployment md-rollout-tmdobj/md-rollout-ilisli-md-0 INFO: Waiting for rolling upgrade to start. INFO: Waiting for MachineDeployment rolling upgrade to start INFO: Waiting for rolling upgrade to complete. INFO: Waiting for MachineDeployment rolling upgrade to complete INFO: Patching the new infrastructure ref to Machine Deployment md-rollout-tmdobj/md-rollout-ilisli-md-win INFO: Waiting for rolling upgrade to start. INFO: Waiting for MachineDeployment rolling upgrade to start INFO: Waiting for rolling upgrade to complete. INFO: Waiting for MachineDeployment rolling upgrade to complete STEP: PASSED! 
- /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/md_rollout.go:100 @ 01/28/23 21:38:04.816
< Exit [It] Should successfully upgrade Machines upon changes in relevant MachineDeployment fields - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/md_rollout.go:71 @ 01/28/23 21:38:04.816 (14m59.766s)
> Enter [AfterEach] Running the MachineDeployment rollout spec - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/md_rollout.go:103 @ 01/28/23 21:38:04.816
STEP: Dumping logs from the "md-rollout-ilisli" workload cluster - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 01/28/23 21:38:04.816
Jan 28 21:38:04.816: INFO: Dumping workload cluster md-rollout-tmdobj/md-rollout-ilisli logs
Jan 28 21:38:04.868: INFO: Collecting logs for Linux node md-rollout-ilisli-control-plane-5vk95 in cluster md-rollout-ilisli in namespace md-rollout-tmdobj
Jan 28 21:38:17.196: INFO: Collecting boot logs for AzureMachine md-rollout-ilisli-control-plane-5vk95
Jan 28 21:38:18.930: INFO: Collecting logs for Linux node md-rollout-ilisli-md-0-t9071l-lwkwt in cluster md-rollout-ilisli in namespace md-rollout-tmdobj
Jan 28 21:38:29.014: INFO: Collecting boot logs for AzureMachine md-rollout-ilisli-md-0-t9071l-lwkwt
Jan 28 21:38:30.079: INFO: Collecting logs for Windows node md-rollou-qgvcj in cluster md-rollout-ilisli in namespace md-rollout-tmdobj
Jan 28 21:42:49.327: INFO: Attempting to copy file /c:/crashdumps.tar on node md-rollou-qgvcj to /logs/artifacts/clusters/md-rollout-ilisli/machines/md-rollout-ilisli-md-win-794dbb7cf-6zr95/crashdumps.tar
Jan 28 21:42:50.900: INFO: Collecting boot logs for AzureMachine md-rollout-ilisli-md-win-qgvcj
Jan 28 21:42:52.002: INFO: Collecting logs for Windows node md-rollou-mxf9p in cluster md-rollout-ilisli in namespace md-rollout-tmdobj
Jan 28 21:47:32.285: INFO: Attempting to copy file /c:/crashdumps.tar on node md-rollou-mxf9p to /logs/artifacts/clusters/md-rollout-ilisli/machines/md-rollout-ilisli-md-win-794dbb7cf-jwv2f/crashdumps.tar
Jan 28 21:47:41.907: INFO: Collecting boot logs for AzureMachine md-rollout-ilisli-md-win-mxf9p
Jan 28 21:47:42.651: INFO: Collecting logs for Windows node md-rollou-9nrb8 in cluster md-rollout-ilisli in namespace md-rollout-tmdobj
Jan 28 21:52:18.638: INFO: Attempting to copy file /c:/crashdumps.tar on node md-rollou-9nrb8 to /logs/artifacts/clusters/md-rollout-ilisli/machines/md-rollout-ilisli-md-win-c9d858f64-fzhnk/crashdumps.tar
Jan 28 21:53:01.769: INFO: Collecting boot logs for AzureMachine md-rollout-ilisli-md-win-p08ive-9nrb8
Jan 28 21:53:03.126: INFO: Dumping workload cluster md-rollout-tmdobj/md-rollout-ilisli kube-system pod logs
[FAILED] Failed to get controller-runtime client
Unexpected error:
    <*url.Error | 0xc000f199b0>: {
        Op: "Get",
        URL: "https://md-rollout-ilisli-27b77228.uksouth.cloudapp.azure.com:6443/api?timeout=32s",
        Err: <http.tlsHandshakeTimeoutError>{},
    }
    Get "https://md-rollout-ilisli-27b77228.uksouth.cloudapp.azure.com:6443/api?timeout=32s": net/http: TLS handshake timeout occurred
In [AfterEach] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_proxy.go:193 @ 01/28/23 21:56:24.601
< Exit [AfterEach] Running the MachineDeployment rollout spec - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/md_rollout.go:103 @ 01/28/23 21:56:24.601 (18m19.785s)
> Enter [AfterEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:97 @ 01/28/23 21:56:24.601
Jan 28 21:56:24.601: INFO: FAILED!
Jan 28 21:56:24.601: INFO: Cleaning up after "Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields" spec
STEP: Redacting sensitive information from logs - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:212 @ 01/28/23 21:56:24.601
INFO: "Should successfully upgrade Machines upon changes in relevant MachineDeployment fields" started at Sat, 28 Jan 2023 21:57:46 UTC on Ginkgo node 10 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
< Exit [AfterEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:97 @ 01/28/23 21:57:46.008 (1m21.407s)
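The captured output above also shows log collection failing for two of the Windows machines because their VMs were already gone from the resource group (ResourceNotFound on RetrieveBootDiagnosticsData), presumably deleted once the rollout replaced the old MachineDeployment machines. The lines below are a hedged az CLI sketch, not part of the test run, for checking what is actually left in that resource group; <vm-name> is a placeholder to fill in from the list.

# Sketch only: list surviving VMs in the resource group named in the
# ResourceNotFound errors, then pull the serial console log for one of them.
az vm list --resource-group capz-e2e-onmgii --query '[].name' --output tsv
az vm boot-diagnostics get-boot-log --resource-group capz-e2e-onmgii --name <vm-name>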
capz-e2e [It] Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e [It] Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e [It] Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e [It] Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [It] Conformance Tests conformance-tests
capz-e2e [It] Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node
capz-e2e [It] Workload cluster creation Creating a VMSS cluster [REQUIRED] with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a cluster that uses the external cloud provider and external azurediskcsi driver [OPTIONAL] with a 1 control plane nodes and 2 worker nodes
capz-e2e [It] Workload cluster creation Creating a cluster that uses the external cloud provider and machinepools [OPTIONAL] with 1 control plane node and 1 machinepool
capz-e2e [It] Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e [It] Workload cluster creation Creating a highly available cluster [REQUIRED] With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a ipv6 control-plane cluster [REQUIRED] With ipv6 worker node
capz-e2e [It] Workload cluster creation Creating a private cluster [OPTIONAL] Creates a public management cluster in a custom vnet
capz-e2e [It] Workload cluster creation Creating an AKS cluster [EXPERIMENTAL][Managed Kubernetes] with a single control plane node and 1 node
capz-e2e [It] Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node
capz-e2e [It] [K8s-Upgrade] Running the CSI migration tests [CSI Migration] Running CSI migration test CSI=external CCM=external AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with out-of-tree cloud provider
capz-e2e [It] [K8s-Upgrade] Running the CSI migration tests [CSI Migration] Running CSI migration test CSI=external CCM=internal AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with intree cloud provider
capz-e2e [It] [K8s-Upgrade] Running the CSI migration tests [CSI Migration] Running CSI migration test CSI=internal CCM=internal AzureDiskCSIMigration=false: upgrade to v1.23 should create volumes dynamically with intree cloud provider
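Of the specs listed above, only the MachineDeployment rollout spec failed; the rest either passed or were skipped in this run (the suite summary further down reports 7 passed, 1 failed, 18 skipped). The commands below are a sketch for confirming that against the junit report path reported in the timeline; the exact XML layout can differ between Ginkgo versions.

# Sketch only: count recorded failures and list the test cases in the junit report.
grep -c '<failure' /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
grep -o '<testcase name="[^"]*"' /logs/artifacts/test_e2e_junit.e2e_suite.1.xml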
... skipping 828 lines ... [38;5;243m------------------------------[0m [38;5;10m• [1135.333 seconds][0m [0mRunning the Cluster API E2E tests [38;5;243mRunning the self-hosted spec [38;5;10m[1mShould pivot the bootstrap cluster to a self-hosted cluster[0m [38;5;243m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:108[0m [38;5;243mCaptured StdOut/StdErr Output >>[0m 2023/01/28 21:23:04 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/self-hosted-0bbdmk-md-0 created cluster.cluster.x-k8s.io/self-hosted-0bbdmk created machinedeployment.cluster.x-k8s.io/self-hosted-0bbdmk-md-0 created kubeadmcontrolplane.controlplane.cluster.x-k8s.io/self-hosted-0bbdmk-control-plane created azurecluster.infrastructure.cluster.x-k8s.io/self-hosted-0bbdmk created azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created ... skipping 207 lines ... Jan 28 21:35:05.971: INFO: Fetching activity logs took 1.422098335s Jan 28 21:35:05.971: INFO: Dumping all the Cluster API resources in the "self-hosted" namespace Jan 28 21:35:06.340: INFO: Deleting all clusters in the self-hosted namespace [1mSTEP:[0m Deleting cluster self-hosted-0bbdmk [38;5;243m@ 01/28/23 21:35:06.364[0m INFO: Waiting for the Cluster self-hosted/self-hosted-0bbdmk to be deleted [1mSTEP:[0m Waiting for cluster self-hosted-0bbdmk to be deleted [38;5;243m@ 01/28/23 21:35:06.373[0m INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-8c96b57bb-khkq9, container manager: http2: client connection lost INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-6bc947c55b-mvrjg, container manager: http2: client connection lost Jan 28 21:39:46.518: INFO: Deleting namespace used for hosting the "self-hosted" test spec INFO: Deleting namespace self-hosted Jan 28 21:39:46.536: INFO: Checking if any resources are left over in Azure for spec "self-hosted" [1mSTEP:[0m Redacting sensitive information from logs [38;5;243m@ 01/28/23 21:39:47.115[0m Jan 28 21:40:48.647: INFO: Cleaning up after "Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster" spec [1mSTEP:[0m Redacting sensitive information from logs [38;5;243m@ 01/28/23 21:40:48.647[0m ... skipping 27 lines ... 
configmap/cni-quick-start-jh0u1r-calico-windows created configmap/csi-proxy-addon created configmap/containerd-logger-quick-start-jh0u1r created felixconfiguration.crd.projectcalico.org/default configured Failed to get logs for Machine quick-start-jh0u1r-md-win-86b9b4f868-f66dv, Cluster quick-start-4kkwys/quick-start-jh0u1r: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1] Failed to get logs for Machine quick-start-jh0u1r-md-win-86b9b4f868-pffvh, Cluster quick-start-4kkwys/quick-start-jh0u1r: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1] [38;5;243m<< Captured StdOut/StdErr Output[0m [38;5;243mTimeline >>[0m INFO: "" started at Sat, 28 Jan 2023 21:23:04 UTC on Ginkgo node 1 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml [1mSTEP:[0m Creating a namespace for hosting the "quick-start" test spec [38;5;243m@ 01/28/23 21:23:04.764[0m INFO: Creating namespace quick-start-4kkwys ... skipping 603 lines ... Jan 28 21:41:11.486: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-qjlp2, container csi-resizer Jan 28 21:41:11.486: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-qjlp2, container liveness-probe Jan 28 21:41:11.487: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-qjlp2, container csi-attacher Jan 28 21:41:11.487: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-qjlp2, container azuredisk Jan 28 21:41:11.488: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-qjlp2, container csi-snapshotter Jan 28 21:41:11.488: INFO: Describing Pod kube-system/csi-azuredisk-controller-545d478dbf-qjlp2 Jan 28 21:41:11.642: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-controller-545d478dbf-qjlp2, container csi-snapshotter: container "csi-snapshotter" in pod "csi-azuredisk-controller-545d478dbf-qjlp2" is waiting to start: ContainerCreating Jan 28 21:41:11.647: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-controller-545d478dbf-qjlp2, container liveness-probe: container "liveness-probe" in pod "csi-azuredisk-controller-545d478dbf-qjlp2" is waiting to start: ContainerCreating Jan 28 21:41:11.647: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-controller-545d478dbf-qjlp2, container csi-attacher: container "csi-attacher" in pod "csi-azuredisk-controller-545d478dbf-qjlp2" is waiting to start: ContainerCreating Jan 28 21:41:11.656: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-controller-545d478dbf-qjlp2, container azuredisk: container "azuredisk" in pod "csi-azuredisk-controller-545d478dbf-qjlp2" is waiting to start: ContainerCreating Jan 28 21:41:11.656: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-controller-545d478dbf-qjlp2, container csi-provisioner: container "csi-provisioner" in pod "csi-azuredisk-controller-545d478dbf-qjlp2" is waiting to start: ContainerCreating Jan 28 21:41:11.656: INFO: Error starting 
logs stream for pod kube-system/csi-azuredisk-controller-545d478dbf-qjlp2, container csi-resizer: container "csi-resizer" in pod "csi-azuredisk-controller-545d478dbf-qjlp2" is waiting to start: ContainerCreating Jan 28 21:41:11.708: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-qcnv7, container node-driver-registrar Jan 28 21:41:11.708: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-qcnv7, container liveness-probe Jan 28 21:41:11.708: INFO: Describing Pod kube-system/csi-azuredisk-node-qcnv7 Jan 28 21:41:11.709: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-qcnv7, container azuredisk Jan 28 21:41:11.918: INFO: Creating log watcher for controller kube-system/etcd-node-drain-en543w-control-plane-nt2bn, container etcd Jan 28 21:41:11.918: INFO: Describing Pod kube-system/etcd-node-drain-en543w-control-plane-nt2bn ... skipping 65 lines ... configmap/cni-md-scale-yo3qx5-calico-windows created configmap/csi-proxy-addon created configmap/containerd-logger-md-scale-yo3qx5 created felixconfiguration.crd.projectcalico.org/default configured Failed to get logs for Machine md-scale-yo3qx5-md-win-76648b6b95-2mws6, Cluster md-scale-fhg3vm/md-scale-yo3qx5: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1] Failed to get logs for Machine md-scale-yo3qx5-md-win-76648b6b95-752f9, Cluster md-scale-fhg3vm/md-scale-yo3qx5: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1] [38;5;243m<< Captured StdOut/StdErr Output[0m [38;5;243mTimeline >>[0m INFO: "" started at Sat, 28 Jan 2023 21:23:04 UTC on Ginkgo node 7 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml [1mSTEP:[0m Creating a namespace for hosting the "md-scale" test spec [38;5;243m@ 01/28/23 21:23:04.772[0m INFO: Creating namespace md-scale-fhg3vm ... skipping 370 lines ... 
Jan 28 21:46:00.101: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-54c7fc9cf5-qvzgs, container calico-apiserver Jan 28 21:46:00.101: INFO: Describing Pod calico-apiserver/calico-apiserver-54c7fc9cf5-qvzgs Jan 28 21:46:00.332: INFO: Creating log watcher for controller calico-system/calico-kube-controllers-594d54f99-sjqkz, container calico-kube-controllers Jan 28 21:46:00.332: INFO: Describing Pod calico-system/calico-kube-controllers-594d54f99-sjqkz Jan 28 21:46:00.588: INFO: Creating log watcher for controller calico-system/calico-node-47k7b, container calico-node Jan 28 21:46:00.590: INFO: Describing Pod calico-system/calico-node-47k7b Jan 28 21:46:00.715: INFO: Error starting logs stream for pod calico-system/calico-node-47k7b, container calico-node: pods "machine-pool-xvkbmk-mp-0000002" not found Jan 28 21:46:00.840: INFO: Creating log watcher for controller calico-system/calico-node-z6js9, container calico-node Jan 28 21:46:00.840: INFO: Describing Pod calico-system/calico-node-z6js9 Jan 28 21:46:01.256: INFO: Creating log watcher for controller calico-system/calico-typha-dbfdfbdf9-94278, container calico-typha Jan 28 21:46:01.256: INFO: Describing Pod calico-system/calico-typha-dbfdfbdf9-94278 Jan 28 21:46:01.472: INFO: Creating log watcher for controller calico-system/csi-node-driver-5wwtq, container calico-csi Jan 28 21:46:01.472: INFO: Creating log watcher for controller calico-system/csi-node-driver-5wwtq, container csi-node-driver-registrar Jan 28 21:46:01.473: INFO: Describing Pod calico-system/csi-node-driver-5wwtq Jan 28 21:46:01.686: INFO: Creating log watcher for controller calico-system/csi-node-driver-rzlfp, container calico-csi Jan 28 21:46:01.687: INFO: Creating log watcher for controller calico-system/csi-node-driver-rzlfp, container csi-node-driver-registrar Jan 28 21:46:01.687: INFO: Describing Pod calico-system/csi-node-driver-rzlfp Jan 28 21:46:01.792: INFO: Error starting logs stream for pod calico-system/csi-node-driver-rzlfp, container csi-node-driver-registrar: pods "machine-pool-xvkbmk-mp-0000002" not found Jan 28 21:46:01.792: INFO: Error starting logs stream for pod calico-system/csi-node-driver-rzlfp, container calico-csi: pods "machine-pool-xvkbmk-mp-0000002" not found Jan 28 21:46:01.901: INFO: Creating log watcher for controller kube-system/coredns-57575c5f89-5rffr, container coredns Jan 28 21:46:01.902: INFO: Describing Pod kube-system/coredns-57575c5f89-5rffr Jan 28 21:46:02.115: INFO: Creating log watcher for controller kube-system/coredns-57575c5f89-82tn2, container coredns Jan 28 21:46:02.115: INFO: Describing Pod kube-system/coredns-57575c5f89-82tn2 Jan 28 21:46:02.336: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-wc24c, container csi-snapshotter Jan 28 21:46:02.336: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-wc24c, container csi-resizer ... skipping 3 lines ... 
Jan 28 21:46:02.337: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-wc24c, container csi-attacher Jan 28 21:46:02.337: INFO: Describing Pod kube-system/csi-azuredisk-controller-545d478dbf-wc24c Jan 28 21:46:02.565: INFO: Describing Pod kube-system/csi-azuredisk-node-pmfvj Jan 28 21:46:02.565: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-pmfvj, container node-driver-registrar Jan 28 21:46:02.565: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-pmfvj, container liveness-probe Jan 28 21:46:02.565: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-pmfvj, container azuredisk Jan 28 21:46:02.671: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-pmfvj, container liveness-probe: pods "machine-pool-xvkbmk-mp-0000002" not found Jan 28 21:46:02.672: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-pmfvj, container node-driver-registrar: pods "machine-pool-xvkbmk-mp-0000002" not found Jan 28 21:46:02.673: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-pmfvj, container azuredisk: pods "machine-pool-xvkbmk-mp-0000002" not found Jan 28 21:46:02.963: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-vmd5g, container liveness-probe Jan 28 21:46:02.963: INFO: Describing Pod kube-system/csi-azuredisk-node-vmd5g Jan 28 21:46:02.964: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-vmd5g, container node-driver-registrar Jan 28 21:46:02.964: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-vmd5g, container azuredisk Jan 28 21:46:03.368: INFO: Describing Pod kube-system/etcd-machine-pool-xvkbmk-control-plane-8n786 Jan 28 21:46:03.368: INFO: Creating log watcher for controller kube-system/etcd-machine-pool-xvkbmk-control-plane-8n786, container etcd ... skipping 2 lines ... Jan 28 21:46:04.160: INFO: Describing Pod kube-system/kube-controller-manager-machine-pool-xvkbmk-control-plane-8n786 Jan 28 21:46:04.160: INFO: Creating log watcher for controller kube-system/kube-controller-manager-machine-pool-xvkbmk-control-plane-8n786, container kube-controller-manager Jan 28 21:46:04.603: INFO: Creating log watcher for controller kube-system/kube-proxy-2r667, container kube-proxy Jan 28 21:46:04.603: INFO: Describing Pod kube-system/kube-proxy-2r667 Jan 28 21:46:04.962: INFO: Describing Pod kube-system/kube-proxy-5mv8q Jan 28 21:46:04.962: INFO: Creating log watcher for controller kube-system/kube-proxy-5mv8q, container kube-proxy Jan 28 21:46:05.067: INFO: Error starting logs stream for pod kube-system/kube-proxy-5mv8q, container kube-proxy: pods "machine-pool-xvkbmk-mp-0000002" not found Jan 28 21:46:05.360: INFO: Describing Pod kube-system/kube-scheduler-machine-pool-xvkbmk-control-plane-8n786 Jan 28 21:46:05.360: INFO: Creating log watcher for controller kube-system/kube-scheduler-machine-pool-xvkbmk-control-plane-8n786, container kube-scheduler Jan 28 21:46:05.758: INFO: Fetching kube-system pod logs took 6.984216025s Jan 28 21:46:05.758: INFO: Dumping workload cluster machine-pool-nnrpni/machine-pool-xvkbmk Azure activity log Jan 28 21:46:05.758: INFO: Creating log watcher for controller tigera-operator/tigera-operator-65d6bf4d4f-k9hg4, container tigera-operator Jan 28 21:46:05.758: INFO: Describing Pod tigera-operator/tigera-operator-65d6bf4d4f-k9hg4 ... skipping 11 lines ... 
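The repeated 'Error starting logs stream ... pods "machine-pool-xvkbmk-mp-0000002" not found' messages above come from the machine-pool spec's log dump racing the pool's own scale-in: by the time the watcher asks for container logs, the node backing those pods has likely already been removed. The lines below are a hedged kubectl sketch for checking that against the workload cluster; $WORKLOAD_KUBECONFIG is illustrative, not a variable the test exports.

# Sketch only: is the node (and anything scheduled on it) still around?
kubectl --kubeconfig "$WORKLOAD_KUBECONFIG" get nodes -o wide
kubectl --kubeconfig "$WORKLOAD_KUBECONFIG" get pods --all-namespaces -o wide \
  --field-selector spec.nodeName=machine-pool-xvkbmk-mp-0000002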
[38;5;243m<< Timeline[0m [38;5;243m------------------------------[0m [38;5;10m[SynchronizedAfterSuite] PASSED [0.000 seconds][0m [38;5;10m[1m[SynchronizedAfterSuite] [0m [38;5;243m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/e2e_suite_test.go:116[0m [38;5;243m------------------------------[0m [38;5;9m• [FAILED] [2081.405 seconds][0m [0mRunning the Cluster API E2E tests [38;5;9m[1mRunning the MachineDeployment rollout spec [AfterEach] [0mShould successfully upgrade Machines upon changes in relevant MachineDeployment fields[0m [38;5;9m[AfterEach][0m [38;5;243m/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/md_rollout.go:103[0m [38;5;243m[It] /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/md_rollout.go:71[0m [38;5;243mCaptured StdOut/StdErr Output >>[0m cluster.cluster.x-k8s.io/md-rollout-ilisli created ... skipping 14 lines ... configmap/cni-md-rollout-ilisli-calico-windows created configmap/csi-proxy-addon created configmap/containerd-logger-md-rollout-ilisli created felixconfiguration.crd.projectcalico.org/default configured Failed to get logs for Machine md-rollout-ilisli-md-win-794dbb7cf-6zr95, Cluster md-rollout-tmdobj/md-rollout-ilisli: [dialing from control plane to target node at md-rollou-qgvcj: ssh: rejected: connect failed (Temporary failure in name resolution), Unable to collect VM Boot Diagnostic logs: failed to get boot diagnostics data: compute.VirtualMachinesClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ResourceNotFound" Message="The Resource 'Microsoft.Compute/virtualMachines/md-rollou-qgvcj' under resource group 'capz-e2e-onmgii' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"] Failed to get logs for Machine md-rollout-ilisli-md-win-794dbb7cf-jwv2f, Cluster md-rollout-tmdobj/md-rollout-ilisli: [dialing from control plane to target node at md-rollou-mxf9p: ssh: rejected: connect failed (Temporary failure in name resolution), Unable to collect VM Boot Diagnostic logs: failed to get boot diagnostics data: compute.VirtualMachinesClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ResourceNotFound" Message="The Resource 'Microsoft.Compute/virtualMachines/md-rollou-mxf9p' under resource group 'capz-e2e-onmgii' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"] Failed to get logs for Machine md-rollout-ilisli-md-win-c9d858f64-fzhnk, Cluster md-rollout-tmdobj/md-rollout-ilisli: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1] [38;5;243m<< Captured StdOut/StdErr Output[0m [38;5;243mTimeline >>[0m INFO: "" started at Sat, 28 Jan 2023 21:23:04 UTC on Ginkgo node 10 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml [1mSTEP:[0m Creating a namespace for hosting the "md-rollout" test spec [38;5;243m@ 01/28/23 21:23:04.898[0m INFO: Creating namespace md-rollout-tmdobj ... skipping 127 lines ... 
Jan 28 21:47:42.651: INFO: Collecting logs for Windows node md-rollou-9nrb8 in cluster md-rollout-ilisli in namespace md-rollout-tmdobj
Jan 28 21:52:18.638: INFO: Attempting to copy file /c:/crashdumps.tar on node md-rollou-9nrb8 to /logs/artifacts/clusters/md-rollout-ilisli/machines/md-rollout-ilisli-md-win-c9d858f64-fzhnk/crashdumps.tar
Jan 28 21:53:01.769: INFO: Collecting boot logs for AzureMachine md-rollout-ilisli-md-win-p08ive-9nrb8
Jan 28 21:53:03.126: INFO: Dumping workload cluster md-rollout-tmdobj/md-rollout-ilisli kube-system pod logs
[FAILED] in [AfterEach] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_proxy.go:193 @ 01/28/23 21:56:24.601
Jan 28 21:56:24.601: INFO: FAILED!
Jan 28 21:56:24.601: INFO: Cleaning up after "Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields" spec
STEP: Redacting sensitive information from logs @ 01/28/23 21:56:24.601
INFO: "Should successfully upgrade Machines upon changes in relevant MachineDeployment fields" started at Sat, 28 Jan 2023 21:57:46 UTC on Ginkgo node 10 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
<< Timeline
[FAILED] Failed to get controller-runtime client
Unexpected error:
    <*url.Error | 0xc000f199b0>: {
        Op: "Get",
        URL: "https://md-rollout-ilisli-27b77228.uksouth.cloudapp.azure.com:6443/api?timeout=32s",
        Err: <http.tlsHandshakeTimeoutError>{},
    }
    Get "https://md-rollout-ilisli-27b77228.uksouth.cloudapp.azure.com:6443/api?timeout=32s": net/http: TLS handshake timeout occurred
... skipping 26 lines ...
[ReportAfterSuite] PASSED [0.011 seconds]
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
------------------------------
Summarizing 1 Failure:
[FAIL] Running the Cluster API E2E tests Running the MachineDeployment rollout spec [AfterEach] Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_proxy.go:193
Ran 8 of 26 Specs in 2239.954 seconds
FAIL! -- 7 Passed | 1 Failed | 0 Pending | 18 Skipped
You're using deprecated Ginkgo functionality:
=============================================
CurrentGinkgoTestDescription() is deprecated in Ginkgo V2. Use CurrentSpecReport() instead.
Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-currentginkgotestdescription
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:423
... skipping 99 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:281
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:423
To silence deprecations that can be silenced set the following environment variable:
ACK_GINKGO_DEPRECATIONS=2.6.0
--- FAIL: TestE2E (2238.37s)
FAIL
Ginkgo ran 1 suite in 39m32.185979028s
Test Suite Failed
make[1]: *** [Makefile:655: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:664: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...