Result   | FAILURE
Tests    | 1 failed / 27 succeeded
Started  |
Elapsed  | 45m59s
Revision | release-1.7
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sRunning\sthe\sCluster\sAPI\sE2E\stests\sRunning\sthe\sMachineDeployment\srollout\sspec\sShould\ssuccessfully\supgrade\sMachines\supon\schanges\sin\srelevant\sMachineDeployment\sfields$'
[FAILED] Failed to get controller-runtime client
Unexpected error:
    <*url.Error | 0xc00129cf90>: {
        Op: "Get",
        URL: "https://md-rollout-p7ujbw-92d0d2fc.northeurope.cloudapp.azure.com:6443/api?timeout=32s",
        Err: <http.tlsHandshakeTimeoutError>{},
    }
    Get "https://md-rollout-p7ujbw-92d0d2fc.northeurope.cloudapp.azure.com:6443/api?timeout=32s": net/http: TLS handshake timeout occurred
In [AfterEach] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_proxy.go:193 @ 01/21/23 21:53:54.651
from junit.e2e_suite.1.xml
cluster.cluster.x-k8s.io/md-rollout-p7ujbw created
azurecluster.infrastructure.cluster.x-k8s.io/md-rollout-p7ujbw created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/md-rollout-p7ujbw-control-plane created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/md-rollout-p7ujbw-control-plane created
machinedeployment.cluster.x-k8s.io/md-rollout-p7ujbw-md-0 created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/md-rollout-p7ujbw-md-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/md-rollout-p7ujbw-md-0 created
machinedeployment.cluster.x-k8s.io/md-rollout-p7ujbw-md-win created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/md-rollout-p7ujbw-md-win created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/md-rollout-p7ujbw-md-win created
machinehealthcheck.cluster.x-k8s.io/md-rollout-p7ujbw-mhc-0 created
clusterresourceset.addons.cluster.x-k8s.io/md-rollout-p7ujbw-calico-windows created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created
clusterresourceset.addons.cluster.x-k8s.io/csi-proxy created
clusterresourceset.addons.cluster.x-k8s.io/containerd-logger-md-rollout-p7ujbw created
configmap/cni-md-rollout-p7ujbw-calico-windows created
configmap/csi-proxy-addon created
configmap/containerd-logger-md-rollout-p7ujbw created
felixconfiguration.crd.projectcalico.org/default configured
Failed to get logs for Machine md-rollout-p7ujbw-md-win-7bc6f966b4-689qn, Cluster md-rollout-kyz24f/md-rollout-p7ujbw: [dialing from control plane to target node at md-rollou-9wcwt: ssh: rejected: connect failed (Temporary failure in name resolution), Unable to collect VM Boot Diagnostic logs: failed to get boot diagnostics data: compute.VirtualMachinesClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. 
Status=404 Code="ResourceNotFound" Message="The Resource 'Microsoft.Compute/virtualMachines/md-rollou-9wcwt' under resource group 'capz-e2e-zu0ndi' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"] Failed to get logs for Machine md-rollout-p7ujbw-md-win-7bc6f966b4-6mkln, Cluster md-rollout-kyz24f/md-rollout-p7ujbw: [dialing from control plane to target node at md-rollou-gjvs7: ssh: rejected: connect failed (Temporary failure in name resolution), Unable to collect VM Boot Diagnostic logs: failed to get boot diagnostics data: compute.VirtualMachinesClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ResourceNotFound" Message="The Resource 'Microsoft.Compute/virtualMachines/md-rollou-gjvs7' under resource group 'capz-e2e-zu0ndi' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"] Failed to get logs for Machine md-rollout-p7ujbw-md-win-d868d747d-7tzcx, Cluster md-rollout-kyz24f/md-rollout-p7ujbw: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1] > Enter [BeforeEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:52 @ 01/21/23 21:20:51.577 INFO: "" started at Sat, 21 Jan 2023 21:20:51 UTC on Ginkgo node 9 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml < Exit [BeforeEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:52 @ 01/21/23 21:20:51.731 (154ms) > Enter [BeforeEach] Running the MachineDeployment rollout spec - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/md_rollout.go:56 @ 
01/21/23 21:20:51.731 STEP: Creating a namespace for hosting the "md-rollout" test spec - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 01/21/23 21:20:51.731 INFO: Creating namespace md-rollout-kyz24f INFO: Creating event watcher for namespace "md-rollout-kyz24f" < Exit [BeforeEach] Running the MachineDeployment rollout spec - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/md_rollout.go:56 @ 01/21/23 21:20:51.839 (108ms) > Enter [It] Should successfully upgrade Machines upon changes in relevant MachineDeployment fields - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/md_rollout.go:71 @ 01/21/23 21:20:51.839 STEP: Creating a workload cluster - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/md_rollout.go:72 @ 01/21/23 21:20:51.839 INFO: Creating the workload cluster with name "md-rollout-p7ujbw" using the "(default)" template (Kubernetes v1.24.9, 1 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster md-rollout-p7ujbw --infrastructure (default) --kubernetes-version v1.24.9 --control-plane-machine-count 1 --worker-machine-count 1 --flavor (default) INFO: Applying the cluster template yaml to the cluster INFO: Waiting for the cluster infrastructure to be provisioned STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_helpers.go:134 @ 01/21/23 21:20:55.922 INFO: Waiting for control plane to be initialized STEP: Installing Calico CNI via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:51 @ 01/21/23 21:23:06.044 STEP: Configuring calico CNI helm chart for IPv4 configuration - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:131 @ 01/21/23 21:23:06.044 Jan 21 21:25:46.440: INFO: getting history for release projectcalico Jan 21 21:25:46.545: INFO: Release projectcalico does not exist, installing it Jan 
21 21:25:47.780: INFO: creating 1 resource(s) Jan 21 21:25:47.931: INFO: creating 1 resource(s) Jan 21 21:25:48.058: INFO: creating 1 resource(s) Jan 21 21:25:48.184: INFO: creating 1 resource(s) Jan 21 21:25:48.316: INFO: creating 1 resource(s) Jan 21 21:25:48.443: INFO: creating 1 resource(s) Jan 21 21:25:48.924: INFO: creating 1 resource(s) Jan 21 21:25:49.090: INFO: creating 1 resource(s) Jan 21 21:25:49.212: INFO: creating 1 resource(s) Jan 21 21:25:49.344: INFO: creating 1 resource(s) Jan 21 21:25:49.468: INFO: creating 1 resource(s) Jan 21 21:25:49.593: INFO: creating 1 resource(s) Jan 21 21:25:49.720: INFO: creating 1 resource(s) Jan 21 21:25:49.841: INFO: creating 1 resource(s) Jan 21 21:25:49.964: INFO: creating 1 resource(s) Jan 21 21:25:50.141: INFO: creating 1 resource(s) Jan 21 21:25:50.301: INFO: creating 1 resource(s) Jan 21 21:25:50.423: INFO: creating 1 resource(s) Jan 21 21:25:50.601: INFO: creating 1 resource(s) Jan 21 21:25:50.785: INFO: creating 1 resource(s) Jan 21 21:25:51.362: INFO: creating 1 resource(s) Jan 21 21:25:51.514: INFO: Clearing discovery cache Jan 21 21:25:51.514: INFO: beginning wait for 21 resources with timeout of 1m0s Jan 21 21:25:56.905: INFO: creating 1 resource(s) Jan 21 21:25:57.710: INFO: creating 6 resource(s) Jan 21 21:25:58.990: INFO: Install complete STEP: Waiting for Ready tigera-operator deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:60 @ 01/21/23 21:25:59.805 STEP: waiting for deployment tigera-operator/tigera-operator to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/21/23 21:26:00.23 Jan 21 21:26:00.230: INFO: starting to wait for deployment to become available Jan 21 21:26:10.439: INFO: Deployment tigera-operator/tigera-operator is now available, took 10.208334413s STEP: Waiting for Ready calico-system deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:74 @ 01/21/23 
21:26:11.633 STEP: waiting for deployment calico-system/calico-kube-controllers to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/21/23 21:26:12.153 Jan 21 21:26:12.153: INFO: starting to wait for deployment to become available Jan 21 21:27:04.729: INFO: Deployment calico-system/calico-kube-controllers is now available, took 52.575488951s STEP: waiting for deployment calico-system/calico-typha to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/21/23 21:27:05.255 Jan 21 21:27:05.255: INFO: starting to wait for deployment to become available Jan 21 21:27:05.360: INFO: Deployment calico-system/calico-typha is now available, took 104.673618ms STEP: Waiting for Ready calico-apiserver deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:79 @ 01/21/23 21:27:05.36 STEP: waiting for deployment calico-apiserver/calico-apiserver to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/21/23 21:27:15.997 Jan 21 21:27:15.997: INFO: starting to wait for deployment to become available Jan 21 21:27:36.400: INFO: Deployment calico-apiserver/calico-apiserver is now available, took 20.40357755s STEP: Waiting for Ready calico-node daemonset pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:84 @ 01/21/23 21:27:36.4 STEP: waiting for daemonset calico-system/calico-node to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/21/23 21:27:36.929 Jan 21 21:27:36.929: INFO: waiting for daemonset calico-system/calico-node to be complete Jan 21 21:27:37.037: INFO: 1 daemonset calico-system/calico-node pods are running, took 107.526531ms STEP: Waiting for Ready calico windows pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:91 @ 01/21/23 21:27:37.037 STEP: waiting for daemonset 
calico-system/calico-node-windows to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/21/23 21:27:37.558 Jan 21 21:27:37.558: INFO: waiting for daemonset calico-system/calico-node-windows to be complete Jan 21 21:27:37.662: INFO: 0 daemonset calico-system/calico-node-windows pods are running, took 104.115803ms STEP: Waiting for Ready calico windows pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:97 @ 01/21/23 21:27:37.662 STEP: waiting for daemonset kube-system/kube-proxy-windows to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/21/23 21:27:38.091 Jan 21 21:27:38.091: INFO: waiting for daemonset kube-system/kube-proxy-windows to be complete Jan 21 21:27:38.195: INFO: 0 daemonset kube-system/kube-proxy-windows pods are running, took 103.603495ms INFO: Waiting for the first control plane machine managed by md-rollout-kyz24f/md-rollout-p7ujbw-control-plane to be provisioned STEP: Waiting for one control plane node to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 @ 01/21/23 21:27:38.22 STEP: Installing azure-disk CSI driver components via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:71 @ 01/21/23 21:27:38.226 Jan 21 21:27:38.352: INFO: getting history for release azuredisk-csi-driver-oot Jan 21 21:27:38.461: INFO: Release azuredisk-csi-driver-oot does not exist, installing it Jan 21 21:27:42.759: INFO: creating 1 resource(s) Jan 21 21:27:43.109: INFO: creating 18 resource(s) Jan 21 21:27:44.001: INFO: Install complete STEP: Waiting for Ready csi-azuredisk-controller deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:81 @ 01/21/23 21:27:44.021 STEP: waiting for deployment kube-system/csi-azuredisk-controller to be available - 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/21/23 21:27:44.477 Jan 21 21:27:44.477: INFO: starting to wait for deployment to become available Jan 21 21:28:25.129: INFO: Deployment kube-system/csi-azuredisk-controller is now available, took 40.651968741s STEP: Waiting for Running azure-disk-csi node pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:86 @ 01/21/23 21:28:25.129 STEP: waiting for daemonset kube-system/csi-azuredisk-node to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/21/23 21:28:25.653 Jan 21 21:28:25.653: INFO: waiting for daemonset kube-system/csi-azuredisk-node to be complete Jan 21 21:28:25.756: INFO: 2 daemonset kube-system/csi-azuredisk-node pods are running, took 103.912457ms STEP: waiting for daemonset kube-system/csi-azuredisk-node-win to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/21/23 21:28:26.277 Jan 21 21:28:26.277: INFO: waiting for daemonset kube-system/csi-azuredisk-node-win to be complete Jan 21 21:28:26.381: INFO: 0 daemonset kube-system/csi-azuredisk-node-win pods are running, took 103.984467ms INFO: Waiting for control plane to be ready INFO: Waiting for control plane md-rollout-kyz24f/md-rollout-p7ujbw-control-plane to be ready (implies underlying nodes to be ready as well) STEP: Waiting for the control plane to be ready - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:165 @ 01/21/23 21:28:26.395 STEP: Checking all the control plane machines are in the expected failure domains - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:196 @ 01/21/23 21:28:26.401 INFO: Waiting for the machine deployments to be provisioned STEP: Waiting for the workload nodes to exist - 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/machinedeployment_helpers.go:102 @ 01/21/23 21:28:26.432 STEP: Checking all the machines controlled by md-rollout-p7ujbw-md-0 are in the "<None>" failure domain - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/21/23 21:28:26.445 STEP: Waiting for the workload nodes to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/machinedeployment_helpers.go:102 @ 01/21/23 21:28:26.457 STEP: Checking all the machines controlled by md-rollout-p7ujbw-md-win are in the "<None>" failure domain - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/21/23 21:29:26.55 INFO: Waiting for the machine pools to be provisioned STEP: Upgrading MachineDeployment Infrastructure ref and wait for rolling upgrade - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/md_rollout.go:93 @ 01/21/23 21:29:26.596 INFO: Patching the new infrastructure ref to Machine Deployment md-rollout-kyz24f/md-rollout-p7ujbw-md-0 INFO: Waiting for rolling upgrade to start. INFO: Waiting for MachineDeployment rolling upgrade to start INFO: Waiting for rolling upgrade to complete. INFO: Waiting for MachineDeployment rolling upgrade to complete INFO: Patching the new infrastructure ref to Machine Deployment md-rollout-kyz24f/md-rollout-p7ujbw-md-win INFO: Waiting for rolling upgrade to start. INFO: Waiting for MachineDeployment rolling upgrade to start INFO: Waiting for rolling upgrade to complete. INFO: Waiting for MachineDeployment rolling upgrade to complete STEP: PASSED! 
- /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/md_rollout.go:100 @ 01/21/23 21:34:56.88 < Exit [It] Should successfully upgrade Machines upon changes in relevant MachineDeployment fields - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/md_rollout.go:71 @ 01/21/23 21:34:56.881 (14m5.042s) > Enter [AfterEach] Running the MachineDeployment rollout spec - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/md_rollout.go:103 @ 01/21/23 21:34:56.881 STEP: Dumping logs from the "md-rollout-p7ujbw" workload cluster - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 01/21/23 21:34:56.881 Jan 21 21:34:56.881: INFO: Dumping workload cluster md-rollout-kyz24f/md-rollout-p7ujbw logs Jan 21 21:34:56.929: INFO: Collecting logs for Linux node md-rollout-p7ujbw-control-plane-24qvt in cluster md-rollout-p7ujbw in namespace md-rollout-kyz24f Jan 21 21:35:12.040: INFO: Collecting boot logs for AzureMachine md-rollout-p7ujbw-control-plane-24qvt Jan 21 21:35:13.582: INFO: Collecting logs for Linux node md-rollout-p7ujbw-md-0-20urtc-9d4xm in cluster md-rollout-p7ujbw in namespace md-rollout-kyz24f Jan 21 21:35:27.223: INFO: Collecting boot logs for AzureMachine md-rollout-p7ujbw-md-0-20urtc-9d4xm Jan 21 21:35:27.832: INFO: Collecting logs for Windows node md-rollou-9wcwt in cluster md-rollout-p7ujbw in namespace md-rollout-kyz24f Jan 21 21:39:49.357: INFO: Attempting to copy file /c:/crashdumps.tar on node md-rollou-9wcwt to /logs/artifacts/clusters/md-rollout-p7ujbw/machines/md-rollout-p7ujbw-md-win-7bc6f966b4-689qn/crashdumps.tar Jan 21 21:39:50.676: INFO: Collecting boot logs for AzureMachine md-rollout-p7ujbw-md-win-9wcwt Jan 21 21:39:51.457: INFO: Collecting logs for Windows node md-rollou-gjvs7 in cluster md-rollout-p7ujbw in namespace md-rollout-kyz24f Jan 21 21:44:57.190: INFO: Attempting to copy file /c:/crashdumps.tar on node md-rollou-gjvs7 to 
/logs/artifacts/clusters/md-rollout-p7ujbw/machines/md-rollout-p7ujbw-md-win-7bc6f966b4-6mkln/crashdumps.tar Jan 21 21:45:07.687: INFO: Collecting boot logs for AzureMachine md-rollout-p7ujbw-md-win-gjvs7 Jan 21 21:45:08.499: INFO: Collecting logs for Windows node md-rollou-z5xcg in cluster md-rollout-p7ujbw in namespace md-rollout-kyz24f Jan 21 21:49:46.617: INFO: Attempting to copy file /c:/crashdumps.tar on node md-rollou-z5xcg to /logs/artifacts/clusters/md-rollout-p7ujbw/machines/md-rollout-p7ujbw-md-win-d868d747d-7tzcx/crashdumps.tar Jan 21 21:50:31.810: INFO: Collecting boot logs for AzureMachine md-rollout-p7ujbw-md-win-63mrcz-z5xcg Jan 21 21:50:33.203: INFO: Dumping workload cluster md-rollout-kyz24f/md-rollout-p7ujbw kube-system pod logs [FAILED] Failed to get controller-runtime client Unexpected error: <*url.Error | 0xc00129cf90>: { Op: "Get", URL: "https://md-rollout-p7ujbw-92d0d2fc.northeurope.cloudapp.azure.com:6443/api?timeout=32s", Err: <http.tlsHandshakeTimeoutError>{}, } Get "https://md-rollout-p7ujbw-92d0d2fc.northeurope.cloudapp.azure.com:6443/api?timeout=32s": net/http: TLS handshake timeout occurred In [AfterEach] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_proxy.go:193 @ 01/21/23 21:53:54.651 < Exit [AfterEach] Running the MachineDeployment rollout spec - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/md_rollout.go:103 @ 01/21/23 21:53:54.651 (18m57.77s) > Enter [AfterEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:97 @ 01/21/23 21:53:54.651 Jan 21 21:53:54.651: INFO: FAILED! 
Jan 21 21:53:54.651: INFO: Cleaning up after "Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields" spec STEP: Redacting sensitive information from logs - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:212 @ 01/21/23 21:53:54.651 INFO: "Should successfully upgrade Machines upon changes in relevant MachineDeployment fields" started at Sat, 21 Jan 2023 21:55:21 UTC on Ginkgo node 9 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml < Exit [AfterEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:97 @ 01/21/23 21:55:21.437 (1m26.786s)
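A side note on the "Temporary failure in name resolution" errors above: the Windows node names in the log (e.g. md-rollou-9wcwt) are 15-character truncations of the corresponding AzureMachine names (md-rollout-p7ujbw-md-win-9wcwt), consistent with the Windows computer-name length limit. A hedged sketch of that derivation, reconstructed from the observed names rather than from CAPZ's actual naming code:

```go
package main

import (
	"fmt"
	"strings"
)

// windowsNodeName sketches how a 15-character Windows computer-name limit
// could turn a machine name like "md-rollout-p7ujbw-md-win-9wcwt" into the
// node name "md-rollou-9wcwt" seen in the log: keep the trailing random
// suffix and truncate the prefix to fit. This is an assumption inferred
// from the log, not CAPZ's actual implementation.
func windowsNodeName(machineName string) string {
	const maxLen = 15
	if len(machineName) <= maxLen {
		return machineName
	}
	// The token after the last '-' is the per-machine suffix.
	i := strings.LastIndex(machineName, "-")
	suffix := machineName[i+1:]
	prefix := machineName[:maxLen-len(suffix)-1]
	return prefix + "-" + suffix
}

func main() {
	fmt.Println(windowsNodeName("md-rollout-p7ujbw-md-win-9wcwt")) // md-rollou-9wcwt
}
```

That mismatch explains why SSH dialing by node name failed DNS resolution while the Azure-side lookups used yet another name form.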
capz-e2e [It] Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e [It] Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e [It] Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e [It] Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [It] Conformance Tests conformance-tests
capz-e2e [It] Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node
capz-e2e [It] Workload cluster creation Creating a VMSS cluster [REQUIRED] with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a cluster that uses the external cloud provider and external azurediskcsi driver [OPTIONAL] with a 1 control plane nodes and 2 worker nodes
capz-e2e [It] Workload cluster creation Creating a cluster that uses the external cloud provider and machinepools [OPTIONAL] with 1 control plane node and 1 machinepool
capz-e2e [It] Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e [It] Workload cluster creation Creating a highly available cluster [REQUIRED] With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a ipv6 control-plane cluster [REQUIRED] With ipv6 worker node
capz-e2e [It] Workload cluster creation Creating a private cluster [OPTIONAL] Creates a public management cluster in a custom vnet
capz-e2e [It] Workload cluster creation Creating an AKS cluster [EXPERIMENTAL][Managed Kubernetes] with a single control plane node and 1 node
capz-e2e [It] Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node
capz-e2e [It] [K8s-Upgrade] Running the CSI migration tests [CSI Migration] Running CSI migration test CSI=external CCM=external AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with out-of-tree cloud provider
capz-e2e [It] [K8s-Upgrade] Running the CSI migration tests [CSI Migration] Running CSI migration test CSI=external CCM=internal AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with intree cloud provider
capz-e2e [It] [K8s-Upgrade] Running the CSI migration tests [CSI Migration] Running CSI migration test CSI=internal CCM=internal AzureDiskCSIMigration=false: upgrade to v1.23 should create volumes dynamically with intree cloud provider
... skipping 784 lines ... Jan 21 21:31:48.611: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-54vdv, container csi-attacher Jan 21 21:31:48.611: INFO: Collecting events for Pod kube-system/csi-azuredisk-node-4qprp Jan 21 21:31:48.611: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-54vdv, container csi-resizer Jan 21 21:31:48.612: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-4qprp, container azuredisk Jan 21 21:31:48.612: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-54vdv, container liveness-probe Jan 21 21:31:48.612: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-b768v, container liveness-probe Jan 21 21:31:48.666: INFO: Error starting logs stream for pod calico-system/calico-node-4ddpz, container calico-node: container "calico-node" in pod "calico-node-4ddpz" is waiting to start: PodInitializing Jan 21 21:31:48.667: INFO: Error starting logs stream for pod calico-system/csi-node-driver-q2wk5, container csi-node-driver-registrar: container "csi-node-driver-registrar" in pod "csi-node-driver-q2wk5" is waiting to start: ContainerCreating Jan 21 21:31:48.668: INFO: Error starting logs stream for pod calico-system/csi-node-driver-q2wk5, container calico-csi: container "calico-csi" in pod "csi-node-driver-q2wk5" is waiting to start: ContainerCreating Jan 21 21:31:48.718: INFO: Fetching kube-system pod logs took 1.230677654s Jan 21 21:31:48.718: INFO: Dumping workload cluster mhc-remediation-liyacx/mhc-remediation-kyct31 Azure activity log Jan 21 21:31:48.718: INFO: Creating log watcher for controller tigera-operator/tigera-operator-65d6bf4d4f-sll49, container tigera-operator Jan 21 21:31:48.718: INFO: Collecting events for Pod tigera-operator/tigera-operator-65d6bf4d4f-sll49 Jan 21 21:31:50.785: INFO: Fetching activity logs took 2.066843904s STEP: Dumping all the Cluster API resources in the "mhc-remediation-liyacx" namespace @ 01/21/23 21:31:50.785 ... skipping 35 lines ... configmap/cni-quick-start-rivhm1-calico-windows created configmap/csi-proxy-addon created configmap/containerd-logger-quick-start-rivhm1 created felixconfiguration.crd.projectcalico.org/default configured Failed to get logs for Machine quick-start-rivhm1-md-win-745f6bf4bf-fmk5d, Cluster quick-start-uul0ri/quick-start-rivhm1: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1] Failed to get logs for Machine quick-start-rivhm1-md-win-745f6bf4bf-xdssh, Cluster quick-start-uul0ri/quick-start-rivhm1: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1] << Captured StdOut/StdErr Output Timeline >> INFO: "" started at Sat, 21 Jan 2023 21:20:51 UTC on Ginkgo node 1 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml STEP: Creating a namespace for hosting the "quick-start" test spec @ 01/21/23 21:20:51.554 INFO: Creating namespace quick-start-uul0ri ... skipping 428 lines ... 
Jan 21 21:37:12.249: INFO: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-x44xcx-control-plane-8fj9d, container kube-scheduler Jan 21 21:37:12.249: INFO: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-x44xcx-control-plane-dspj8, container kube-scheduler Jan 21 21:37:12.249: INFO: Collecting events for Pod kube-system/etcd-mhc-remediation-x44xcx-control-plane-8fj9d Jan 21 21:37:12.249: INFO: Creating log watcher for controller kube-system/etcd-mhc-remediation-x44xcx-control-plane-dspj8, container etcd Jan 21 21:37:12.249: INFO: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-x44xcx-control-plane-lmcnb, container kube-scheduler Jan 21 21:37:12.249: INFO: Collecting events for Pod kube-system/kube-scheduler-mhc-remediation-x44xcx-control-plane-dspj8 Jan 21 21:37:12.447: INFO: Error starting logs stream for pod calico-system/calico-node-gk28s, container calico-node: container "calico-node" in pod "calico-node-gk28s" is waiting to start: PodInitializing Jan 21 21:37:12.594: INFO: Fetching kube-system pod logs took 1.512045228s Jan 21 21:37:12.595: INFO: Dumping workload cluster mhc-remediation-g1mn07/mhc-remediation-x44xcx Azure activity log Jan 21 21:37:12.595: INFO: Creating log watcher for controller tigera-operator/tigera-operator-65d6bf4d4f-k74p9, container tigera-operator Jan 21 21:37:12.595: INFO: Collecting events for Pod tigera-operator/tigera-operator-65d6bf4d4f-k74p9 Jan 21 21:37:16.407: INFO: Fetching activity logs took 3.812069997s STEP: Dumping all the Cluster API resources in the "mhc-remediation-g1mn07" namespace @ 01/21/23 21:37:16.407 ... skipping 14 lines ... 
------------------------------ • [1333.795 seconds] Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:108 Captured StdOut/StdErr Output >> 2023/01/21 21:20:51 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/self-hosted-sy1b4u-md-0 created cluster.cluster.x-k8s.io/self-hosted-sy1b4u created machinedeployment.cluster.x-k8s.io/self-hosted-sy1b4u-md-0 created kubeadmcontrolplane.controlplane.cluster.x-k8s.io/self-hosted-sy1b4u-control-plane created azurecluster.infrastructure.cluster.x-k8s.io/self-hosted-sy1b4u created azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created ... skipping 365 lines ... Jan 21 21:38:41.412: INFO: Collecting events for Pod calico-system/calico-node-jbdn9 Jan 21 21:38:41.412: INFO: Creating log watcher for controller calico-system/csi-node-driver-854bq, container csi-node-driver-registrar Jan 21 21:38:41.412: INFO: Creating log watcher for controller calico-system/csi-node-driver-8vnwx, container calico-csi Jan 21 21:38:41.413: INFO: Creating log watcher for controller calico-system/csi-node-driver-8vnwx, container csi-node-driver-registrar Jan 21 21:38:41.413: INFO: Creating log watcher for controller calico-system/calico-typha-5b6456dd7b-th9lx, container calico-typha Jan 21 21:38:41.413: INFO: Collecting events for Pod calico-system/csi-node-driver-8vnwx Jan 21 21:38:41.525: INFO: Error starting logs stream for pod calico-system/csi-node-driver-8vnwx, container calico-csi: pods "node-drain-1w5121-control-plane-98sww" not found Jan 21 21:38:41.525: INFO: Error starting logs stream for pod calico-system/calico-node-jbdn9, container calico-node: pods "node-drain-1w5121-control-plane-98sww" not found Jan 21 21:38:41.525: INFO: Error starting logs stream for pod calico-system/csi-node-driver-8vnwx, container csi-node-driver-registrar: pods "node-drain-1w5121-control-plane-98sww" not found Jan 21 21:38:41.609: INFO: Creating log watcher for controller kube-system/coredns-57575c5f89-2752d, container coredns Jan 21 21:38:41.610: INFO: Collecting events for Pod kube-system/coredns-57575c5f89-2752d Jan 21 21:38:41.610: INFO: Collecting events for Pod kube-system/coredns-57575c5f89-gqz6v Jan 21 21:38:41.610: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-gc67m, container csi-provisioner Jan 21 21:38:41.610: INFO: Creating log watcher for controller kube-system/coredns-57575c5f89-gqz6v, container coredns Jan 21 21:38:41.610: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-gc67m, container csi-resizer ... skipping 27 lines ... Jan 21 21:38:41.614: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-x2wbd, container azuredisk Jan 21 21:38:41.614: INFO: Collecting events for Pod kube-system/csi-azuredisk-controller-545d478dbf-gc67m Jan 21 21:38:41.614: INFO: Collecting events for Pod kube-system/etcd-node-drain-1w5121-control-plane-98sww Jan 21 21:38:41.614: INFO: Creating log watcher for controller kube-system/etcd-node-drain-1w5121-control-plane-98sww, container etcd Jan 21 21:38:41.614: INFO: Creating log watcher for controller kube-system/etcd-node-drain-1w5121-control-plane-vqwsc, container etcd Jan 21 21:38:41.614: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-trn2b, container node-driver-registrar Jan 21 21:38:41.760: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-x2wbd, container azuredisk: pods "node-drain-1w5121-control-plane-98sww" not found Jan 21 21:38:41.784: INFO: Error starting logs stream for pod kube-system/kube-controller-manager-node-drain-1w5121-control-plane-98sww, container kube-controller-manager: 
pods "node-drain-1w5121-control-plane-98sww" not found Jan 21 21:38:41.784: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-x2wbd, container liveness-probe: pods "node-drain-1w5121-control-plane-98sww" not found Jan 21 21:38:41.784: INFO: Error starting logs stream for pod kube-system/kube-proxy-mzqcd, container kube-proxy: pods "node-drain-1w5121-control-plane-98sww" not found Jan 21 21:38:41.784: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-x2wbd, container node-driver-registrar: pods "node-drain-1w5121-control-plane-98sww" not found Jan 21 21:38:41.784: INFO: Error starting logs stream for pod kube-system/kube-scheduler-node-drain-1w5121-control-plane-98sww, container kube-scheduler: pods "node-drain-1w5121-control-plane-98sww" not found Jan 21 21:38:41.784: INFO: Error starting logs stream for pod kube-system/kube-apiserver-node-drain-1w5121-control-plane-98sww, container kube-apiserver: pods "node-drain-1w5121-control-plane-98sww" not found Jan 21 21:38:41.785: INFO: Error starting logs stream for pod kube-system/etcd-node-drain-1w5121-control-plane-98sww, container etcd: pods "node-drain-1w5121-control-plane-98sww" not found Jan 21 21:38:41.787: INFO: Creating log watcher for controller node-drain-00idlo-unevictable-workload/unevictable-pod-ph5-6f8c44cbdd-8mr96, container web Jan 21 21:38:41.787: INFO: Creating log watcher for controller node-drain-00idlo-unevictable-workload/unevictable-pod-ph5-6f8c44cbdd-fj4kk, container web Jan 21 21:38:41.787: INFO: Collecting events for Pod node-drain-00idlo-unevictable-workload/unevictable-pod-ph5-6f8c44cbdd-dl2tb Jan 21 21:38:41.787: INFO: Collecting events for Pod node-drain-00idlo-unevictable-workload/unevictable-pod-ph5-6f8c44cbdd-8mr96 Jan 21 21:38:41.787: INFO: Collecting events for Pod node-drain-00idlo-unevictable-workload/unevictable-pod-r7l-8598949b8b-6pjgj Jan 21 21:38:41.787: INFO: Collecting events for Pod 
node-drain-00idlo-unevictable-workload/unevictable-pod-ph5-6f8c44cbdd-fj4kk ... skipping 8 lines ... Jan 21 21:38:41.789: INFO: Collecting events for Pod node-drain-00idlo-unevictable-workload/unevictable-pod-r7l-8598949b8b-x6w94 Jan 21 21:38:41.790: INFO: Creating log watcher for controller node-drain-00idlo-unevictable-workload/unevictable-pod-ph5-6f8c44cbdd-dl2tb, container web Jan 21 21:38:41.953: INFO: Fetching kube-system pod logs took 1.687029165s Jan 21 21:38:41.953: INFO: Dumping workload cluster node-drain-00idlo/node-drain-1w5121 Azure activity log Jan 21 21:38:41.953: INFO: Creating log watcher for controller tigera-operator/tigera-operator-65d6bf4d4f-p92j9, container tigera-operator Jan 21 21:38:41.954: INFO: Collecting events for Pod tigera-operator/tigera-operator-65d6bf4d4f-p92j9 Jan 21 21:38:41.956: INFO: Error starting logs stream for pod node-drain-00idlo-unevictable-workload/unevictable-pod-ph5-6f8c44cbdd-dl2tb, container web: pods "node-drain-1w5121-control-plane-98sww" not found Jan 21 21:38:45.204: INFO: Fetching activity logs took 3.250796871s [1mSTEP:[0m Dumping all the Cluster API resources in the "node-drain-00idlo" namespace [38;5;243m@ 01/21/23 21:38:45.204[0m [1mSTEP:[0m Deleting cluster node-drain-00idlo/node-drain-1w5121 [38;5;243m@ 01/21/23 21:38:45.508[0m [1mSTEP:[0m Deleting cluster node-drain-1w5121 [38;5;243m@ 01/21/23 21:38:45.527[0m INFO: Waiting for the Cluster node-drain-00idlo/node-drain-1w5121 to be deleted [1mSTEP:[0m Waiting for cluster node-drain-1w5121 to be deleted [38;5;243m@ 01/21/23 21:38:45.537[0m ... skipping 31 lines ... 
configmap/cni-md-scale-r58z8z-calico-windows created
configmap/csi-proxy-addon created
configmap/containerd-logger-md-scale-r58z8z created
felixconfiguration.crd.projectcalico.org/default configured
Failed to get logs for Machine md-scale-r58z8z-md-win-dcfb877df-6gcs6, Cluster md-scale-jhfle5/md-scale-r58z8z: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Failed to get logs for Machine md-scale-r58z8z-md-win-dcfb877df-lzkzm, Cluster md-scale-jhfle5/md-scale-r58z8z: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
<< Captured StdOut/StdErr Output
Timeline >>
INFO: "" started at Sat, 21 Jan 2023 21:20:51 UTC on Ginkgo node 2 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
STEP: Creating a namespace for hosting the "md-scale" test spec @ 01/21/23 21:20:51.731
INFO: Creating namespace md-scale-jhfle5
... skipping 383 lines ...
Jan 21 21:43:14.968: INFO: Creating log watcher for controller calico-system/calico-typha-696958d777-cxckm, container calico-typha
Jan 21 21:43:14.968: INFO: Creating log watcher for controller calico-system/calico-node-windows-lsbmh, container calico-node-startup
Jan 21 21:43:14.968: INFO: Collecting events for Pod calico-system/calico-typha-696958d777-cxckm
Jan 21 21:43:14.969: INFO: Creating log watcher for controller calico-system/csi-node-driver-cckct, container calico-csi
Jan 21 21:43:14.969: INFO: Collecting events for Pod calico-system/calico-node-k8wct
Jan 21 21:43:14.969: INFO: Creating log watcher for controller calico-system/calico-node-windows-lsbmh, container calico-node-felix
Jan 21 21:43:15.083: INFO: Error starting logs stream for pod calico-system/csi-node-driver-cckct, container csi-node-driver-registrar: pods "machine-pool-tpn23o-mp-0000002" not found
Jan 21 21:43:15.102: INFO: Error starting logs stream for pod calico-system/calico-node-windows-lsbmh, container calico-node-felix: pods "win-p-win000002" not found
Jan 21 21:43:15.168: INFO: Error starting logs stream for pod calico-system/csi-node-driver-cckct, container calico-csi: pods "machine-pool-tpn23o-mp-0000002" not found
Jan 21 21:43:15.215: INFO: Error starting logs stream for pod calico-system/calico-node-k8wct, container calico-node: pods "machine-pool-tpn23o-mp-0000002" not found
Jan 21 21:43:15.215: INFO: Error starting logs stream for pod calico-system/calico-node-windows-lsbmh, container calico-node-startup: pods "win-p-win000002" not found
Jan 21 21:43:15.223: INFO: Creating log watcher for controller kube-system/containerd-logger-kb929, container containerd-logger
Jan 21 21:43:15.225: INFO: Collecting events for Pod kube-system/containerd-logger-kb929
Jan 21 21:43:15.225: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-xq8rn, container liveness-probe
Jan 21 21:43:15.225: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-xq8rn, container node-driver-registrar
Jan 21 21:43:15.227: INFO: Creating log watcher for controller kube-system/coredns-57575c5f89-89qcn, container coredns
Jan 21 21:43:15.228: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-xq8rn, container azuredisk
... skipping 29 lines ...
Jan 21 21:43:15.238: INFO: Collecting events for Pod kube-system/csi-azuredisk-node-win-vrhq8
Jan 21 21:43:15.238: INFO: Creating log watcher for controller kube-system/kube-controller-manager-machine-pool-tpn23o-control-plane-blqdl, container kube-controller-manager
Jan 21 21:43:15.238: INFO: Collecting events for Pod kube-system/kube-apiserver-machine-pool-tpn23o-control-plane-blqdl
Jan 21 21:43:15.238: INFO: Collecting events for Pod kube-system/kube-controller-manager-machine-pool-tpn23o-control-plane-blqdl
Jan 21 21:43:15.238: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-6xfnm, container node-driver-registrar
Jan 21 21:43:15.238: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-vrhq8, container liveness-probe
Jan 21 21:43:15.407: INFO: Error starting logs stream for pod kube-system/containerd-logger-kb929, container containerd-logger: pods "win-p-win000002" not found
Jan 21 21:43:15.408: INFO: Fetching kube-system pod logs took 1.599592666s
Jan 21 21:43:15.408: INFO: Dumping workload cluster machine-pool-hp73in/machine-pool-tpn23o Azure activity log
Jan 21 21:43:15.408: INFO: Creating log watcher for controller tigera-operator/tigera-operator-65d6bf4d4f-r78bk, container tigera-operator
Jan 21 21:43:15.408: INFO: Collecting events for Pod tigera-operator/tigera-operator-65d6bf4d4f-r78bk
Jan 21 21:43:15.410: INFO: Error starting logs stream for pod kube-system/kube-proxy-windows-7cjgp, container kube-proxy: pods "win-p-win000002" not found
Jan 21 21:43:15.410: INFO: Error starting logs stream for pod kube-system/csi-proxy-6pmg4, container csi-proxy: pods "win-p-win000002" not found
Jan 21 21:43:15.411: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-6xfnm, container liveness-probe: pods "machine-pool-tpn23o-mp-0000002" not found
Jan 21 21:43:15.411: INFO: Error starting logs stream for pod kube-system/kube-proxy-ctx2g, container kube-proxy: pods "machine-pool-tpn23o-mp-0000002" not found
Jan 21 21:43:15.411: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-vrhq8, container azuredisk: pods "win-p-win000002" not found
Jan 21 21:43:15.411: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-6xfnm, container node-driver-registrar: pods "machine-pool-tpn23o-mp-0000002" not found
Jan 21 21:43:15.412: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-vrhq8, container liveness-probe: pods "win-p-win000002" not found
Jan 21 21:43:15.414: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-vrhq8, container node-driver-registrar: pods "win-p-win000002" not found
Jan 21 21:43:15.414: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-6xfnm, container azuredisk: pods "machine-pool-tpn23o-mp-0000002" not found
Jan 21 21:43:18.986: INFO: Fetching activity logs took 3.57833243s
STEP: Dumping all the Cluster API resources in the "machine-pool-hp73in" namespace @ 01/21/23 21:43:18.986
STEP: Deleting cluster machine-pool-hp73in/machine-pool-tpn23o @ 01/21/23 21:43:19.6
STEP: Deleting cluster machine-pool-tpn23o @ 01/21/23 21:43:19.632
INFO: Waiting for the Cluster machine-pool-hp73in/machine-pool-tpn23o to be deleted
STEP: Waiting for cluster machine-pool-tpn23o to be deleted @ 01/21/23 21:43:19.652
... skipping 5 lines ...
<< Timeline
------------------------------
[SynchronizedAfterSuite] PASSED [0.000 seconds]
[SynchronizedAfterSuite]
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/e2e_suite_test.go:116
------------------------------
• [FAILED] [2069.861 seconds]
Running the Cluster API E2E tests Running the MachineDeployment rollout spec [AfterEach] Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
[AfterEach] /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/md_rollout.go:103
[It] /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/md_rollout.go:71
Captured StdOut/StdErr Output >>
cluster.cluster.x-k8s.io/md-rollout-p7ujbw created
... skipping 14 lines ...
configmap/cni-md-rollout-p7ujbw-calico-windows created
configmap/csi-proxy-addon created
configmap/containerd-logger-md-rollout-p7ujbw created
felixconfiguration.crd.projectcalico.org/default configured
Failed to get logs for Machine md-rollout-p7ujbw-md-win-7bc6f966b4-689qn, Cluster md-rollout-kyz24f/md-rollout-p7ujbw: [dialing from control plane to target node at md-rollou-9wcwt: ssh: rejected: connect failed (Temporary failure in name resolution), Unable to collect VM Boot Diagnostic logs: failed to get boot diagnostics data: compute.VirtualMachinesClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ResourceNotFound" Message="The Resource 'Microsoft.Compute/virtualMachines/md-rollou-9wcwt' under resource group 'capz-e2e-zu0ndi' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"]
Failed to get logs for Machine md-rollout-p7ujbw-md-win-7bc6f966b4-6mkln, Cluster md-rollout-kyz24f/md-rollout-p7ujbw: [dialing from control plane to target node at md-rollou-gjvs7: ssh: rejected: connect failed (Temporary failure in name resolution), Unable to collect VM Boot Diagnostic logs: failed to get boot diagnostics data: compute.VirtualMachinesClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ResourceNotFound" Message="The Resource 'Microsoft.Compute/virtualMachines/md-rollou-gjvs7' under resource group 'capz-e2e-zu0ndi' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"]
Failed to get logs for Machine md-rollout-p7ujbw-md-win-d868d747d-7tzcx, Cluster md-rollout-kyz24f/md-rollout-p7ujbw: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
<< Captured StdOut/StdErr Output
Timeline >>
INFO: "" started at Sat, 21 Jan 2023 21:20:51 UTC on Ginkgo node 9 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
STEP: Creating a namespace for hosting the "md-rollout" test spec @ 01/21/23 21:20:51.731
INFO: Creating namespace md-rollout-kyz24f
... skipping 127 lines ...
Jan 21 21:45:08.499: INFO: Collecting logs for Windows node md-rollou-z5xcg in cluster md-rollout-p7ujbw in namespace md-rollout-kyz24f
Jan 21 21:49:46.617: INFO: Attempting to copy file /c:/crashdumps.tar on node md-rollou-z5xcg to /logs/artifacts/clusters/md-rollout-p7ujbw/machines/md-rollout-p7ujbw-md-win-d868d747d-7tzcx/crashdumps.tar
Jan 21 21:50:31.810: INFO: Collecting boot logs for AzureMachine md-rollout-p7ujbw-md-win-63mrcz-z5xcg
Jan 21 21:50:33.203: INFO: Dumping workload cluster md-rollout-kyz24f/md-rollout-p7ujbw kube-system pod logs
[FAILED] in [AfterEach] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_proxy.go:193 @ 01/21/23 21:53:54.651
Jan 21 21:53:54.651: INFO: FAILED!
Jan 21 21:53:54.651: INFO: Cleaning up after "Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields" spec
STEP: Redacting sensitive information from logs @ 01/21/23 21:53:54.651
INFO: "Should successfully upgrade Machines upon changes in relevant MachineDeployment fields" started at Sat, 21 Jan 2023 21:55:21 UTC on Ginkgo node 9 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
<< Timeline
[FAILED] Failed to get controller-runtime client
Unexpected error:
    <*url.Error | 0xc00129cf90>: {
        Op: "Get",
        URL: "https://md-rollout-p7ujbw-92d0d2fc.northeurope.cloudapp.azure.com:6443/api?timeout=32s",
        Err: <http.tlsHandshakeTimeoutError>{},
    }
    Get "https://md-rollout-p7ujbw-92d0d2fc.northeurope.cloudapp.azure.com:6443/api?timeout=32s": net/http: TLS handshake timeout occurred
... skipping 26 lines ...
[ReportAfterSuite] PASSED [0.012 seconds]
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
------------------------------

Summarizing 1 Failure:
[FAIL] Running the Cluster API E2E tests Running the MachineDeployment rollout spec [AfterEach] Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_proxy.go:193

Ran 8 of 26 Specs in 2229.032 seconds
FAIL! -- 7 Passed | 1 Failed | 0 Pending | 18 Skipped

You're using deprecated Ginkgo functionality:
=============================================
CurrentGinkgoTestDescription() is deprecated in Ginkgo V2. Use CurrentSpecReport() instead.
Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-currentginkgotestdescription
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:423
... skipping 85 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:281
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:423

To silence deprecations that can be silenced set the following environment variable:
ACK_GINKGO_DEPRECATIONS=2.6.0

--- FAIL: TestE2E (2227.41s)
FAIL

You're using deprecated Ginkgo functionality:
=============================================
CurrentGinkgoTestDescription() is deprecated in Ginkgo V2. Use CurrentSpecReport() instead.
Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-currentginkgotestdescription
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:423
... skipping 6 lines ...

PASS
Ginkgo ran 1 suite in 39m25.550553681s
Test Suite Failed
make[1]: *** [Makefile:655: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:664: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...