Result   | FAILURE
Tests    | 2 failed / 26 succeeded
Started  |
Elapsed  | 1h9m
Revision | release-1.7
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sRunning\sthe\sCluster\sAPI\sE2E\stests\sRunning\sthe\sself\-hosted\sspec\sShould\spivot\sthe\sbootstrap\scluster\sto\sa\sself\-hosted\scluster$'
[FAILED] Timed out after 1500.001s.
Timed out waiting for 1 nodes to be created for MachineDeployment self-hosted/self-hosted-4n8pps-md-0
Expected <int>: 0 to equal <int>: 1
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/machinedeployment_helpers.go:131 @ 01/29/23 21:59:03.988
(from junit.e2e_suite.1.xml)
2023/01/29 21:24:32 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/self-hosted-4n8pps-md-0 created
cluster.cluster.x-k8s.io/self-hosted-4n8pps created
machinedeployment.cluster.x-k8s.io/self-hosted-4n8pps-md-0 created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/self-hosted-4n8pps-control-plane created
azurecluster.infrastructure.cluster.x-k8s.io/self-hosted-4n8pps created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/self-hosted-4n8pps-control-plane created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/self-hosted-4n8pps-md-0 created
felixconfiguration.crd.projectcalico.org/default configured
Failed to get logs for Machine self-hosted-4n8pps-md-0-77d9bc5c5c-cc4w5, Cluster self-hosted/self-hosted-4n8pps: [dialing from control plane to target node at self-hosted-4n8pps-md-0-vngmt: ssh: rejected: connect failed (Temporary failure in name resolution), Unable to collect VM Boot Diagnostic logs: AzureMachine provider ID is nil]
> Enter [BeforeEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:52 @ 01/29/23 21:24:32.236
INFO: "" started at Sun, 29 Jan 2023 21:24:32 UTC on Ginkgo node 10 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
< Exit [BeforeEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:52 @ 01/29/23 21:24:32.366 (129ms)
> Enter [BeforeEach] Running the self-hosted spec - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:67 @ 01/29/23 21:24:32.366
STEP: Creating namespace "self-hosted" for hosting the cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/29/23 21:24:32.366
Jan 29 21:24:32.366: INFO: starting to create namespace for hosting the "self-hosted" test spec
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
< Exit [BeforeEach] Running the self-hosted spec - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:67 @ 01/29/23 21:24:32.533 (168ms)
> Enter [It] Should pivot the bootstrap cluster to a self-hosted cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:108 @ 01/29/23 21:24:32.533
STEP: Creating a workload cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:109 @ 01/29/23 21:24:32.533
INFO: Creating the workload cluster with name "self-hosted-4n8pps" using the "management" template (Kubernetes v1.24.10, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-4n8pps --infrastructure (default) --kubernetes-version v1.24.10 --control-plane-machine-count 1 --worker-machine-count 1 --flavor management
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_helpers.go:134 @ 01/29/23 21:24:36.345
INFO: Waiting for control plane to be initialized
STEP: Installing Calico CNI via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:51 @ 01/29/23 21:29:36.572
STEP: Configuring calico CNI helm chart for IPv4 configuration - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:131 @ 01/29/23 21:29:36.572
Jan 29 21:31:36.966: INFO: getting history for release projectcalico
Jan 29 21:31:37.075: INFO: Release projectcalico does not exist, installing it
Jan 29 21:31:38.379: INFO: creating 1 resource(s)
Jan 29 21:31:38.524: INFO: creating 1 resource(s)
Jan 29 21:31:38.651: INFO: creating 1 resource(s)
Jan 29 21:31:38.771: INFO: creating 1 resource(s)
Jan 29 21:31:38.920: INFO: creating 1 resource(s)
Jan 29 21:31:39.048: INFO: creating 1 resource(s)
Jan 29 21:31:39.365: INFO: creating 1 resource(s)
Jan 29 21:31:39.566: INFO: creating 1 resource(s)
Jan 29 21:31:39.688: INFO: creating 1 resource(s)
Jan 29 21:31:39.811: INFO: creating 1 resource(s)
Jan 29 21:31:39.938: INFO: creating 1 resource(s)
Jan 29 21:31:40.058: INFO: creating 1 resource(s)
Jan 29 21:31:40.177: INFO: creating 1 resource(s)
Jan 29 21:31:40.302: INFO: creating 1 resource(s)
Jan 29 21:31:40.421: INFO: creating 1 resource(s)
Jan 29 21:31:40.564: INFO: creating 1 resource(s)
Jan 29 21:31:40.743: INFO: creating 1 resource(s)
Jan 29 21:31:40.871: INFO: creating 1 resource(s)
Jan 29 21:31:41.061: INFO: creating 1 resource(s)
Jan 29 21:31:41.261: INFO: creating 1 resource(s)
Jan 29 21:31:41.950: INFO: creating 1 resource(s)
Jan 29 21:31:42.080: INFO: Clearing discovery cache
Jan 29 21:31:42.080: INFO: beginning wait for 21 resources with timeout of 1m0s
Jan 29 21:31:47.489: INFO: creating 1 resource(s)
Jan 29 21:31:48.373: INFO: creating 6 resource(s)
Jan 29 21:31:49.676: INFO: Install complete
STEP: Waiting for Ready tigera-operator deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:60 @ 01/29/23 21:31:50.469
STEP: waiting for deployment tigera-operator/tigera-operator to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/29/23 21:31:50.91
Jan 29 21:31:50.910: INFO: starting to wait for deployment to become available
Jan 29 21:32:01.127: INFO: Deployment tigera-operator/tigera-operator is now available, took 10.217606448s
STEP: Waiting for Ready calico-system deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:74 @ 01/29/23 21:32:02.413
STEP: waiting for deployment calico-system/calico-kube-controllers to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/29/23 21:32:02.957
Jan 29 21:32:02.957: INFO: starting to wait for deployment to become available
Jan 29 21:32:53.620: INFO: Deployment calico-system/calico-kube-controllers is now available, took 50.662483618s
STEP: waiting for deployment calico-system/calico-typha to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/29/23 21:32:54.168
Jan 29 21:32:54.168: INFO: starting to wait for deployment to become available
Jan 29 21:32:54.277: INFO: Deployment calico-system/calico-typha is now available, took 109.224271ms
STEP: Waiting for Ready calico-apiserver deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:79 @ 01/29/23 21:32:54.277
STEP: waiting for deployment calico-apiserver/calico-apiserver to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/29/23 21:33:04.933
Jan 29 21:33:04.933: INFO: starting to wait for deployment to become available
Jan 29 21:33:25.261: INFO: Deployment calico-apiserver/calico-apiserver is now available, took 20.328599971s
STEP: Waiting for Ready calico-node daemonset pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:84 @ 01/29/23 21:33:25.261
STEP: waiting for daemonset calico-system/calico-node to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/29/23 21:33:25.811
Jan 29 21:33:25.811: INFO: waiting for daemonset calico-system/calico-node to be complete
Jan 29 21:33:25.920: INFO: 1 daemonset calico-system/calico-node pods are running, took 109.231249ms
INFO: Waiting for the first control plane machine managed by self-hosted/self-hosted-4n8pps-control-plane to be provisioned
STEP: Waiting for one control plane node to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 @ 01/29/23 21:33:25.942
STEP: Installing azure-disk CSI driver components via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:71 @ 01/29/23 21:33:25.948
Jan 29 21:33:26.076: INFO: getting history for release azuredisk-csi-driver-oot
Jan 29 21:33:26.187: INFO: Release azuredisk-csi-driver-oot does not exist, installing it
Jan 29 21:33:30.384: INFO: creating 1 resource(s)
Jan 29 21:33:30.845: INFO: creating 18 resource(s)
Jan 29 21:33:31.719: INFO: Install complete
STEP: Waiting for Ready csi-azuredisk-controller deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:81 @ 01/29/23 21:33:31.737
STEP: waiting for deployment kube-system/csi-azuredisk-controller to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/29/23 21:33:32.181
Jan 29 21:33:32.181: INFO: starting to wait for deployment to become available
Jan 29 21:34:02.624: INFO: Deployment kube-system/csi-azuredisk-controller is now available, took 30.443436446s
STEP: Waiting for Running azure-disk-csi node pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:86 @ 01/29/23 21:34:02.624
STEP: waiting for daemonset kube-system/csi-azuredisk-node to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/29/23 21:34:03.17
Jan 29 21:34:03.170: INFO: waiting for daemonset kube-system/csi-azuredisk-node to be complete
Jan 29 21:34:03.280: INFO: 1 daemonset kube-system/csi-azuredisk-node pods are running, took 109.785679ms
STEP: waiting for daemonset kube-system/csi-azuredisk-node-win to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/29/23 21:34:03.828
Jan 29 21:34:03.828: INFO: waiting for daemonset kube-system/csi-azuredisk-node-win to be complete
Jan 29 21:34:03.938: INFO: 0 daemonset kube-system/csi-azuredisk-node-win pods are running, took 109.668397ms
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane self-hosted/self-hosted-4n8pps-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:165 @ 01/29/23 21:34:03.952
STEP: Checking all the control plane machines are in the expected failure domains - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:196 @ 01/29/23 21:34:03.958
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/machinedeployment_helpers.go:102 @ 01/29/23 21:34:03.986
[FAILED] Timed out after 1500.001s.
Timed out waiting for 1 nodes to be created for MachineDeployment self-hosted/self-hosted-4n8pps-md-0
Expected <int>: 0 to equal <int>: 1
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/machinedeployment_helpers.go:131 @ 01/29/23 21:59:03.988
< Exit [It] Should pivot the bootstrap cluster to a self-hosted cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:108 @ 01/29/23 21:59:03.988 (34m31.454s)
> Enter [AfterEach] Running the self-hosted spec - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:190 @ 01/29/23 21:59:03.988
STEP: Dumping logs from the "self-hosted-4n8pps" workload cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/29/23 21:59:03.988
Jan 29 21:59:03.988: INFO: Dumping workload cluster self-hosted/self-hosted-4n8pps logs
Jan 29 21:59:04.033: INFO: Collecting logs for Linux node self-hosted-4n8pps-control-plane-r2sh8 in cluster self-hosted-4n8pps in namespace self-hosted
Jan 29 21:59:18.688: INFO: Collecting boot logs for AzureMachine self-hosted-4n8pps-control-plane-r2sh8
Jan 29 21:59:20.440: INFO: Collecting logs for Linux node self-hosted-4n8pps-md-0-vngmt in cluster self-hosted-4n8pps in namespace self-hosted
Jan 29 22:00:26.508: INFO: Collecting boot logs for AzureMachine self-hosted-4n8pps-md-0-vngmt
Jan 29 22:00:26.524: INFO: Dumping workload cluster self-hosted/self-hosted-4n8pps kube-system pod logs
Jan 29 22:00:27.775: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-f4996db68-862rj, container calico-apiserver
Jan 29 22:00:27.775: INFO: Describing Pod calico-apiserver/calico-apiserver-f4996db68-862rj
Jan 29 22:00:27.998: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-f4996db68-9llpg, container calico-apiserver
Jan 29 22:00:27.998: INFO: Describing Pod calico-apiserver/calico-apiserver-f4996db68-9llpg
Jan 29 22:00:28.222: INFO: Creating log watcher for controller calico-system/calico-kube-controllers-594d54f99-4qwfq, container calico-kube-controllers
Jan 29 22:00:28.223: INFO: Describing Pod calico-system/calico-kube-controllers-594d54f99-4qwfq
Jan 29 22:00:28.459: INFO: Creating log watcher for controller calico-system/calico-node-kvvwg, container calico-node
Jan 29 22:00:28.459: INFO: Describing Pod calico-system/calico-node-kvvwg
Jan 29 22:00:28.725: INFO: Creating log watcher for controller calico-system/calico-typha-758cddcc6f-fmv8r, container calico-typha
Jan 29 22:00:28.725: INFO: Describing Pod calico-system/calico-typha-758cddcc6f-fmv8r
Jan 29 22:00:28.945: INFO: Creating log watcher for controller calico-system/csi-node-driver-m298w, container calico-csi
Jan 29 22:00:28.945: INFO: Creating log watcher for controller calico-system/csi-node-driver-m298w, container csi-node-driver-registrar
Jan 29 22:00:28.945: INFO: Describing Pod calico-system/csi-node-driver-m298w
Jan 29 22:00:29.164: INFO: Creating log watcher for controller kube-system/coredns-57575c5f89-djwsc, container coredns
Jan 29 22:00:29.164: INFO: Describing Pod kube-system/coredns-57575c5f89-djwsc
Jan 29 22:00:29.382: INFO: Creating log watcher for controller kube-system/coredns-57575c5f89-sjrdw, container coredns
Jan 29 22:00:29.383: INFO: Describing Pod kube-system/coredns-57575c5f89-sjrdw
Jan 29 22:00:29.607: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-wj48c, container csi-snapshotter
Jan 29 22:00:29.607: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-wj48c, container liveness-probe
Jan 29 22:00:29.607: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-wj48c, container csi-provisioner
Jan 29 22:00:29.607: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-wj48c, container azuredisk
Jan 29 22:00:29.607: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-wj48c, container csi-resizer
Jan 29 22:00:29.607: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-wj48c, container csi-attacher
Jan 29 22:00:29.607: INFO: Describing Pod kube-system/csi-azuredisk-controller-545d478dbf-wj48c
Jan 29 22:00:29.828: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-p6q2d, container liveness-probe
Jan 29 22:00:29.828: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-p6q2d, container node-driver-registrar
Jan 29 22:00:29.828: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-p6q2d, container azuredisk
Jan 29 22:00:29.828: INFO: Describing Pod kube-system/csi-azuredisk-node-p6q2d
Jan 29 22:00:30.062: INFO: Creating log watcher for controller kube-system/etcd-self-hosted-4n8pps-control-plane-r2sh8, container etcd
Jan 29 22:00:30.062: INFO: Describing Pod kube-system/etcd-self-hosted-4n8pps-control-plane-r2sh8
Jan 29 22:00:30.462: INFO: Describing Pod kube-system/kube-apiserver-self-hosted-4n8pps-control-plane-r2sh8
Jan 29 22:00:30.462: INFO: Creating log watcher for controller kube-system/kube-apiserver-self-hosted-4n8pps-control-plane-r2sh8, container kube-apiserver
Jan 29 22:00:30.862: INFO: Describing Pod kube-system/kube-controller-manager-self-hosted-4n8pps-control-plane-r2sh8
Jan 29 22:00:30.862: INFO: Creating log watcher for controller kube-system/kube-controller-manager-self-hosted-4n8pps-control-plane-r2sh8, container kube-controller-manager
Jan 29 22:00:31.263: INFO: Describing Pod kube-system/kube-proxy-w9dnn
Jan 29 22:00:31.263: INFO: Creating log watcher for controller kube-system/kube-proxy-w9dnn, container kube-proxy
Jan 29 22:00:31.662: INFO: Describing Pod kube-system/kube-scheduler-self-hosted-4n8pps-control-plane-r2sh8
Jan 29 22:00:31.662: INFO: Creating log watcher for controller kube-system/kube-scheduler-self-hosted-4n8pps-control-plane-r2sh8, container kube-scheduler
Jan 29 22:00:32.063: INFO: Fetching kube-system pod logs took 5.538274008s
Jan 29 22:00:32.063: INFO: Dumping workload cluster self-hosted/self-hosted-4n8pps Azure activity log
Jan 29 22:00:32.063: INFO: Creating log watcher for controller tigera-operator/tigera-operator-65d6bf4d4f-ccdr4, container tigera-operator
Jan 29 22:00:32.063: INFO: Describing Pod tigera-operator/tigera-operator-65d6bf4d4f-ccdr4
Jan 29 22:00:33.558: INFO: Fetching activity logs took 1.495128025s
Jan 29 22:00:33.558: INFO: Dumping all the Cluster API resources in the "self-hosted" namespace
Jan 29 22:00:33.922: INFO: Deleting all clusters in the self-hosted namespace
STEP: Deleting cluster self-hosted-4n8pps - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/29/23 22:00:33.94
INFO: Waiting for the Cluster self-hosted/self-hosted-4n8pps to be deleted
STEP: Waiting for cluster self-hosted-4n8pps to be deleted - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/29/23 22:00:33.953
Jan 29 22:05:04.100: INFO: Deleting namespace used for hosting the "self-hosted" test spec
INFO: Deleting namespace self-hosted
Jan 29 22:05:04.118: INFO: Checking if any resources are left over in Azure for spec "self-hosted"
STEP: Redacting sensitive information from logs - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:212 @ 01/29/23 22:05:04.978
< Exit [AfterEach] Running the self-hosted spec - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:190 @ 01/29/23 22:06:14.477 (7m10.49s)
> Enter [AfterEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:97 @ 01/29/23 22:06:14.477
Jan 29 22:06:14.478: INFO: FAILED!
Jan 29 22:06:14.478: INFO: Cleaning up after "Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster" spec
STEP: Redacting sensitive information from logs - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:212 @ 01/29/23 22:06:14.478
INFO: "Should pivot the bootstrap cluster to a self-hosted cluster" started at Sun, 29 Jan 2023 22:07:34 UTC on Ginkgo node 10 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
< Exit [AfterEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:97 @ 01/29/23 22:07:34.97 (1m20.493s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sRunning\sthe\sCluster\sAPI\sE2E\stests\sShould\ssuccessfully\sremediate\sunhealthy\smachines\swith\sMachineHealthCheck\sShould\ssuccessfully\strigger\sKCP\sremediation$'
[FAILED] Timed out after 1200.001s.
Timed out waiting for 3 control plane machines to exist
Expected <int>: 2 to equal <int>: 3
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:117 @ 01/29/23 21:53:55.992
(from junit.e2e_suite.1.xml)
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/mhc-remediation-s54xzy-md-0 created
cluster.cluster.x-k8s.io/mhc-remediation-s54xzy created
machinedeployment.cluster.x-k8s.io/mhc-remediation-s54xzy-md-0 created
machinehealthcheck.cluster.x-k8s.io/mhc-remediation-s54xzy-mhc-0 created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/mhc-remediation-s54xzy-control-plane created
azurecluster.infrastructure.cluster.x-k8s.io/mhc-remediation-s54xzy created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/mhc-remediation-s54xzy-control-plane created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/mhc-remediation-s54xzy-md-0 created
felixconfiguration.crd.projectcalico.org/default configured
> Enter [BeforeEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:52 @ 01/29/23 21:24:32.239
INFO: "" started at Sun, 29 Jan 2023 21:24:32 UTC on Ginkgo node 7 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
< Exit [BeforeEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:52 @ 01/29/23 21:24:32.391 (152ms)
> Enter [BeforeEach] Should successfully remediate unhealthy machines with MachineHealthCheck - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/mhc_remediations.go:69 @ 01/29/23 21:24:32.391
STEP: Creating a namespace for hosting the "mhc-remediation" test spec - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 01/29/23 21:24:32.391
INFO: Creating namespace mhc-remediation-e7k8nz
INFO: Creating event watcher for namespace "mhc-remediation-e7k8nz"
< Exit [BeforeEach] Should successfully remediate unhealthy machines with MachineHealthCheck - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/mhc_remediations.go:69 @ 01/29/23 21:24:32.515 (124ms)
> Enter [It] Should successfully trigger KCP remediation - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/mhc_remediations.go:116 @ 01/29/23 21:24:32.517
STEP: Creating a workload cluster - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/mhc_remediations.go:117 @ 01/29/23 21:24:32.517
INFO: Creating the workload cluster with name "mhc-remediation-s54xzy" using the "kcp-remediation" template (Kubernetes v1.24.10, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster mhc-remediation-s54xzy --infrastructure (default) --kubernetes-version v1.24.10 --control-plane-machine-count 3 --worker-machine-count 1 --flavor kcp-remediation
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_helpers.go:134 @ 01/29/23 21:24:36.489
INFO: Waiting for control plane to be initialized
STEP: Installing Calico CNI via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:51 @ 01/29/23 21:29:36.779
STEP: Configuring calico CNI helm chart for IPv4 configuration - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:131 @ 01/29/23 21:29:36.779
Jan 29 21:31:37.177: INFO: getting history for release projectcalico
Jan 29 21:31:37.285: INFO: Release projectcalico does not exist, installing it
Jan 29 21:31:38.550: INFO: creating 1 resource(s)
Jan 29 21:31:38.684: INFO: creating 1 resource(s)
Jan 29 21:31:38.812: INFO: creating 1 resource(s)
Jan 29 21:31:38.931: INFO: creating 1 resource(s)
Jan 29 21:31:39.072: INFO: creating 1 resource(s)
Jan 29 21:31:39.197: INFO: creating 1 resource(s)
Jan 29 21:31:39.515: INFO: creating 1 resource(s)
Jan 29 21:31:39.701: INFO: creating 1 resource(s)
Jan 29 21:31:39.821: INFO: creating 1 resource(s)
Jan 29 21:31:39.945: INFO: creating 1 resource(s)
Jan 29 21:31:40.070: INFO: creating 1 resource(s)
Jan 29 21:31:40.190: INFO: creating 1 resource(s)
Jan 29 21:31:40.331: INFO: creating 1 resource(s)
Jan 29 21:31:40.455: INFO: creating 1 resource(s)
Jan 29 21:31:40.584: INFO: creating 1 resource(s)
Jan 29 21:31:40.721: INFO: creating 1 resource(s)
Jan 29 21:31:40.898: INFO: creating 1 resource(s)
Jan 29 21:31:41.022: INFO: creating 1 resource(s)
Jan 29 21:31:41.211: INFO: creating 1 resource(s)
Jan 29 21:31:41.401: INFO: creating 1 resource(s)
Jan 29 21:31:42.040: INFO: creating 1 resource(s)
Jan 29 21:31:42.180: INFO: Clearing discovery cache
Jan 29 21:31:42.180: INFO: beginning wait for 21 resources with timeout of 1m0s
Jan 29 21:31:47.452: INFO: creating 1 resource(s)
Jan 29 21:31:48.291: INFO: creating 6 resource(s)
Jan 29 21:31:49.569: INFO: Install complete
STEP: Waiting for Ready tigera-operator deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:60 @ 01/29/23 21:31:50.353
STEP: waiting for deployment tigera-operator/tigera-operator to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/29/23 21:31:50.803
Jan 29 21:31:50.803: INFO: starting to wait for deployment to become available
Jan 29 21:32:01.020: INFO: Deployment tigera-operator/tigera-operator is now available, took 10.217229632s
STEP: Waiting for Ready calico-system deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:74 @ 01/29/23 21:32:02.269
STEP: waiting for deployment calico-system/calico-kube-controllers to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/29/23 21:32:02.807
Jan 29 21:32:02.807: INFO: starting to wait for deployment to become available
Jan 29 21:32:53.456: INFO: Deployment calico-system/calico-kube-controllers is now available, took 50.648919215s
STEP: waiting for deployment calico-system/calico-typha to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/29/23 21:32:54.424
Jan 29 21:32:54.424: INFO: starting to wait for deployment to become available
Jan 29 21:32:54.531: INFO: Deployment calico-system/calico-typha is now available, took 107.891309ms
STEP: Waiting for Ready calico-apiserver deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:79 @ 01/29/23 21:32:54.531
STEP: waiting for deployment calico-apiserver/calico-apiserver to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/29/23 21:32:55.3
Jan 29 21:32:55.300: INFO: starting to wait for deployment to become available
Jan 29 21:33:16.214: INFO: Deployment calico-apiserver/calico-apiserver is now available, took 20.91357842s
STEP: Waiting for Ready calico-node daemonset pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:84 @ 01/29/23 21:33:16.214
STEP: waiting for daemonset calico-system/calico-node to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/29/23 21:33:16.764
Jan 29 21:33:16.764: INFO: waiting for daemonset calico-system/calico-node to be complete
Jan 29 21:33:16.873: INFO: 1 daemonset calico-system/calico-node pods are running, took 108.846437ms
INFO: Waiting for the first control plane machine managed by mhc-remediation-e7k8nz/mhc-remediation-s54xzy-control-plane to be provisioned
STEP: Waiting for one control plane node to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 @ 01/29/23 21:33:16.895
STEP: Installing azure-disk CSI driver components via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:71 @ 01/29/23 21:33:16.902
Jan 29 21:33:17.030: INFO: getting history for release azuredisk-csi-driver-oot
Jan 29 21:33:17.139: INFO: Release azuredisk-csi-driver-oot does not exist, installing it
Jan 29 21:33:21.377: INFO: creating 1 resource(s)
Jan 29 21:33:21.736: INFO: creating 18 resource(s)
Jan 29 21:33:22.572: INFO: Install complete
STEP: Waiting for Ready csi-azuredisk-controller deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:81 @ 01/29/23 21:33:22.589
STEP: waiting for deployment kube-system/csi-azuredisk-controller to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/29/23 21:33:23.03
Jan 29 21:33:23.030: INFO: starting to wait for deployment to become available
Jan 29 21:33:54.673: INFO: Deployment kube-system/csi-azuredisk-controller is now available, took 31.643697162s
STEP: Waiting for Running azure-disk-csi node pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:86 @ 01/29/23 21:33:54.673
STEP: waiting for daemonset kube-system/csi-azuredisk-node to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/29/23 21:33:55.216
Jan 29 21:33:55.216: INFO: waiting for daemonset kube-system/csi-azuredisk-node to be complete
Jan 29 21:33:55.325: INFO: 1 daemonset kube-system/csi-azuredisk-node pods are running, took 108.899464ms
STEP: waiting for daemonset kube-system/csi-azuredisk-node-win to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/29/23 21:33:55.866
Jan 29 21:33:55.866: INFO: waiting for daemonset kube-system/csi-azuredisk-node-win to be complete
Jan 29 21:33:55.975: INFO: 0 daemonset kube-system/csi-azuredisk-node-win pods are running, took 109.290796ms
INFO: Waiting for control plane to be ready
INFO: Waiting for the remaining control plane machines managed by mhc-remediation-e7k8nz/mhc-remediation-s54xzy-control-plane to be provisioned
STEP: Waiting for all control plane nodes to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:96 @ 01/29/23 21:33:55.991
[FAILED] Timed out after 1200.001s.
Timed out waiting for 3 control plane machines to exist
Expected <int>: 2 to equal <int>: 3
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:117 @ 01/29/23 21:53:55.992
< Exit [It] Should successfully trigger KCP remediation - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/mhc_remediations.go:116 @ 01/29/23 21:53:55.992 (29m23.475s)
> Enter [AfterEach] Should successfully remediate unhealthy machines with MachineHealthCheck - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/mhc_remediations.go:149 @ 01/29/23 21:53:55.992
STEP: Dumping logs from the "mhc-remediation-s54xzy" workload cluster - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 01/29/23 21:53:55.992
Jan 29 21:53:55.992: INFO: Dumping workload cluster mhc-remediation-e7k8nz/mhc-remediation-s54xzy logs
Jan 29 21:53:56.035: INFO: Collecting logs for Linux node mhc-remediation-s54xzy-control-plane-2dndj in cluster mhc-remediation-s54xzy in namespace mhc-remediation-e7k8nz
Jan 29 21:54:11.183: INFO: Collecting boot logs for AzureMachine mhc-remediation-s54xzy-control-plane-2dndj
Jan 29 21:54:13.302: INFO: Collecting logs for Linux node mhc-remediation-s54xzy-control-plane-xlbmv in cluster mhc-remediation-s54xzy in namespace mhc-remediation-e7k8nz
Jan 29 21:54:23.947: INFO: Collecting boot logs for AzureMachine mhc-remediation-s54xzy-control-plane-xlbmv
Jan 29 21:54:24.740: INFO: Collecting logs for Linux node mhc-remediation-s54xzy-control-plane-cxzv5 in cluster mhc-remediation-s54xzy in namespace mhc-remediation-e7k8nz
Jan 29 21:54:44.332: INFO: Collecting boot logs for AzureMachine mhc-remediation-s54xzy-control-plane-cxzv5
Jan 29 21:54:45.018: INFO: Collecting logs for Linux node mhc-remediation-s54xzy-md-0-47w5c in cluster mhc-remediation-s54xzy in namespace mhc-remediation-e7k8nz
Jan 29 21:54:55.839: INFO: Collecting boot logs for AzureMachine mhc-remediation-s54xzy-md-0-47w5c
Jan 29 21:54:56.516: INFO: Dumping workload cluster mhc-remediation-e7k8nz/mhc-remediation-s54xzy kube-system pod logs
Jan 29 21:54:58.104: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-567d95df69-7wc9b, container calico-apiserver
Jan 29 21:54:58.104: INFO: Describing Pod calico-apiserver/calico-apiserver-567d95df69-7wc9b
Jan 29 21:54:58.342: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-567d95df69-pxc8h, container calico-apiserver
Jan 29 21:54:58.342: INFO: Describing Pod calico-apiserver/calico-apiserver-567d95df69-pxc8h
Jan 29 21:54:58.580: INFO: Creating log watcher for controller calico-system/calico-kube-controllers-594d54f99-c4982, container calico-kube-controllers
Jan 29 21:54:58.581: INFO: Describing Pod calico-system/calico-kube-controllers-594d54f99-c4982
Jan 29 21:54:58.834: INFO: Creating log watcher for controller calico-system/calico-node-f6vpp, container calico-node
Jan 29 21:54:58.835: INFO: Describing Pod calico-system/calico-node-f6vpp
Jan 29 21:54:59.145: INFO: Creating log watcher for controller calico-system/calico-node-rhn9r, container calico-node
Jan 29 21:54:59.145: INFO: Describing Pod calico-system/calico-node-rhn9r
Jan 29 21:54:59.383: INFO: Creating log watcher for controller calico-system/calico-node-vl9dr, container calico-node
Jan 29 21:54:59.383: INFO: Describing Pod calico-system/calico-node-vl9dr
Jan 29 21:54:59.613: INFO: Creating log watcher for controller calico-system/calico-node-xqlz7, container calico-node
Jan 29 21:54:59.613: INFO: Describing Pod calico-system/calico-node-xqlz7
Jan 29 21:54:59.839: INFO: Creating log watcher for controller calico-system/calico-typha-7bff96ddbb-j8v5w, container calico-typha
Jan 29 21:54:59.839: INFO: Describing Pod calico-system/calico-typha-7bff96ddbb-j8v5w
Jan 29 21:55:00.063: INFO: Creating log watcher for controller calico-system/calico-typha-7bff96ddbb-kngn8, container calico-typha
Jan 29 21:55:00.064: INFO: Describing Pod calico-system/calico-typha-7bff96ddbb-kngn8
Jan 29 21:55:00.287: INFO: Creating log watcher for controller calico-system/csi-node-driver-fdpxn, container csi-node-driver-registrar
Jan 29 21:55:00.287: INFO: Describing Pod calico-system/csi-node-driver-fdpxn
Jan 29 21:55:00.287: INFO: Creating log watcher for controller calico-system/csi-node-driver-fdpxn, container calico-csi
Jan 29 21:55:00.512: INFO: Creating log watcher for controller calico-system/csi-node-driver-gm5dj, container calico-csi
Jan 29 21:55:00.512: INFO: Creating log watcher for controller calico-system/csi-node-driver-gm5dj, container csi-node-driver-registrar
Jan 29 21:55:00.512: INFO: Describing Pod calico-system/csi-node-driver-gm5dj
Jan 29 21:55:00.780: INFO: Describing Pod calico-system/csi-node-driver-h7pcf
Jan 29 21:55:00.780: INFO: Creating log watcher for controller calico-system/csi-node-driver-h7pcf, container calico-csi
Jan 29 21:55:00.780: INFO: Creating log watcher for controller calico-system/csi-node-driver-h7pcf, container csi-node-driver-registrar
Jan 29 21:55:00.903: INFO: Error starting logs stream for pod calico-system/csi-node-driver-h7pcf, container csi-node-driver-registrar: container "csi-node-driver-registrar" in pod "csi-node-driver-h7pcf" is waiting to start: ContainerCreating
Jan 29 21:55:00.903: INFO: Error starting logs stream for pod calico-system/csi-node-driver-h7pcf, container calico-csi: container "calico-csi" in pod "csi-node-driver-h7pcf" is waiting to start: ContainerCreating
Jan 29 21:55:01.181: INFO: Creating log watcher for controller calico-system/csi-node-driver-smh4n, container calico-csi
Jan 29 21:55:01.181: INFO: Describing Pod calico-system/csi-node-driver-smh4n
Jan 29 21:55:01.181: INFO: Creating log watcher for controller calico-system/csi-node-driver-smh4n, container csi-node-driver-registrar
Jan 29 21:55:01.581: INFO: Creating log watcher for controller kube-system/coredns-57575c5f89-8rjqp, container coredns
Jan
29 21:55:01.581: INFO: Describing Pod kube-system/coredns-57575c5f89-8rjqp Jan 29 21:55:03.179: INFO: Creating log watcher for controller kube-system/coredns-57575c5f89-xxf9q, container coredns Jan 29 21:55:03.179: INFO: Describing Pod kube-system/coredns-57575c5f89-xxf9q Jan 29 21:55:03.406: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-zr2pp, container csi-snapshotter Jan 29 21:55:03.406: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-zr2pp, container csi-provisioner Jan 29 21:55:03.407: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-zr2pp, container csi-resizer Jan 29 21:55:03.407: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-zr2pp, container csi-attacher Jan 29 21:55:03.407: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-zr2pp, container azuredisk Jan 29 21:55:03.408: INFO: Describing Pod kube-system/csi-azuredisk-controller-545d478dbf-zr2pp Jan 29 21:55:03.408: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-zr2pp, container liveness-probe Jan 29 21:55:05.083: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-b7ztc, container node-driver-registrar Jan 29 21:55:05.083: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-b7ztc, container liveness-probe Jan 29 21:55:05.083: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-b7ztc, container azuredisk Jan 29 21:55:05.083: INFO: Describing Pod kube-system/csi-azuredisk-node-b7ztc Jan 29 21:55:05.309: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-cm9dh, container node-driver-registrar Jan 29 21:55:05.309: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-cm9dh, container azuredisk Jan 29 21:55:05.310: INFO: Describing Pod 
kube-system/csi-azuredisk-node-cm9dh Jan 29 21:55:05.310: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-cm9dh, container liveness-probe Jan 29 21:55:05.538: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-ggcpt, container node-driver-registrar Jan 29 21:55:05.538: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-ggcpt, container azuredisk Jan 29 21:55:05.538: INFO: Describing Pod kube-system/csi-azuredisk-node-ggcpt Jan 29 21:55:05.538: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-ggcpt, container liveness-probe Jan 29 21:55:05.763: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-kbq2m, container node-driver-registrar Jan 29 21:55:05.763: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-kbq2m, container liveness-probe Jan 29 21:55:05.764: INFO: Describing Pod kube-system/csi-azuredisk-node-kbq2m Jan 29 21:55:05.764: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-kbq2m, container azuredisk Jan 29 21:55:05.987: INFO: Creating log watcher for controller kube-system/etcd-mhc-remediation-s54xzy-control-plane-2dndj, container etcd Jan 29 21:55:05.987: INFO: Describing Pod kube-system/etcd-mhc-remediation-s54xzy-control-plane-2dndj Jan 29 21:55:06.210: INFO: Creating log watcher for controller kube-system/etcd-mhc-remediation-s54xzy-control-plane-cxzv5, container etcd Jan 29 21:55:06.210: INFO: Describing Pod kube-system/etcd-mhc-remediation-s54xzy-control-plane-cxzv5 Jan 29 21:55:06.434: INFO: Creating log watcher for controller kube-system/etcd-mhc-remediation-s54xzy-control-plane-xlbmv, container etcd Jan 29 21:55:06.434: INFO: Describing Pod kube-system/etcd-mhc-remediation-s54xzy-control-plane-xlbmv Jan 29 21:55:06.659: INFO: Creating log watcher for controller kube-system/kube-apiserver-mhc-remediation-s54xzy-control-plane-2dndj, container kube-apiserver Jan 29 21:55:06.659: INFO: 
Describing Pod kube-system/kube-apiserver-mhc-remediation-s54xzy-control-plane-2dndj Jan 29 21:55:06.891: INFO: Creating log watcher for controller kube-system/kube-apiserver-mhc-remediation-s54xzy-control-plane-cxzv5, container kube-apiserver Jan 29 21:55:06.891: INFO: Describing Pod kube-system/kube-apiserver-mhc-remediation-s54xzy-control-plane-cxzv5 Jan 29 21:55:07.117: INFO: Creating log watcher for controller kube-system/kube-apiserver-mhc-remediation-s54xzy-control-plane-xlbmv, container kube-apiserver Jan 29 21:55:07.118: INFO: Describing Pod kube-system/kube-apiserver-mhc-remediation-s54xzy-control-plane-xlbmv Jan 29 21:55:07.346: INFO: Creating log watcher for controller kube-system/kube-controller-manager-mhc-remediation-s54xzy-control-plane-2dndj, container kube-controller-manager Jan 29 21:55:07.346: INFO: Describing Pod kube-system/kube-controller-manager-mhc-remediation-s54xzy-control-plane-2dndj Jan 29 21:55:07.598: INFO: Describing Pod kube-system/kube-controller-manager-mhc-remediation-s54xzy-control-plane-xlbmv Jan 29 21:55:07.598: INFO: Creating log watcher for controller kube-system/kube-controller-manager-mhc-remediation-s54xzy-control-plane-xlbmv, container kube-controller-manager Jan 29 21:55:07.997: INFO: Creating log watcher for controller kube-system/kube-proxy-8kb7j, container kube-proxy Jan 29 21:55:07.997: INFO: Describing Pod kube-system/kube-proxy-8kb7j Jan 29 21:55:08.398: INFO: Describing Pod kube-system/kube-proxy-dg6db Jan 29 21:55:08.398: INFO: Creating log watcher for controller kube-system/kube-proxy-dg6db, container kube-proxy Jan 29 21:55:08.795: INFO: Describing Pod kube-system/kube-proxy-dqw8f Jan 29 21:55:08.795: INFO: Creating log watcher for controller kube-system/kube-proxy-dqw8f, container kube-proxy Jan 29 21:55:09.264: INFO: Describing Pod kube-system/kube-proxy-jr7x6 Jan 29 21:55:09.264: INFO: Creating log watcher for controller kube-system/kube-proxy-jr7x6, container kube-proxy Jan 29 21:55:09.597: INFO: 
Describing Pod kube-system/kube-scheduler-mhc-remediation-s54xzy-control-plane-2dndj Jan 29 21:55:09.597: INFO: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-s54xzy-control-plane-2dndj, container kube-scheduler Jan 29 21:55:09.995: INFO: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-s54xzy-control-plane-xlbmv, container kube-scheduler Jan 29 21:55:09.995: INFO: Describing Pod kube-system/kube-scheduler-mhc-remediation-s54xzy-control-plane-xlbmv Jan 29 21:55:10.400: INFO: Fetching kube-system pod logs took 13.88360447s Jan 29 21:55:10.400: INFO: Dumping workload cluster mhc-remediation-e7k8nz/mhc-remediation-s54xzy Azure activity log Jan 29 21:55:10.400: INFO: Creating log watcher for controller tigera-operator/tigera-operator-65d6bf4d4f-25hnm, container tigera-operator Jan 29 21:55:10.401: INFO: Describing Pod tigera-operator/tigera-operator-65d6bf4d4f-25hnm Jan 29 21:55:11.986: INFO: Fetching activity logs took 1.586145297s STEP: Dumping all the Cluster API resources in the "mhc-remediation-e7k8nz" namespace - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 01/29/23 21:55:11.986 STEP: Deleting cluster mhc-remediation-e7k8nz/mhc-remediation-s54xzy - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 01/29/23 21:55:12.507 STEP: Deleting cluster mhc-remediation-s54xzy - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/29/23 21:55:12.531 INFO: Waiting for the Cluster mhc-remediation-e7k8nz/mhc-remediation-s54xzy to be deleted STEP: Waiting for cluster mhc-remediation-s54xzy to be deleted - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/29/23 21:55:12.549 STEP: Deleting namespace used for hosting the "mhc-remediation" test spec - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 01/29/23 22:03:12.808 INFO: Deleting 
namespace mhc-remediation-e7k8nz < Exit [AfterEach] Should successfully remediate unhealthy machines with MachineHealthCheck - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/mhc_remediations.go:149 @ 01/29/23 22:03:12.826 (9m16.834s) > Enter [AfterEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:97 @ 01/29/23 22:03:12.826 Jan 29 22:03:12.826: INFO: FAILED! Jan 29 22:03:12.826: INFO: Cleaning up after "Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation" spec STEP: Redacting sensitive information from logs - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:212 @ 01/29/23 22:03:12.826 INFO: "Should successfully trigger KCP remediation" started at Sun, 29 Jan 2023 22:04:25 UTC on Ginkgo node 7 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml < Exit [AfterEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:97 @ 01/29/23 22:04:25.211 (1m12.385s)
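To reproduce just this failing spec, the suite can be focused on its name, mirroring the `--ginkgo.focus` regex style shown in the command at the top of this report (spaces escaped as `\s`). A minimal sketch; the `make test-e2e` target and `GINKGO_FOCUS` variable are assumptions about the cluster-api-provider-azure repo layout and may differ on the release-1.7 branch:

```shell
# Build a ginkgo focus regex from the failing spec name by escaping
# spaces as \s, as in the report header's --ginkgo.focus argument.
SPEC='Should successfully trigger KCP remediation'
FOCUS=$(printf '%s' "$SPEC" | sed 's/ /\\s/g')
echo "$FOCUS"
# Then, from a cluster-api-provider-azure checkout (assumed target):
#   GINKGO_FOCUS="$FOCUS" make test-e2e
```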
capz-e2e [It] Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capz-e2e [It] Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e [It] Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e [It] Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [It] Conformance Tests conformance-tests
capz-e2e [It] Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node
capz-e2e [It] Workload cluster creation Creating a VMSS cluster [REQUIRED] with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a cluster that uses the external cloud provider and external azurediskcsi driver [OPTIONAL] with a 1 control plane nodes and 2 worker nodes
capz-e2e [It] Workload cluster creation Creating a cluster that uses the external cloud provider and machinepools [OPTIONAL] with 1 control plane node and 1 machinepool
capz-e2e [It] Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e [It] Workload cluster creation Creating a highly available cluster [REQUIRED] With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a ipv6 control-plane cluster [REQUIRED] With ipv6 worker node
capz-e2e [It] Workload cluster creation Creating a private cluster [OPTIONAL] Creates a public management cluster in a custom vnet
capz-e2e [It] Workload cluster creation Creating an AKS cluster [EXPERIMENTAL][Managed Kubernetes] with a single control plane node and 1 node
capz-e2e [It] Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node
capz-e2e [It] [K8s-Upgrade] Running the CSI migration tests [CSI Migration] Running CSI migration test CSI=external CCM=external AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with out-of-tree cloud provider
capz-e2e [It] [K8s-Upgrade] Running the CSI migration tests [CSI Migration] Running CSI migration test CSI=external CCM=internal AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with intree cloud provider
capz-e2e [It] [K8s-Upgrade] Running the CSI migration tests [CSI Migration] Running CSI migration test CSI=internal CCM=internal AzureDiskCSIMigration=false: upgrade to v1.23 should create volumes dynamically with intree cloud provider