Result   | FAILURE
Tests    | 1 failed / 27 succeeded
Started  |
Elapsed  | 1h23m
Revision | release-1.7
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sRunning\sthe\sCluster\sAPI\sE2E\stests\sShould\ssuccessfully\sremediate\sunhealthy\smachines\swith\sMachineHealthCheck\sShould\ssuccessfully\strigger\sKCP\sremediation$'
[FAILED] Timed out after 1800.001s.
Expected
    <bool>: false
to be true
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/machinehealthcheck_helpers.go:168 @ 01/19/23 22:01:57.066
There were additional failures detected after the initial failure. These are visible in the timeline.
from junit.e2e_suite.1.xml
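A note on the failure shape: "Timed out after …s. Expected <bool>: false to be true" is what a Gomega Eventually(...).Should(BeTrue()) poll prints when its condition never becomes true within the timeout. The sketch below is a hedged illustration of that pattern only; the function name isRemediated, the timeout values, and the wiring are hypothetical stand-ins, not the actual helper at machinehealthcheck_helpers.go:168.

```go
package e2esketch

import (
	"testing"
	"time"

	"github.com/onsi/gomega"
)

// isRemediated is a hypothetical stand-in for whatever condition the framework
// checks, e.g. "the Machine marked unhealthy has been deleted and replaced".
func isRemediated() bool {
	// ...query the management cluster here...
	return false
}

// TestWaitForRemediation shows the polling pattern: Eventually keeps calling
// the function until it returns true or the timeout elapses, at which point
// Gomega reports "Timed out after ...s. Expected <bool>: false to be true".
// The 30-minute timeout mirrors the 1800s seen in the failure above; the real
// intervals come from the e2e suite's configuration.
func TestWaitForRemediation(t *testing.T) {
	g := gomega.NewWithT(t)
	g.Eventually(isRemediated, 30*time.Minute, 10*time.Second).
		Should(gomega.BeTrue(), "machine was not remediated in time")
}
```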
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/mhc-remediation-ea0ks1-md-0 created cluster.cluster.x-k8s.io/mhc-remediation-ea0ks1 created machinedeployment.cluster.x-k8s.io/mhc-remediation-ea0ks1-md-0 created machinehealthcheck.cluster.x-k8s.io/mhc-remediation-ea0ks1-mhc-0 created kubeadmcontrolplane.controlplane.cluster.x-k8s.io/mhc-remediation-ea0ks1-control-plane created azurecluster.infrastructure.cluster.x-k8s.io/mhc-remediation-ea0ks1 created azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created azuremachinetemplate.infrastructure.cluster.x-k8s.io/mhc-remediation-ea0ks1-control-plane created azuremachinetemplate.infrastructure.cluster.x-k8s.io/mhc-remediation-ea0ks1-md-0 created felixconfiguration.crd.projectcalico.org/default configured > Enter [BeforeEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:52 @ 01/19/23 21:23:11.058 INFO: "" started at Thu, 19 Jan 2023 21:23:11 UTC on Ginkgo node 5 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml < Exit [BeforeEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:52 @ 01/19/23 21:23:11.381 (323ms) > Enter [BeforeEach] Should successfully remediate unhealthy machines with MachineHealthCheck - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/mhc_remediations.go:69 @ 01/19/23 21:23:11.381 STEP: Creating a namespace for hosting the "mhc-remediation" test spec - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 01/19/23 21:23:11.381 INFO: Creating namespace mhc-remediation-u1144g INFO: Creating event watcher for namespace "mhc-remediation-u1144g" < Exit [BeforeEach] Should successfully remediate unhealthy machines with MachineHealthCheck - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/mhc_remediations.go:69 @ 01/19/23 21:23:11.502 (121ms) > Enter [It] Should successfully trigger KCP remediation - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/mhc_remediations.go:116 @ 01/19/23 21:23:11.502 STEP: Creating a workload cluster - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/mhc_remediations.go:117 @ 01/19/23 21:23:11.502 INFO: Creating the workload cluster with name "mhc-remediation-ea0ks1" using the "kcp-remediation" template (Kubernetes v1.24.9, 3 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster mhc-remediation-ea0ks1 --infrastructure (default) --kubernetes-version v1.24.9 --control-plane-machine-count 3 --worker-machine-count 1 --flavor kcp-remediation INFO: Applying the cluster template yaml to the cluster INFO: Waiting for the cluster infrastructure to be provisioned STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_helpers.go:134 @ 01/19/23 21:23:18.824 INFO: Waiting for control plane to be initialized STEP: Installing Calico CNI via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:51 @ 01/19/23 21:25:09.015 STEP: Configuring calico CNI helm chart for IPv4 configuration - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:131 @ 01/19/23 21:25:09.015 Jan 19 21:27:09.290: INFO: getting history for release projectcalico Jan 19 21:27:09.354: INFO: Release projectcalico does not exist, installing it Jan 19 21:27:10.504: INFO: creating 1 resource(s) Jan 19 
21:27:10.602: INFO: creating 1 resource(s) Jan 19 21:27:10.691: INFO: creating 1 resource(s) Jan 19 21:27:10.776: INFO: creating 1 resource(s) Jan 19 21:27:10.865: INFO: creating 1 resource(s) Jan 19 21:27:10.963: INFO: creating 1 resource(s) Jan 19 21:27:11.161: INFO: creating 1 resource(s) Jan 19 21:27:11.310: INFO: creating 1 resource(s) Jan 19 21:27:11.389: INFO: creating 1 resource(s) Jan 19 21:27:11.467: INFO: creating 1 resource(s) Jan 19 21:27:11.550: INFO: creating 1 resource(s) Jan 19 21:27:11.634: INFO: creating 1 resource(s) Jan 19 21:27:11.712: INFO: creating 1 resource(s) Jan 19 21:27:11.790: INFO: creating 1 resource(s) Jan 19 21:27:11.868: INFO: creating 1 resource(s) Jan 19 21:27:11.966: INFO: creating 1 resource(s) Jan 19 21:27:12.088: INFO: creating 1 resource(s) Jan 19 21:27:12.188: INFO: creating 1 resource(s) Jan 19 21:27:12.307: INFO: creating 1 resource(s) Jan 19 21:27:12.532: INFO: creating 1 resource(s) Jan 19 21:27:12.922: INFO: creating 1 resource(s) Jan 19 21:27:13.006: INFO: Clearing discovery cache Jan 19 21:27:13.006: INFO: beginning wait for 21 resources with timeout of 1m0s Jan 19 21:27:17.453: INFO: creating 1 resource(s) Jan 19 21:27:18.247: INFO: creating 6 resource(s) Jan 19 21:27:19.173: INFO: Install complete STEP: Waiting for Ready tigera-operator deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:60 @ 01/19/23 21:27:19.666 STEP: waiting for deployment tigera-operator/tigera-operator to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/19/23 21:27:19.925 Jan 19 21:27:19.925: INFO: starting to wait for deployment to become available Jan 19 21:27:30.052: INFO: Deployment tigera-operator/tigera-operator is now available, took 10.126720337s STEP: Waiting for Ready calico-system deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:74 @ 01/19/23 21:27:31.169 STEP: waiting for deployment calico-system/calico-kube-controllers to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/19/23 21:27:31.49 Jan 19 21:27:31.490: INFO: starting to wait for deployment to become available Jan 19 21:28:22.702: INFO: Deployment calico-system/calico-kube-controllers is now available, took 51.212298649s STEP: waiting for deployment calico-system/calico-typha to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/19/23 21:28:23.281 Jan 19 21:28:23.281: INFO: starting to wait for deployment to become available Jan 19 21:28:23.344: INFO: Deployment calico-system/calico-typha is now available, took 62.784647ms STEP: Waiting for Ready calico-apiserver deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:79 @ 01/19/23 21:28:23.344 STEP: waiting for deployment calico-apiserver/calico-apiserver to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/19/23 21:28:23.8 Jan 19 21:28:23.801: INFO: starting to wait for deployment to become available Jan 19 21:28:43.995: INFO: Deployment calico-apiserver/calico-apiserver is now available, took 20.194662862s STEP: Waiting for Ready calico-node daemonset pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:84 @ 01/19/23 21:28:43.995 STEP: waiting for daemonset calico-system/calico-node to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/19/23 21:28:44.325 Jan 19 
21:28:44.325: INFO: waiting for daemonset calico-system/calico-node to be complete Jan 19 21:28:44.388: INFO: 1 daemonset calico-system/calico-node pods are running, took 63.177706ms INFO: Waiting for the first control plane machine managed by mhc-remediation-u1144g/mhc-remediation-ea0ks1-control-plane to be provisioned STEP: Waiting for one control plane node to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 @ 01/19/23 21:28:44.421 STEP: Installing azure-disk CSI driver components via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:71 @ 01/19/23 21:28:44.431 Jan 19 21:28:44.528: INFO: getting history for release azuredisk-csi-driver-oot Jan 19 21:28:44.593: INFO: Release azuredisk-csi-driver-oot does not exist, installing it Jan 19 21:28:48.238: INFO: creating 1 resource(s) Jan 19 21:28:48.525: INFO: creating 18 resource(s) Jan 19 21:28:49.085: INFO: Install complete STEP: Waiting for Ready csi-azuredisk-controller deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:81 @ 01/19/23 21:28:49.113 STEP: waiting for deployment kube-system/csi-azuredisk-controller to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/19/23 21:28:49.374 Jan 19 21:28:49.374: INFO: starting to wait for deployment to become available Jan 19 21:29:45.010: INFO: Deployment kube-system/csi-azuredisk-controller is now available, took 55.63573893s STEP: Waiting for Running azure-disk-csi node pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:86 @ 01/19/23 21:29:45.01 STEP: waiting for daemonset kube-system/csi-azuredisk-node to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/19/23 21:29:45.333 Jan 19 21:29:45.333: INFO: waiting for daemonset kube-system/csi-azuredisk-node to be complete Jan 19 21:29:55.463: INFO: 3 daemonset kube-system/csi-azuredisk-node pods are running, took 10.130239614s STEP: waiting for daemonset kube-system/csi-azuredisk-node-win to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/19/23 21:29:55.856 Jan 19 21:29:55.856: INFO: waiting for daemonset kube-system/csi-azuredisk-node-win to be complete Jan 19 21:29:55.920: INFO: 0 daemonset kube-system/csi-azuredisk-node-win pods are running, took 63.972679ms INFO: Waiting for control plane to be ready INFO: Waiting for the remaining control plane machines managed by mhc-remediation-u1144g/mhc-remediation-ea0ks1-control-plane to be provisioned STEP: Waiting for all control plane nodes to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:96 @ 01/19/23 21:29:55.937 INFO: Waiting for control plane mhc-remediation-u1144g/mhc-remediation-ea0ks1-control-plane to be ready (implies underlying nodes to be ready as well) STEP: Waiting for the control plane to be ready - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:165 @ 01/19/23 21:31:56.059 STEP: Checking all the control plane machines are in the expected failure domains - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:196 @ 01/19/23 21:31:56.065 INFO: Waiting for the machine deployments to be provisioned STEP: Waiting for the workload nodes to exist - 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/machinedeployment_helpers.go:102 @ 01/19/23 21:31:56.091 STEP: Checking all the machines controlled by mhc-remediation-ea0ks1-md-0 are in the "<None>" failure domain - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/19/23 21:31:56.102 INFO: Waiting for the machine pools to be provisioned STEP: Setting a machine unhealthy and wait for KubeadmControlPlane remediation - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/mhc_remediations.go:139 @ 01/19/23 21:31:56.152 Discovering machine health check resources Ensuring there is at least 1 Machine that MachineHealthCheck is matching Patching MachineHealthCheck unhealthy condition to one of the nodes INFO: Patching the node condition to the node Waiting for remediation Waiting until the node with unhealthy node condition is remediated [FAILED] Timed out after 1800.001s. Expected <bool>: false to be true In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/machinehealthcheck_helpers.go:168 @ 01/19/23 22:01:57.066 < Exit [It] Should successfully trigger KCP remediation - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/mhc_remediations.go:116 @ 01/19/23 22:01:57.066 (38m45.564s) > Enter [AfterEach] Should successfully remediate unhealthy machines with MachineHealthCheck - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/mhc_remediations.go:149 @ 01/19/23 22:01:57.066 STEP: Dumping logs from the "mhc-remediation-ea0ks1" workload cluster - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 01/19/23 22:01:57.066 Jan 19 22:01:57.066: INFO: Dumping workload cluster mhc-remediation-u1144g/mhc-remediation-ea0ks1 logs Jan 19 22:01:57.115: INFO: Collecting logs for Linux node mhc-remediation-ea0ks1-control-plane-4dngj in cluster mhc-remediation-ea0ks1 in namespace mhc-remediation-u1144g Jan 19 22:02:11.253: INFO: Collecting boot logs for AzureMachine mhc-remediation-ea0ks1-control-plane-4dngj Jan 19 22:02:12.498: INFO: Collecting logs for Linux node mhc-remediation-ea0ks1-control-plane-nmjzw in cluster mhc-remediation-ea0ks1 in namespace mhc-remediation-u1144g Jan 19 22:02:22.333: INFO: Collecting boot logs for AzureMachine mhc-remediation-ea0ks1-control-plane-nmjzw Jan 19 22:02:22.982: INFO: Collecting logs for Linux node mhc-remediation-ea0ks1-control-plane-dd8x5 in cluster mhc-remediation-ea0ks1 in namespace mhc-remediation-u1144g Jan 19 22:02:35.421: INFO: Collecting boot logs for AzureMachine mhc-remediation-ea0ks1-control-plane-dd8x5 Jan 19 22:02:35.983: INFO: Collecting logs for Linux node mhc-remediation-ea0ks1-md-0-qnqh4 in cluster mhc-remediation-ea0ks1 in namespace mhc-remediation-u1144g Jan 19 22:02:44.112: INFO: Collecting boot logs for AzureMachine mhc-remediation-ea0ks1-md-0-qnqh4 Jan 19 22:02:44.652: INFO: Dumping workload cluster mhc-remediation-u1144g/mhc-remediation-ea0ks1 kube-system pod logs Jan 19 22:02:45.162: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-8574d86cd7-6xp5j, container calico-apiserver Jan 19 22:02:45.162: INFO: Collecting events for Pod calico-apiserver/calico-apiserver-8574d86cd7-6xp5j Jan 19 22:02:45.162: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-8574d86cd7-f2c5g, container calico-apiserver Jan 19 22:02:45.162: INFO: Collecting events for Pod calico-apiserver/calico-apiserver-8574d86cd7-f2c5g Jan 19 22:02:45.238: INFO: Creating log watcher for 
controller calico-system/calico-kube-controllers-594d54f99-9gqg5, container calico-kube-controllers Jan 19 22:02:45.238: INFO: Creating log watcher for controller calico-system/calico-node-hmx9x, container calico-node Jan 19 22:02:45.238: INFO: Collecting events for Pod calico-system/calico-node-67cfn Jan 19 22:02:45.239: INFO: Collecting events for Pod calico-system/calico-kube-controllers-594d54f99-9gqg5 Jan 19 22:02:45.239: INFO: Creating log watcher for controller calico-system/calico-node-67cfn, container calico-node Jan 19 22:02:45.239: INFO: Creating log watcher for controller calico-system/csi-node-driver-dh8gh, container calico-csi Jan 19 22:02:45.239: INFO: Creating log watcher for controller calico-system/calico-typha-74dcbbd6d8-gwrms, container calico-typha Jan 19 22:02:45.239: INFO: Creating log watcher for controller calico-system/csi-node-driver-dh8gh, container csi-node-driver-registrar Jan 19 22:02:45.240: INFO: Creating log watcher for controller calico-system/calico-typha-74dcbbd6d8-xsh5n, container calico-typha Jan 19 22:02:45.240: INFO: Collecting events for Pod calico-system/calico-node-hmx9x Jan 19 22:02:45.240: INFO: Creating log watcher for controller calico-system/calico-node-cjk8p, container calico-node Jan 19 22:02:45.240: INFO: Creating log watcher for controller calico-system/calico-node-mn2nc, container calico-node Jan 19 22:02:45.240: INFO: Collecting events for Pod calico-system/calico-node-cjk8p Jan 19 22:02:45.240: INFO: Collecting events for Pod calico-system/csi-node-driver-dh8gh Jan 19 22:02:45.240: INFO: Creating log watcher for controller calico-system/csi-node-driver-fl99b, container calico-csi Jan 19 22:02:45.240: INFO: Collecting events for Pod calico-system/csi-node-driver-6fztn Jan 19 22:02:45.240: INFO: Collecting events for Pod calico-system/calico-typha-74dcbbd6d8-gwrms Jan 19 22:02:45.240: INFO: Creating log watcher for controller calico-system/csi-node-driver-6fztn, container csi-node-driver-registrar Jan 19 22:02:45.240: INFO: Creating log watcher for controller calico-system/csi-node-driver-fl99b, container csi-node-driver-registrar Jan 19 22:02:45.241: INFO: Collecting events for Pod calico-system/calico-node-mn2nc Jan 19 22:02:45.241: INFO: Collecting events for Pod calico-system/csi-node-driver-fl99b Jan 19 22:02:45.241: INFO: Collecting events for Pod calico-system/calico-typha-74dcbbd6d8-xsh5n Jan 19 22:02:45.241: INFO: Creating log watcher for controller calico-system/csi-node-driver-6fztn, container calico-csi Jan 19 22:02:45.241: INFO: Creating log watcher for controller calico-system/csi-node-driver-c4hzm, container csi-node-driver-registrar Jan 19 22:02:45.241: INFO: Collecting events for Pod calico-system/csi-node-driver-c4hzm Jan 19 22:02:45.242: INFO: Creating log watcher for controller calico-system/csi-node-driver-c4hzm, container calico-csi Jan 19 22:02:45.330: INFO: Collecting events for Pod kube-system/coredns-57575c5f89-b87zr Jan 19 22:02:45.330: INFO: Creating log watcher for controller kube-system/coredns-57575c5f89-d7tts, container coredns Jan 19 22:02:45.330: INFO: Creating log watcher for controller kube-system/coredns-57575c5f89-b87zr, container coredns Jan 19 22:02:45.330: INFO: Collecting events for Pod kube-system/csi-azuredisk-controller-545d478dbf-7tn5x Jan 19 22:02:45.330: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-l97mb, container liveness-probe Jan 19 22:02:45.330: INFO: Collecting events for Pod kube-system/coredns-57575c5f89-d7tts Jan 19 22:02:45.331: INFO: Creating log 
watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-7tn5x, container csi-provisioner Jan 19 22:02:45.331: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-l97mb, container node-driver-registrar Jan 19 22:02:45.332: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-7tn5x, container csi-attacher Jan 19 22:02:45.332: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-jtmzd, container liveness-probe Jan 19 22:02:45.332: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-7tn5x, container csi-resizer Jan 19 22:02:45.332: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-l97mb, container azuredisk Jan 19 22:02:45.332: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-7tn5x, container liveness-probe Jan 19 22:02:45.332: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-7tn5x, container azuredisk Jan 19 22:02:45.333: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-7tn5x, container csi-snapshotter Jan 19 22:02:45.333: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-jtmzd, container node-driver-registrar Jan 19 22:02:45.333: INFO: Collecting events for Pod kube-system/csi-azuredisk-node-l97mb Jan 19 22:02:45.333: INFO: Collecting events for Pod kube-system/kube-apiserver-mhc-remediation-ea0ks1-control-plane-nmjzw Jan 19 22:02:45.333: INFO: Creating log watcher for controller kube-system/kube-controller-manager-mhc-remediation-ea0ks1-control-plane-4dngj, container kube-controller-manager Jan 19 22:02:45.334: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-sd5vt, container liveness-probe Jan 19 22:02:45.334: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-jtmzd, container azuredisk Jan 19 22:02:45.334: INFO: Collecting events for Pod kube-system/etcd-mhc-remediation-ea0ks1-control-plane-4dngj Jan 19 22:02:45.334: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-sd5vt, container node-driver-registrar Jan 19 22:02:45.334: INFO: Collecting events for Pod kube-system/csi-azuredisk-node-jtmzd Jan 19 22:02:45.334: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-w8n5c, container node-driver-registrar Jan 19 22:02:45.334: INFO: Collecting events for Pod kube-system/csi-azuredisk-node-sd5vt Jan 19 22:02:45.334: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-sd5vt, container azuredisk Jan 19 22:02:45.334: INFO: Creating log watcher for controller kube-system/etcd-mhc-remediation-ea0ks1-control-plane-dd8x5, container etcd Jan 19 22:02:45.335: INFO: Creating log watcher for controller kube-system/kube-proxy-kk2vv, container kube-proxy Jan 19 22:02:45.335: INFO: Collecting events for Pod kube-system/kube-proxy-kk2vv Jan 19 22:02:45.335: INFO: Creating log watcher for controller kube-system/kube-proxy-pqtkl, container kube-proxy Jan 19 22:02:45.335: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-w8n5c, container azuredisk Jan 19 22:02:45.335: INFO: Collecting events for Pod kube-system/etcd-mhc-remediation-ea0ks1-control-plane-dd8x5 Jan 19 22:02:45.335: INFO: Creating log watcher for controller kube-system/etcd-mhc-remediation-ea0ks1-control-plane-nmjzw, container etcd Jan 19 22:02:45.336: INFO: Collecting events for Pod 
kube-system/kube-controller-manager-mhc-remediation-ea0ks1-control-plane-nmjzw Jan 19 22:02:45.336: INFO: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-ea0ks1-control-plane-dd8x5, container kube-scheduler Jan 19 22:02:45.336: INFO: Collecting events for Pod kube-system/kube-apiserver-mhc-remediation-ea0ks1-control-plane-4dngj Jan 19 22:02:45.336: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-w8n5c, container liveness-probe Jan 19 22:02:45.336: INFO: Collecting events for Pod kube-system/kube-controller-manager-mhc-remediation-ea0ks1-control-plane-4dngj Jan 19 22:02:45.336: INFO: Creating log watcher for controller kube-system/kube-controller-manager-mhc-remediation-ea0ks1-control-plane-dd8x5, container kube-controller-manager Jan 19 22:02:45.336: INFO: Collecting events for Pod kube-system/etcd-mhc-remediation-ea0ks1-control-plane-nmjzw Jan 19 22:02:45.336: INFO: Creating log watcher for controller kube-system/kube-proxy-2956x, container kube-proxy Jan 19 22:02:45.336: INFO: Creating log watcher for controller kube-system/kube-apiserver-mhc-remediation-ea0ks1-control-plane-4dngj, container kube-apiserver Jan 19 22:02:45.336: INFO: Collecting events for Pod kube-system/csi-azuredisk-node-w8n5c Jan 19 22:02:45.336: INFO: Creating log watcher for controller kube-system/etcd-mhc-remediation-ea0ks1-control-plane-4dngj, container etcd Jan 19 22:02:45.336: INFO: Collecting events for Pod kube-system/kube-scheduler-mhc-remediation-ea0ks1-control-plane-dd8x5 Jan 19 22:02:45.336: INFO: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-ea0ks1-control-plane-nmjzw, container kube-scheduler Jan 19 22:02:45.337: INFO: Creating log watcher for controller kube-system/kube-apiserver-mhc-remediation-ea0ks1-control-plane-dd8x5, container kube-apiserver Jan 19 22:02:45.337: INFO: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-ea0ks1-control-plane-4dngj, container kube-scheduler Jan 19 22:02:45.337: INFO: Collecting events for Pod kube-system/kube-scheduler-mhc-remediation-ea0ks1-control-plane-4dngj Jan 19 22:02:45.337: INFO: Creating log watcher for controller kube-system/kube-controller-manager-mhc-remediation-ea0ks1-control-plane-nmjzw, container kube-controller-manager Jan 19 22:02:45.337: INFO: Collecting events for Pod kube-system/kube-controller-manager-mhc-remediation-ea0ks1-control-plane-dd8x5 Jan 19 22:02:45.337: INFO: Collecting events for Pod kube-system/kube-apiserver-mhc-remediation-ea0ks1-control-plane-dd8x5 Jan 19 22:02:45.337: INFO: Collecting events for Pod kube-system/kube-proxy-2956x Jan 19 22:02:45.337: INFO: Creating log watcher for controller kube-system/kube-apiserver-mhc-remediation-ea0ks1-control-plane-nmjzw, container kube-apiserver Jan 19 22:02:45.337: INFO: Collecting events for Pod kube-system/kube-proxy-pqtkl Jan 19 22:02:45.337: INFO: Collecting events for Pod kube-system/kube-scheduler-mhc-remediation-ea0ks1-control-plane-nmjzw Jan 19 22:02:45.337: INFO: Creating log watcher for controller kube-system/kube-proxy-5v48z, container kube-proxy Jan 19 22:02:45.337: INFO: Collecting events for Pod kube-system/kube-proxy-5v48z Jan 19 22:02:45.474: INFO: Fetching kube-system pod logs took 821.502563ms Jan 19 22:02:45.474: INFO: Dumping workload cluster mhc-remediation-u1144g/mhc-remediation-ea0ks1 Azure activity log Jan 19 22:02:45.475: INFO: Creating log watcher for controller tigera-operator/tigera-operator-65d6bf4d4f-jjbhg, container tigera-operator Jan 19 
22:02:45.475: INFO: Collecting events for Pod tigera-operator/tigera-operator-65d6bf4d4f-jjbhg Jan 19 22:02:49.860: INFO: Fetching activity logs took 4.385936556s STEP: Dumping all the Cluster API resources in the "mhc-remediation-u1144g" namespace - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 01/19/23 22:02:49.86 STEP: Deleting cluster mhc-remediation-u1144g/mhc-remediation-ea0ks1 - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 01/19/23 22:02:50.177 STEP: Deleting cluster mhc-remediation-ea0ks1 - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/19/23 22:02:50.193 INFO: Waiting for the Cluster mhc-remediation-u1144g/mhc-remediation-ea0ks1 to be deleted STEP: Waiting for cluster mhc-remediation-ea0ks1 to be deleted - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/19/23 22:02:50.206 [FAILED] Timed out after 1800.001s. Expected <bool>: false to be true In [AfterEach] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_helpers.go:176 @ 01/19/23 22:32:50.208 < Exit [AfterEach] Should successfully remediate unhealthy machines with MachineHealthCheck - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/mhc_remediations.go:149 @ 01/19/23 22:32:50.208 (30m53.141s) > Enter [AfterEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:97 @ 01/19/23 22:32:50.208 Jan 19 22:32:50.208: INFO: FAILED! Jan 19 22:32:50.208: INFO: Cleaning up after "Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation" spec STEP: Redacting sensitive information from logs - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:212 @ 01/19/23 22:32:50.208 INFO: "Should successfully trigger KCP remediation" started at Thu, 19 Jan 2023 22:34:25 UTC on Ginkgo node 5 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml < Exit [AfterEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:97 @ 01/19/23 22:34:25.244 (1m35.036s)
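The second timeout, reported In [AfterEach] at cluster_helpers.go:176, is the same polling pattern applied to teardown: the framework waits for the Cluster object to disappear from the management cluster, and here the workload cluster never finished deleting within 30 minutes. Below is a hedged, minimal sketch of that kind of deletion wait, assuming controller-runtime and the Cluster API v1beta1 types; the function and its wiring are illustrative, not the framework's actual code.

```go
package e2esketch

import (
	"context"
	"time"

	"github.com/onsi/gomega"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// waitForClusterDeleted polls the management cluster until a Get for the named
// Cluster returns NotFound, or the timeout elapses and Gomega fails with the
// same "Expected <bool>: false to be true" message seen in the AfterEach above.
func waitForClusterDeleted(ctx context.Context, g gomega.Gomega, c client.Client, namespace, name string) {
	g.Eventually(func() bool {
		cluster := &clusterv1.Cluster{}
		err := c.Get(ctx, client.ObjectKey{Namespace: namespace, Name: name}, cluster)
		return apierrors.IsNotFound(err)
	}, 30*time.Minute, 10*time.Second).Should(gomega.BeTrue(),
		"cluster %s/%s was not deleted in time", namespace, name)
}
```

When a deletion stalls like this, lingering finalizers on the Cluster or its infrastructure objects are a common culprit, and the Cluster API resources dumped above are the usual place to confirm that.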
capz-e2e [It] Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capz-e2e [It] Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e [It] Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e [It] Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e [It] Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [It] Conformance Tests conformance-tests
capz-e2e [It] Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node
capz-e2e [It] Workload cluster creation Creating a VMSS cluster [REQUIRED] with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a cluster that uses the external cloud provider and external azurediskcsi driver [OPTIONAL] with a 1 control plane nodes and 2 worker nodes
capz-e2e [It] Workload cluster creation Creating a cluster that uses the external cloud provider and machinepools [OPTIONAL] with 1 control plane node and 1 machinepool
capz-e2e [It] Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e [It] Workload cluster creation Creating a highly available cluster [REQUIRED] With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a ipv6 control-plane cluster [REQUIRED] With ipv6 worker node
capz-e2e [It] Workload cluster creation Creating a private cluster [OPTIONAL] Creates a public management cluster in a custom vnet
capz-e2e [It] Workload cluster creation Creating an AKS cluster [EXPERIMENTAL][Managed Kubernetes] with a single control plane node and 1 node
capz-e2e [It] Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node
capz-e2e [It] [K8s-Upgrade] Running the CSI migration tests [CSI Migration] Running CSI migration test CSI=external CCM=external AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with out-of-tree cloud provider
capz-e2e [It] [K8s-Upgrade] Running the CSI migration tests [CSI Migration] Running CSI migration test CSI=external CCM=internal AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with intree cloud provider
capz-e2e [It] [K8s-Upgrade] Running the CSI migration tests [CSI Migration] Running CSI migration test CSI=internal CCM=internal AzureDiskCSIMigration=false: upgrade to v1.23 should create volumes dynamically with intree cloud provider
... skipping 785 lines ...
Jan 19 21:33:17.504: INFO: Collecting events for Pod kube-system/kube-apiserver-mhc-remediation-5ymxn6-control-plane-s7x99
Jan 19 21:33:17.501: INFO: Collecting events for Pod kube-system/coredns-57575c5f89-ctz7c
Jan 19 21:33:17.504: INFO: Creating log watcher for controller kube-system/kube-controller-manager-mhc-remediation-5ymxn6-control-plane-s7x99, container kube-controller-manager
Jan 19 21:33:17.505: INFO: Creating log watcher for controller kube-system/kube-proxy-4hxds, container kube-proxy
Jan 19 21:33:17.506: INFO: Collecting events for Pod kube-system/kube-proxy-4hxds
Jan 19 21:33:17.506: INFO: Collecting events for Pod kube-system/csi-azuredisk-controller-545d478dbf-g55j7
Jan 19 21:33:17.570: INFO: Error starting logs stream for pod calico-system/csi-node-driver-pj7wq, container csi-node-driver-registrar: container "csi-node-driver-registrar" in pod "csi-node-driver-pj7wq" is waiting to start: ContainerCreating
Jan 19 21:33:17.571: INFO: Error starting logs stream for pod calico-system/calico-node-zcmp7, container calico-node: container "calico-node" in pod "calico-node-zcmp7" is waiting to start: PodInitializing
Jan 19 21:33:17.581: INFO: Error starting logs stream for pod calico-system/csi-node-driver-pj7wq, container calico-csi: container "calico-csi" in pod "csi-node-driver-pj7wq" is waiting to start: ContainerCreating
Jan 19 21:33:17.584: INFO: Fetching kube-system pod logs took 702.266385ms
Jan 19 21:33:17.584: INFO: Dumping workload cluster mhc-remediation-yujpmj/mhc-remediation-5ymxn6 Azure activity log
Jan 19 21:33:17.584: INFO: Creating log watcher for controller tigera-operator/tigera-operator-65d6bf4d4f-8mfxm, container tigera-operator
Jan 19 21:33:17.584: INFO: Collecting events for Pod tigera-operator/tigera-operator-65d6bf4d4f-8mfxm
Jan 19 21:33:19.871: INFO: Fetching activity logs took 2.28658284s
STEP: Dumping all the Cluster API resources in the "mhc-remediation-yujpmj" namespace @ 01/19/23 21:33:19.871
... skipping 14 lines ...
------------------------------
• [1016.410 seconds]
Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:108
Captured StdOut/StdErr Output >>
2023/01/19 21:23:11 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/self-hosted-fk5l4z-md-0 created
cluster.cluster.x-k8s.io/self-hosted-fk5l4z created
machinedeployment.cluster.x-k8s.io/self-hosted-fk5l4z-md-0 created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/self-hosted-fk5l4z-control-plane created
azurecluster.infrastructure.cluster.x-k8s.io/self-hosted-fk5l4z created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created
... skipping 236 lines ...
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/node-drain-i9w5pm-control-plane created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/node-drain-i9w5pm-md-0 created
felixconfiguration.crd.projectcalico.org/default configured
Failed to get logs for Machine node-drain-i9w5pm-control-plane-n7zvw, Cluster node-drain-dl6m4h/node-drain-i9w5pm: dialing public load balancer at node-drain-i9w5pm-dda2cb04.westus3.cloudapp.azure.com: dial tcp 20.25.170.6:22: connect: connection timed out
<< Captured StdOut/StdErr Output
Timeline >>
INFO: "" started at Thu, 19 Jan 2023 21:23:11 UTC on Ginkgo node 6 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
STEP: Creating a namespace for hosting the "node-drain" test spec @ 01/19/23 21:23:11.301
INFO: Creating namespace node-drain-dl6m4h
... skipping 200 lines ...
configmap/cni-quick-start-s4i3b1-calico-windows created
configmap/csi-proxy-addon created
configmap/containerd-logger-quick-start-s4i3b1 created
felixconfiguration.crd.projectcalico.org/default created
Failed to get logs for Machine quick-start-s4i3b1-md-win-56869465bb-l5jll, Cluster quick-start-14yz1t/quick-start-s4i3b1: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Failed to get logs for Machine quick-start-s4i3b1-md-win-56869465bb-pbjv4, Cluster quick-start-14yz1t/quick-start-s4i3b1: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
<< Captured StdOut/StdErr Output
Timeline >>
INFO: "" started at Thu, 19 Jan 2023 21:23:11 UTC on Ginkgo node 1 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
STEP: Creating a namespace for hosting the "quick-start" test spec @ 01/19/23 21:23:11.137
INFO: Creating namespace quick-start-14yz1t
... skipping 231 lines ...
configmap/cni-md-scale-8udbpz-calico-windows created
configmap/csi-proxy-addon created
configmap/containerd-logger-md-scale-8udbpz created
felixconfiguration.crd.projectcalico.org/default configured
Failed to get logs for Machine md-scale-8udbpz-md-win-9d7f5b5dc-2ql2d, Cluster md-scale-opky54/md-scale-8udbpz: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Failed to get logs for Machine md-scale-8udbpz-md-win-9d7f5b5dc-7wp8w, Cluster md-scale-opky54/md-scale-8udbpz: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
<< Captured StdOut/StdErr Output
Timeline >>
INFO: "" started at Thu, 19 Jan 2023 21:23:11 UTC on Ginkgo node 8 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
STEP: Creating a namespace for hosting the "md-scale" test spec @ 01/19/23 21:23:11.381
INFO: Creating namespace md-scale-opky54
... skipping 383 lines ...
Jan 19 21:57:35.743: INFO: Creating log watcher for controller calico-system/calico-node-windows-5567f, container calico-node-startup
Jan 19 21:57:35.744: INFO: Creating log watcher for controller calico-system/csi-node-driver-b5fjj, container csi-node-driver-registrar
Jan 19 21:57:35.744: INFO: Creating log watcher for controller calico-system/csi-node-driver-hhpmr, container csi-node-driver-registrar
Jan 19 21:57:35.744: INFO: Collecting events for Pod calico-system/csi-node-driver-hhpmr
Jan 19 21:57:35.744: INFO: Creating log watcher for controller calico-system/calico-kube-controllers-594d54f99-qflp7, container calico-kube-controllers
Jan 19 21:57:35.745: INFO: Creating log watcher for controller calico-system/calico-node-lhbkd, container calico-node
Jan 19 21:57:35.809: INFO: Error starting logs stream for pod calico-system/csi-node-driver-hhpmr, container csi-node-driver-registrar: pods "machine-pool-efnoqe-mp-0000002" not found
Jan 19 21:57:35.811: INFO: Error starting logs stream for pod calico-system/calico-node-dpcpc, container calico-node: pods "machine-pool-efnoqe-mp-0000002" not found
Jan 19 21:57:35.811: INFO: Error starting logs stream for pod calico-system/calico-node-windows-5567f, container calico-node-felix: pods "win-p-win000002" not found
Jan 19 21:57:35.833: INFO: Error starting logs stream for pod calico-system/csi-node-driver-hhpmr, container calico-csi: pods "machine-pool-efnoqe-mp-0000002" not found
Jan 19 21:57:35.833: INFO: Error starting logs stream for pod calico-system/calico-node-windows-5567f, container calico-node-startup: pods "win-p-win000002" not found
Jan 19 21:57:35.840: INFO: Creating log watcher for controller kube-system/coredns-57575c5f89-87lt4, container coredns
Jan 19 21:57:35.840: INFO: Collecting events for Pod kube-system/coredns-57575c5f89-jmf5q
Jan 19 21:57:35.840: INFO: Collecting events for Pod kube-system/coredns-57575c5f89-87lt4
Jan 19 21:57:35.840: INFO: Creating log watcher for controller kube-system/coredns-57575c5f89-jmf5q, container coredns
Jan 19 21:57:35.840: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-x6cw2, container node-driver-registrar
Jan 19 21:57:35.840: INFO: Creating log watcher for controller kube-system/kube-controller-manager-machine-pool-efnoqe-control-plane-75pdm, container kube-controller-manager
... skipping 29 lines ...
Jan 19 21:57:35.844: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-m8kqt, container csi-resizer
Jan 19 21:57:35.844: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-kbsh2, container liveness-probe
Jan 19 21:57:35.844: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-m8kqt, container azuredisk
Jan 19 21:57:35.845: INFO: Collecting events for Pod kube-system/csi-azuredisk-controller-545d478dbf-m8kqt
Jan 19 21:57:35.845: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-m8kqt, container csi-attacher
Jan 19 21:57:35.845: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-qh9x8, container liveness-probe
Jan 19 21:57:35.990: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-x6cw2, container node-driver-registrar: pods "machine-pool-efnoqe-mp-0000002" not found
Jan 19 21:57:35.990: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-kbsh2, container liveness-probe: pods "win-p-win000002" not found
Jan 19 21:57:35.990: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-kbsh2, container azuredisk: pods "win-p-win000002" not found
Jan 19 21:57:35.990: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-kbsh2, container node-driver-registrar: pods "win-p-win000002" not found
Jan 19 21:57:35.990: INFO: Error starting logs stream for pod kube-system/kube-proxy-windows-vv788, container kube-proxy: pods "win-p-win000002" not found
Jan 19 21:57:35.990: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-x6cw2, container liveness-probe: pods "machine-pool-efnoqe-mp-0000002" not found
Jan 19 21:57:35.990: INFO: Error starting logs stream for pod kube-system/csi-proxy-v2zjx, container csi-proxy: pods "win-p-win000002" not found
Jan 19 21:57:35.990: INFO: Error starting logs stream for pod kube-system/kube-proxy-nnmm9, container kube-proxy: pods "machine-pool-efnoqe-mp-0000002" not found
Jan 19 21:57:35.990: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-x6cw2, container azuredisk: pods "machine-pool-efnoqe-mp-0000002" not found
Jan 19 21:57:35.990: INFO: Error starting logs stream for pod kube-system/containerd-logger-6sp2j, container containerd-logger: pods "win-p-win000002" not found
Jan 19 21:57:35.991: INFO: Fetching kube-system pod logs took 949.257315ms
Jan 19 21:57:35.991: INFO: Dumping workload cluster machine-pool-2pwnt7/machine-pool-efnoqe Azure activity log
Jan 19 21:57:35.991: INFO: Creating log watcher for controller tigera-operator/tigera-operator-65d6bf4d4f-cpvrs, container tigera-operator
Jan 19 21:57:35.991: INFO: Collecting events for Pod tigera-operator/tigera-operator-65d6bf4d4f-cpvrs
Jan 19 21:57:38.884: INFO: Fetching activity logs took 2.893468251s
STEP: Dumping all the Cluster API resources in the "machine-pool-2pwnt7" namespace @ 01/19/23 21:57:38.884
... skipping 35 lines ...
configmap/cni-md-rollout-yeqwuo-calico-windows created
configmap/csi-proxy-addon created
configmap/containerd-logger-md-rollout-yeqwuo created
felixconfiguration.crd.projectcalico.org/default configured
Failed to get logs for Machine md-rollout-yeqwuo-md-win-6d64964dd7-72n77, Cluster md-rollout-u27nd2/md-rollout-yeqwuo: [dialing from control plane to target node at md-rollou-5wjml: ssh: rejected: connect failed (Temporary failure in name resolution), Unable to collect VM Boot Diagnostic logs: failed to get boot diagnostics data: compute.VirtualMachinesClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ResourceNotFound" Message="The Resource 'Microsoft.Compute/virtualMachines/md-rollou-5wjml' under resource group 'capz-e2e-jvbqb3' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"]
Failed to get logs for Machine md-rollout-yeqwuo-md-win-6d64964dd7-s54jn, Cluster md-rollout-u27nd2/md-rollout-yeqwuo: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Failed to get logs for Machine md-rollout-yeqwuo-md-win-7789c9c8f4-drnj9, Cluster md-rollout-u27nd2/md-rollout-yeqwuo: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
<< Captured StdOut/StdErr Output
Timeline >>
INFO: "" started at Thu, 19 Jan 2023 21:23:11 UTC on Ginkgo node 9 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
STEP: Creating a namespace for hosting the "md-rollout" test spec @ 01/19/23 21:23:11.219
INFO: Creating namespace md-rollout-u27nd2
... skipping 225 lines ...
<< Timeline
------------------------------
[SynchronizedAfterSuite] PASSED [0.000 seconds]
[SynchronizedAfterSuite]
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/e2e_suite_test.go:116
------------------------------
• [FAILED] [4274.186 seconds]
Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck [It] Should successfully trigger KCP remediation
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/mhc_remediations.go:116
Captured StdOut/StdErr Output >>
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/mhc-remediation-ea0ks1-md-0 created
cluster.cluster.x-k8s.io/mhc-remediation-ea0ks1 created
... skipping 104 lines ...
Discovering machine health check resources
Ensuring there is at least 1 Machine that MachineHealthCheck is matching
Patching MachineHealthCheck unhealthy condition to one of the nodes
INFO: Patching the node condition to the node
Waiting for remediation
Waiting until the node with unhealthy node condition is remediated
[FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/machinehealthcheck_helpers.go:168 @ 01/19/23 22:01:57.066
STEP: Dumping logs from the "mhc-remediation-ea0ks1" workload cluster @ 01/19/23 22:01:57.066
Jan 19 22:01:57.066: INFO: Dumping workload cluster mhc-remediation-u1144g/mhc-remediation-ea0ks1 logs
Jan 19 22:01:57.115: INFO: Collecting logs for Linux node mhc-remediation-ea0ks1-control-plane-4dngj in cluster mhc-remediation-ea0ks1 in namespace mhc-remediation-u1144g
Jan 19 22:02:11.253: INFO: Collecting boot logs for AzureMachine mhc-remediation-ea0ks1-control-plane-4dngj
... skipping 106 lines ...
Jan 19 22:02:49.860: INFO: Fetching activity logs took 4.385936556s
STEP: Dumping all the Cluster API resources in the "mhc-remediation-u1144g" namespace @ 01/19/23 22:02:49.86
STEP: Deleting cluster mhc-remediation-u1144g/mhc-remediation-ea0ks1 @ 01/19/23 22:02:50.177
STEP: Deleting cluster mhc-remediation-ea0ks1 @ 01/19/23 22:02:50.193
INFO: Waiting for the Cluster mhc-remediation-u1144g/mhc-remediation-ea0ks1 to be deleted
STEP: Waiting for cluster mhc-remediation-ea0ks1 to be deleted @ 01/19/23 22:02:50.206
[FAILED] in [AfterEach] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_helpers.go:176 @ 01/19/23 22:32:50.208
Jan 19 22:32:50.208: INFO: FAILED!
Jan 19 22:32:50.208: INFO: Cleaning up after "Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation" spec
STEP: Redacting sensitive information from logs @ 01/19/23 22:32:50.208
INFO: "Should successfully trigger KCP remediation" started at Thu, 19 Jan 2023 22:34:25 UTC on Ginkgo node 5 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
<< Timeline
[FAILED] Timed out after 1800.001s.
Expected
    <bool>: false
to be true
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/machinehealthcheck_helpers.go:168 @ 01/19/23 22:01:57.066
Full Stack Trace
... skipping 21 lines ...
[ReportAfterSuite] PASSED [0.011 seconds]
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
------------------------------

Summarizing 1 Failure:
[FAIL] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck [It] Should successfully trigger KCP remediation
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/machinehealthcheck_helpers.go:168

Ran 8 of 26 Specs in 4436.533 seconds
FAIL! -- 7 Passed | 1 Failed | 0 Pending | 18 Skipped

You're using deprecated Ginkgo functionality:
=============================================
CurrentGinkgoTestDescription() is deprecated in Ginkgo V2. Use CurrentSpecReport() instead.
Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-currentginkgotestdescription
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:423
... skipping 29 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:281
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:423

To silence deprecations that can be silenced set the following environment variable:
ACK_GINKGO_DEPRECATIONS=2.6.0

--- FAIL: TestE2E (4435.10s)
FAIL

You're using deprecated Ginkgo functionality:
=============================================
CurrentGinkgoTestDescription() is deprecated in Ginkgo V2. Use CurrentSpecReport() instead.
Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-currentginkgotestdescription
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:423
... skipping 62 lines ...

PASS

Ginkgo ran 1 suite in 1h16m14.794743546s

Test Suite Failed
make[1]: *** [Makefile:655: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:664: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...