Result: FAILURE
Tests: 1 failed / 8 succeeded
Started: 2022-09-10 20:30
Elapsed: 1h32m
Revision: release-1.4

Test Failures


capz-e2e Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster (43m27s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sRunning\sthe\sCluster\sAPI\sE2E\stests\sRunning\sthe\sself\-hosted\sspec\sShould\spivot\sthe\sbootstrap\scluster\sto\sa\sself\-hosted\scluster$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:108
Failed to run clusterctl move
Expected success, but got an error:
    <errors.aggregate | len:3, cap:4>: [
        <*errors.withStack | 0xc001a345a0>{
            error: <*errors.withMessage | 0xc00002ab40>{
                cause: <*errors.withStack | 0xc001a34570>{
                    error: <*errors.withMessage | 0xc00002ab20>{
                        cause: <*errors.StatusError | 0xc00067d860>{
                            ErrStatus: {
                                TypeMeta: {Kind: "Status", APIVersion: "v1"},
                                ListMeta: {
                                    SelfLink: "",
                                    ResourceVersion: "",
                                    Continue: "",
                                    RemainingItemCount: nil,
                                },
                                Status: "Failure",
                                Message: "Internal error occurred: failed calling webhook \"default.azuremachinetemplate.infrastructure.cluster.x-k8s.io\": failed to call webhook: the server could not find the requested resource",
                                Reason: "InternalError",
                                Details: {
                                    Name: "",
                                    Group: "",
                                    Kind: "",
                                    UID: "",
                                    Causes: [
                                        {Type: ..., Message: ..., Field: ...},
                                    ],
                                    RetryAfterSeconds: 0,
                                },
                                Code: 500,
                            },
                        },
                        msg: "error creating \"infrastructure.cluster.x-k8s.io/v1beta1, Kind=AzureMachineTemplate\" self-hosted/self-hosted-cjpdov-control-plane",
                    },
                    stack: [0x2a8096e, 0x2a7ed2c, 0x2a72171, 0x1fffdfb, 0x1fffed7, 0x1fffe59, 0x200079f, 0x2a72045, 0x2a7eaab, 0x2a7ba3c, 0x2a79945, 0x2ac2c6f, 0x2aca919, 0x2f34308, 0x166b731, 0x166b125, 0x166a1bb, 0x16709ea, 0x16703e7, 0x167cba8, 0x167c8c5, 0x167bf65, 0x167e5b2, 0x168b789, 0x168b596, 0x2f4f3ba, 0x1349a62, 0x1285321],
                },
                msg: "action failed after 10 attempts",
            },
            stack: [0x2a720a5, 0x2a7eaab, 0x2a7ba3c, 0x2a79945, 0x2ac2c6f, 0x2aca919, 0x2f34308, 0x166b731, 0x166b125, 0x166a1bb, 0x16709ea, 0x16703e7, 0x167cba8, 0x167c8c5, 0x167bf65, 0x167e5b2, 0x168b789, 0x168b596, 0x2f4f3ba, 0x1349a62, 0x1285321],
        },
        <*errors.withStack | 0xc0001116c8>{
            error: <*errors.withMessage | 0xc001b6b700>{
                cause: <*errors.withStack | 0xc000111698>{
                    error: <*errors.withMessage | 0xc001b6b6e0>{
                        cause: <*errors.StatusError | 0xc002f594a0>{
                            ErrStatus: {
                                TypeMeta: {Kind: "Status", APIVersion: "v1"},
                                ListMeta: {
                                    SelfLink: "",
                                    ResourceVersion: "",
                                    Continue: "",
                                    RemainingItemCount: nil,
                                },
                                Status: "Failure",
                                Message: "Internal error occurred: failed calling webhook \"default.azurecluster.infrastructure.cluster.x-k8s.io\": failed to call webhook: the server could not find the requested resource",
                                Reason: "InternalError",
                                Details: {
                                    Name: "",
                                    Group: "",
                                    Kind: "",
                                    UID: "",
                                    Causes: [
                                        {Type: ..., Message: ..., Field: ...},
                                    ],
                                    RetryAfterSeconds: 0...

Gomega truncated this representation as it exceeds 'format.MaxLength'.
Consider having the object provide a custom 'GomegaStringer' representation
or adjust the parameters in Gomega's 'format' package.

Learn more here: https://onsi.github.io/gomega/#adjusting-output
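For reference, the cap Gomega mentions above is the package variable format.MaxLength. A minimal sketch of raising it from the test suite, assuming a Go e2e package that already vendors gomega (the package name and init placement here are illustrative, not taken from this repository):

    package e2e

    import "github.com/onsi/gomega/format"

    func init() {
        // format.MaxLength caps how many characters Gomega prints when
        // rendering an object in a failure message; setting it to 0 disables
        // truncation, so the full aggregate error would be shown instead of
        // being cut off as it is here.
        format.MaxLength = 0
    }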

    [action failed after 10 attempts: error creating "infrastructure.cluster.x-k8s.io/v1beta1, Kind=AzureMachineTemplate" self-hosted/self-hosted-cjpdov-control-plane: Internal error occurred: failed calling webhook "default.azuremachinetemplate.infrastructure.cluster.x-k8s.io": failed to call webhook: the server could not find the requested resource, action failed after 10 attempts: error creating "infrastructure.cluster.x-k8s.io/v1beta1, Kind=AzureCluster" self-hosted/self-hosted-cjpdov: Internal error occurred: failed calling webhook "default.azurecluster.infrastructure.cluster.x-k8s.io": failed to call webhook: the server could not find the requested resource, action failed after 10 attempts: error creating "infrastructure.cluster.x-k8s.io/v1beta1, Kind=AzureMachineTemplate" self-hosted/self-hosted-cjpdov-md-0: Internal error occurred: failed calling webhook "default.azuremachinetemplate.infrastructure.cluster.x-k8s.io": failed to call webhook: the server could not find the requested resource]
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.4/framework/clusterctl/client.go:322




Error lines from build-log.txt

... skipping 532 lines ...
 ✓ Installing CNI 🔌
 • Installing StorageClass 💾  ...
 ✓ Installing StorageClass 💾
INFO: The kubeconfig file for the kind cluster is /tmp/e2e-kind1490116820
INFO: Loading image: "capzci.azurecr.io/cluster-api-azure-controller-amd64:20220910203100"
INFO: Loading image: "registry.k8s.io/cluster-api/cluster-api-controller:v1.1.4"
INFO: [WARNING] Unable to load image "registry.k8s.io/cluster-api/cluster-api-controller:v1.1.4" into the kind cluster "capz-e2e": error saving image "registry.k8s.io/cluster-api/cluster-api-controller:v1.1.4" to "/tmp/image-tar516946085/image.tar": unable to read image data: Error response from daemon: reference does not exist
INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.1.4"
INFO: [WARNING] Unable to load image "registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.1.4" into the kind cluster "capz-e2e": error saving image "registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.1.4" to "/tmp/image-tar3953389592/image.tar": unable to read image data: Error response from daemon: reference does not exist
INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.1.4"
INFO: [WARNING] Unable to load image "registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.1.4" into the kind cluster "capz-e2e": error saving image "registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.1.4" to "/tmp/image-tar1911245956/image.tar": unable to read image data: Error response from daemon: reference does not exist
STEP: Initializing the bootstrap cluster
INFO: clusterctl init --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure azure
INFO: Waiting for provider controllers to be running
STEP: Waiting for deployment capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager to be available
INFO: Creating log watcher for controller capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-8447dbccc5-qn2vj, container manager
STEP: Waiting for deployment capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager to be available
... skipping 64 lines ...
Sep 10 20:50:00.066: INFO: Collecting boot logs for AzureMachine quick-start-n0i5jr-md-0-xntpk

Sep 10 20:50:00.600: INFO: Collecting logs for Windows node quick-sta-8svs4 in cluster quick-start-n0i5jr in namespace quick-start-j3kkus

Sep 10 20:51:26.220: INFO: Collecting boot logs for AzureMachine quick-start-n0i5jr-md-win-8svs4

Failed to get logs for machine quick-start-n0i5jr-md-win-79444b66fb-68vr2, cluster quick-start-j3kkus/quick-start-n0i5jr: running command "Get-Content "C:\\cni.log"": Process exited with status 1
Sep 10 20:51:26.628: INFO: Collecting logs for Windows node quick-sta-kxwxb in cluster quick-start-n0i5jr in namespace quick-start-j3kkus

Sep 10 20:52:53.957: INFO: Collecting boot logs for AzureMachine quick-start-n0i5jr-md-win-kxwxb

Failed to get logs for machine quick-start-n0i5jr-md-win-79444b66fb-d245j, cluster quick-start-j3kkus/quick-start-n0i5jr: running command "Get-Content "C:\\cni.log"": Process exited with status 1
STEP: Dumping workload cluster quick-start-j3kkus/quick-start-n0i5jr kube-system pod logs
STEP: Creating log watcher for controller kube-system/calico-node-windows-clklz, container calico-node-felix
STEP: Collecting events for Pod kube-system/coredns-78fcd69978-25vgl
STEP: Collecting events for Pod kube-system/kube-proxy-rxp5j
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-dnz8r, container coredns
STEP: Collecting events for Pod kube-system/kube-proxy-windows-4tnrf
STEP: Collecting events for Pod kube-system/kube-apiserver-quick-start-n0i5jr-control-plane-6l7dq
STEP: Collecting events for Pod kube-system/kube-proxy-windows-rnlvs
STEP: Collecting events for Pod kube-system/containerd-logger-wbthx
STEP: Collecting events for Pod kube-system/kube-scheduler-quick-start-n0i5jr-control-plane-6l7dq
STEP: Creating log watcher for controller kube-system/calico-node-fcjwl, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-windows-6kr2r, container calico-node-felix
STEP: failed to find events of Pod "kube-scheduler-quick-start-n0i5jr-control-plane-6l7dq"
STEP: Creating log watcher for controller kube-system/csi-proxy-c2hz8, container csi-proxy
STEP: Collecting events for Pod kube-system/calico-kube-controllers-969cf87c4-fx9w5
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-969cf87c4-fx9w5, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/calico-node-windows-clklz
STEP: Creating log watcher for controller kube-system/calico-node-windows-6kr2r, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-rnlvs, container kube-proxy
... skipping 13 lines ...
STEP: Creating log watcher for controller kube-system/etcd-quick-start-n0i5jr-control-plane-6l7dq, container etcd
STEP: Collecting events for Pod kube-system/kube-controller-manager-quick-start-n0i5jr-control-plane-6l7dq
STEP: Creating log watcher for controller kube-system/kube-proxy-89tn6, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-25vgl, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-rxp5j, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-89tn6
STEP: failed to find events of Pod "kube-apiserver-quick-start-n0i5jr-control-plane-6l7dq"
STEP: Collecting events for Pod kube-system/coredns-78fcd69978-dnz8r
STEP: Collecting events for Pod kube-system/calico-node-fcjwl
STEP: Creating log watcher for controller kube-system/calico-node-windows-clklz, container calico-node-startup
STEP: Collecting events for Pod kube-system/csi-proxy-c2hz8
STEP: failed to find events of Pod "etcd-quick-start-n0i5jr-control-plane-6l7dq"
STEP: Creating log watcher for controller kube-system/containerd-logger-wbthx, container containerd-logger
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-4tnrf, container kube-proxy
STEP: failed to find events of Pod "kube-controller-manager-quick-start-n0i5jr-control-plane-6l7dq"
STEP: Fetching activity logs took 1.38846674s
STEP: Dumping all the Cluster API resources in the "quick-start-j3kkus" namespace
STEP: Deleting cluster quick-start-j3kkus/quick-start-n0i5jr
STEP: Deleting cluster quick-start-n0i5jr
INFO: Waiting for the Cluster quick-start-j3kkus/quick-start-n0i5jr to be deleted
STEP: Waiting for cluster quick-start-n0i5jr to be deleted
STEP: Got error while streaming logs for pod kube-system/containerd-logger-wbthx, container containerd-logger: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-rnlvs, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/csi-proxy-xcjxw, container csi-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-6kr2r, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-fcjwl, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/csi-proxy-c2hz8, container csi-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-quick-start-n0i5jr-control-plane-6l7dq, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-6kr2r, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-quick-start-n0i5jr-control-plane-6l7dq, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-969cf87c4-fx9w5, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-clklz, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-dnz8r, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-4tnrf, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-quick-start-n0i5jr-control-plane-6l7dq, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-89tn6, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-clklz, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/containerd-logger-rh5qp, container containerd-logger: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-quick-start-n0i5jr-control-plane-6l7dq, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-25vgl, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "quick-start" test spec
INFO: Deleting namespace quick-start-j3kkus
STEP: Redacting sensitive information from logs


• [SLOW TEST:1030.006 seconds]
... skipping 67 lines ...
STEP: Creating log watcher for controller kube-system/kube-apiserver-mhc-remediation-m8xkmz-control-plane-8b6cj, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-proxy-q655p
STEP: Fetching kube-system pod logs took 816.41793ms
STEP: Dumping workload cluster mhc-remediation-287fi2/mhc-remediation-m8xkmz Azure activity log
STEP: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-m8xkmz-control-plane-8b6cj, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-apiserver-mhc-remediation-m8xkmz-control-plane-8b6cj
STEP: failed to find events of Pod "kube-apiserver-mhc-remediation-m8xkmz-control-plane-8b6cj"
STEP: Creating log watcher for controller kube-system/kube-controller-manager-mhc-remediation-m8xkmz-control-plane-8b6cj, container kube-controller-manager
STEP: failed to find events of Pod "etcd-mhc-remediation-m8xkmz-control-plane-8b6cj"
STEP: Creating log watcher for controller kube-system/calico-node-kzxxf, container calico-node
STEP: Collecting events for Pod kube-system/kube-controller-manager-mhc-remediation-m8xkmz-control-plane-8b6cj
STEP: failed to find events of Pod "kube-controller-manager-mhc-remediation-m8xkmz-control-plane-8b6cj"
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-5q5pz, container coredns
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-r2skf, container coredns
STEP: Collecting events for Pod kube-system/coredns-78fcd69978-r2skf
STEP: Collecting events for Pod kube-system/kube-scheduler-mhc-remediation-m8xkmz-control-plane-8b6cj
STEP: Creating log watcher for controller kube-system/etcd-mhc-remediation-m8xkmz-control-plane-8b6cj, container etcd
STEP: Collecting events for Pod kube-system/calico-node-st2fv
STEP: Collecting events for Pod kube-system/coredns-78fcd69978-5q5pz
STEP: Creating log watcher for controller kube-system/calico-node-st2fv, container calico-node
STEP: failed to find events of Pod "kube-scheduler-mhc-remediation-m8xkmz-control-plane-8b6cj"
STEP: Collecting events for Pod kube-system/calico-node-kzxxf
STEP: Fetching activity logs took 1.525915141s
STEP: Dumping all the Cluster API resources in the "mhc-remediation-287fi2" namespace
STEP: Deleting cluster mhc-remediation-287fi2/mhc-remediation-m8xkmz
STEP: Deleting cluster mhc-remediation-m8xkmz
INFO: Waiting for the Cluster mhc-remediation-287fi2/mhc-remediation-m8xkmz to be deleted
... skipping 77 lines ...
Sep 10 20:58:53.541: INFO: Collecting boot logs for AzureMachine md-rollout-i2u8kb-md-0-ydqt4f-vdgfd

Sep 10 20:58:54.023: INFO: Collecting logs for Windows node md-rollou-zxpxs in cluster md-rollout-i2u8kb in namespace md-rollout-ca9hub

Sep 10 21:02:12.844: INFO: Collecting boot logs for AzureMachine md-rollout-i2u8kb-md-win-zxpxs

Failed to get logs for machine md-rollout-i2u8kb-md-win-6b7d87fcfd-6dblk, cluster md-rollout-ca9hub/md-rollout-i2u8kb: [dialing from control plane to target node at md-rollou-zxpxs: ssh: rejected: connect failed (Temporary failure in name resolution), Unable to collect VM Boot Diagnostic logs: failed to get boot diagnostics data: compute.VirtualMachinesClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ResourceNotFound" Message="The Resource 'Microsoft.Compute/virtualMachines/md-rollou-zxpxs' under resource group 'capz-e2e-ls00fv' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"]
Sep 10 21:02:13.409: INFO: Collecting logs for Windows node md-rollou-lvv5j in cluster md-rollout-i2u8kb in namespace md-rollout-ca9hub

Sep 10 21:06:26.817: INFO: Collecting boot logs for AzureMachine md-rollout-i2u8kb-md-win-lvv5j

Failed to get logs for machine md-rollout-i2u8kb-md-win-6b7d87fcfd-hqc99, cluster md-rollout-ca9hub/md-rollout-i2u8kb: [dialing from control plane to target node at md-rollou-lvv5j: ssh: rejected: connect failed (Temporary failure in name resolution), Unable to collect VM Boot Diagnostic logs: failed to get boot diagnostics data: compute.VirtualMachinesClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ResourceNotFound" Message="The Resource 'Microsoft.Compute/virtualMachines/md-rollou-lvv5j' under resource group 'capz-e2e-ls00fv' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"]
Sep 10 21:06:27.323: INFO: Collecting logs for Windows node md-rollou-gzrx9 in cluster md-rollout-i2u8kb in namespace md-rollout-ca9hub

Sep 10 21:07:56.280: INFO: Collecting boot logs for AzureMachine md-rollout-i2u8kb-md-win-8b61ym-gzrx9

Failed to get logs for machine md-rollout-i2u8kb-md-win-6fc4f86b5b-qtj9f, cluster md-rollout-ca9hub/md-rollout-i2u8kb: running command "Get-Content "C:\\cni.log"": Process exited with status 1
STEP: Dumping workload cluster md-rollout-ca9hub/md-rollout-i2u8kb kube-system pod logs
STEP: Fetching kube-system pod logs took 1.157631701s
STEP: Dumping workload cluster md-rollout-ca9hub/md-rollout-i2u8kb Azure activity log
STEP: Collecting events for Pod kube-system/coredns-78fcd69978-ldw2m
STEP: Creating log watcher for controller kube-system/calico-node-windows-ztdrb, container calico-node-startup
STEP: Creating log watcher for controller kube-system/containerd-logger-gghc4, container containerd-logger
STEP: Creating log watcher for controller kube-system/calico-node-fm7qb, container calico-node
STEP: Collecting events for Pod kube-system/containerd-logger-gghc4
STEP: Collecting events for Pod kube-system/kube-scheduler-md-rollout-i2u8kb-control-plane-8ww8j
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-ldw2m, container coredns
STEP: failed to find events of Pod "kube-scheduler-md-rollout-i2u8kb-control-plane-8ww8j"
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-glzfv, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-windows-ztdrb, container calico-node-felix
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-969cf87c4-sh6wk, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/calico-node-fm7qb
STEP: Creating log watcher for controller kube-system/calico-node-windows-8lfjk, container calico-node-startup
STEP: Creating log watcher for controller kube-system/csi-proxy-v6l72, container csi-proxy
... skipping 6 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-windows-8lfjk, container calico-node-felix
STEP: Collecting events for Pod kube-system/csi-proxy-v6l72
STEP: Creating log watcher for controller kube-system/etcd-md-rollout-i2u8kb-control-plane-8ww8j, container etcd
STEP: Collecting events for Pod kube-system/calico-node-windows-8lfjk
STEP: Creating log watcher for controller kube-system/csi-proxy-nb7z4, container csi-proxy
STEP: Collecting events for Pod kube-system/etcd-md-rollout-i2u8kb-control-plane-8ww8j
STEP: failed to find events of Pod "etcd-md-rollout-i2u8kb-control-plane-8ww8j"
STEP: Creating log watcher for controller kube-system/kube-apiserver-md-rollout-i2u8kb-control-plane-8ww8j, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-nc2xq, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-windows-nc2xq
STEP: Creating log watcher for controller kube-system/kube-scheduler-md-rollout-i2u8kb-control-plane-8ww8j, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-controller-manager-md-rollout-i2u8kb-control-plane-8ww8j
STEP: failed to find events of Pod "kube-controller-manager-md-rollout-i2u8kb-control-plane-8ww8j"
STEP: Collecting events for Pod kube-system/kube-apiserver-md-rollout-i2u8kb-control-plane-8ww8j
STEP: failed to find events of Pod "kube-apiserver-md-rollout-i2u8kb-control-plane-8ww8j"
STEP: Creating log watcher for controller kube-system/kube-controller-manager-md-rollout-i2u8kb-control-plane-8ww8j, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-proxy-22w6t
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-4wd4p, container kube-proxy
STEP: Collecting events for Pod kube-system/coredns-78fcd69978-glzfv
STEP: Collecting events for Pod kube-system/kube-proxy-windows-4wd4p
STEP: Creating log watcher for controller kube-system/kube-proxy-p8l9l, container kube-proxy
... skipping 3 lines ...
STEP: Fetching activity logs took 2.361239897s
STEP: Dumping all the Cluster API resources in the "md-rollout-ca9hub" namespace
STEP: Deleting cluster md-rollout-ca9hub/md-rollout-i2u8kb
STEP: Deleting cluster md-rollout-i2u8kb
INFO: Waiting for the Cluster md-rollout-ca9hub/md-rollout-i2u8kb to be deleted
STEP: Waiting for cluster md-rollout-i2u8kb to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-8lfjk, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-ldw2m, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-969cf87c4-sh6wk, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-md-rollout-i2u8kb-control-plane-8ww8j, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-4wd4p, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-ztdrb, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/csi-proxy-v6l72, container csi-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-p8l9l, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/containerd-logger-gghc4, container containerd-logger: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-md-rollout-i2u8kb-control-plane-8ww8j, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-glzfv, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-md-rollout-i2u8kb-control-plane-8ww8j, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-8lfjk, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-d58x7, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-nc2xq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/csi-proxy-nb7z4, container csi-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-md-rollout-i2u8kb-control-plane-8ww8j, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-ztdrb, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/containerd-logger-7dsx9, container containerd-logger: http2: client connection lost
STEP: Deleting namespace used for hosting the "md-rollout" test spec
INFO: Deleting namespace md-rollout-ca9hub
STEP: Redacting sensitive information from logs


• [SLOW TEST:2013.793 seconds]
... skipping 45 lines ...

STEP: Dumping workload cluster kcp-adoption-g7m7r2/kcp-adoption-eeaaws kube-system pod logs
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-969cf87c4-lbckv, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/coredns-78fcd69978-gw4kp
STEP: Collecting events for Pod kube-system/etcd-kcp-adoption-eeaaws-control-plane-0
STEP: Creating log watcher for controller kube-system/kube-proxy-pkts5, container kube-proxy
STEP: failed to find events of Pod "etcd-kcp-adoption-eeaaws-control-plane-0"
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-adoption-eeaaws-control-plane-0, container kube-apiserver
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-hxk82, container coredns
STEP: Fetching kube-system pod logs took 775.812438ms
STEP: Dumping workload cluster kcp-adoption-g7m7r2/kcp-adoption-eeaaws Azure activity log
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-adoption-eeaaws-control-plane-0, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-controller-manager-kcp-adoption-eeaaws-control-plane-0
STEP: Collecting events for Pod kube-system/kube-proxy-pkts5
STEP: failed to find events of Pod "kube-controller-manager-kcp-adoption-eeaaws-control-plane-0"
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-adoption-eeaaws-control-plane-0, container kube-scheduler
STEP: Collecting events for Pod kube-system/calico-node-bbbhd
STEP: Collecting events for Pod kube-system/calico-kube-controllers-969cf87c4-lbckv
STEP: Creating log watcher for controller kube-system/calico-node-bbbhd, container calico-node
STEP: Collecting events for Pod kube-system/kube-scheduler-kcp-adoption-eeaaws-control-plane-0
STEP: failed to find events of Pod "kube-scheduler-kcp-adoption-eeaaws-control-plane-0"
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-gw4kp, container coredns
STEP: Collecting events for Pod kube-system/kube-apiserver-kcp-adoption-eeaaws-control-plane-0
STEP: failed to find events of Pod "kube-apiserver-kcp-adoption-eeaaws-control-plane-0"
STEP: Creating log watcher for controller kube-system/etcd-kcp-adoption-eeaaws-control-plane-0, container etcd
STEP: Collecting events for Pod kube-system/coredns-78fcd69978-hxk82
STEP: Fetching activity logs took 1.758769318s
STEP: Dumping all the Cluster API resources in the "kcp-adoption-g7m7r2" namespace
STEP: Deleting cluster kcp-adoption-g7m7r2/kcp-adoption-eeaaws
STEP: Deleting cluster kcp-adoption-eeaaws
INFO: Waiting for the Cluster kcp-adoption-g7m7r2/kcp-adoption-eeaaws to be deleted
STEP: Waiting for cluster kcp-adoption-eeaaws to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-pkts5, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-hxk82, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-adoption-eeaaws-control-plane-0, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-adoption-eeaaws-control-plane-0, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-bbbhd, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-969cf87c4-lbckv, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-adoption-eeaaws-control-plane-0, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-adoption-eeaaws-control-plane-0, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-gw4kp, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-adoption" test spec
INFO: Deleting namespace kcp-adoption-g7m7r2
STEP: Redacting sensitive information from logs


• [SLOW TEST:526.814 seconds]
... skipping 7 lines ...
Running the Cluster API E2E tests Running the self-hosted spec 
  Should pivot the bootstrap cluster to a self-hosted cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:108

STEP: Creating namespace "self-hosted" for hosting the cluster
Sep 10 20:40:44.170: INFO: starting to create namespace for hosting the "self-hosted" test spec
2022/09/10 20:40:44 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-cjpdov" using the "management" template (Kubernetes v1.22.13, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-cjpdov --infrastructure (default) --kubernetes-version v1.22.13 --control-plane-machine-count 1 --worker-machine-count 1 --flavor management
... skipping 54 lines ...
STEP: Collecting events for Pod kube-system/kube-apiserver-self-hosted-cjpdov-control-plane-gv5zn
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-d8kb9, container coredns
STEP: Creating log watcher for controller kube-system/kube-controller-manager-self-hosted-cjpdov-control-plane-gv5zn, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/etcd-self-hosted-cjpdov-control-plane-gv5zn, container etcd
STEP: Dumping workload cluster self-hosted/self-hosted-cjpdov Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-5jjjc, container coredns
STEP: failed to find events of Pod "kube-apiserver-self-hosted-cjpdov-control-plane-gv5zn"
STEP: Creating log watcher for controller kube-system/kube-scheduler-self-hosted-cjpdov-control-plane-gv5zn, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-969cf87c4-cpqbf, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/coredns-78fcd69978-5jjjc
STEP: Collecting events for Pod kube-system/kube-controller-manager-self-hosted-cjpdov-control-plane-gv5zn
STEP: Collecting events for Pod kube-system/etcd-self-hosted-cjpdov-control-plane-gv5zn
STEP: failed to find events of Pod "etcd-self-hosted-cjpdov-control-plane-gv5zn"
STEP: Collecting events for Pod kube-system/calico-node-g67gw
STEP: Collecting events for Pod kube-system/coredns-78fcd69978-d8kb9
STEP: Creating log watcher for controller kube-system/calico-node-g67gw, container calico-node
STEP: Collecting events for Pod kube-system/calico-kube-controllers-969cf87c4-cpqbf
STEP: Collecting events for Pod kube-system/kube-proxy-6tkqj
STEP: Creating log watcher for controller kube-system/kube-proxy-gpjgq, container kube-proxy
STEP: failed to find events of Pod "kube-controller-manager-self-hosted-cjpdov-control-plane-gv5zn"
STEP: Creating log watcher for controller kube-system/kube-proxy-6tkqj, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-self-hosted-cjpdov-control-plane-gv5zn, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-proxy-gpjgq
STEP: Collecting events for Pod kube-system/kube-scheduler-self-hosted-cjpdov-control-plane-gv5zn
STEP: failed to find events of Pod "kube-scheduler-self-hosted-cjpdov-control-plane-gv5zn"
STEP: Creating log watcher for controller kube-system/calico-node-hh4kj, container calico-node
STEP: Fetching activity logs took 1.592304649s
STEP: Dumping all the Cluster API resources in the "self-hosted" namespace
STEP: Deleting all clusters in the self-hosted namespace
STEP: Deleting cluster self-hosted-cjpdov
INFO: Waiting for the Cluster self-hosted/self-hosted-cjpdov to be deleted
... skipping 7 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:44
  Running the self-hosted spec
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:127
    Should pivot the bootstrap cluster to a self-hosted cluster [It]
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:108

    Failed to run clusterctl move
    Expected success, but got an error:
        <errors.aggregate | len:3, cap:4>: [
            <*errors.withStack | 0xc001a345a0>{
                error: <*errors.withMessage | 0xc00002ab40>{
                    cause: <*errors.withStack | 0xc001a34570>{
                        error: <*errors.withMessage | 0xc00002ab20>{
                            cause: <*errors.StatusError | 0xc00067d860>{
                                ErrStatus: {
                                    TypeMeta: {Kind: "Status", APIVersion: "v1"},
                                    ListMeta: {
                                        SelfLink: "",
                                        ResourceVersion: "",
                                        Continue: "",
                                        RemainingItemCount: nil,
                                    },
                                    Status: "Failure",
                                    Message: "Internal error occurred: failed calling webhook \"default.azuremachinetemplate.infrastructure.cluster.x-k8s.io\": failed to call webhook: the server could not find the requested resource",
                                    Reason: "InternalError",
                                    Details: {
                                        Name: "",
                                        Group: "",
                                        Kind: "",
                                        UID: "",
... skipping 2 lines ...
                                        ],
                                        RetryAfterSeconds: 0,
                                    },
                                    Code: 500,
                                },
                            },
                            msg: "error creating \"infrastructure.cluster.x-k8s.io/v1beta1, Kind=AzureMachineTemplate\" self-hosted/self-hosted-cjpdov-control-plane",
                        },
                        stack: [0x2a8096e, 0x2a7ed2c, 0x2a72171, 0x1fffdfb, 0x1fffed7, 0x1fffe59, 0x200079f, 0x2a72045, 0x2a7eaab, 0x2a7ba3c, 0x2a79945, 0x2ac2c6f, 0x2aca919, 0x2f34308, 0x166b731, 0x166b125, 0x166a1bb, 0x16709ea, 0x16703e7, 0x167cba8, 0x167c8c5, 0x167bf65, 0x167e5b2, 0x168b789, 0x168b596, 0x2f4f3ba, 0x1349a62, 0x1285321],
                    },
                    msg: "action failed after 10 attempts",
                },
                stack: [0x2a720a5, 0x2a7eaab, 0x2a7ba3c, 0x2a79945, 0x2ac2c6f, 0x2aca919, 0x2f34308, 0x166b731, 0x166b125, 0x166a1bb, 0x16709ea, 0x16703e7, 0x167cba8, 0x167c8c5, 0x167bf65, 0x167e5b2, 0x168b789, 0x168b596, 0x2f4f3ba, 0x1349a62, 0x1285321],
            },
            <*errors.withStack | 0xc0001116c8>{
                error: <*errors.withMessage | 0xc001b6b700>{
                    cause: <*errors.withStack | 0xc000111698>{
                        error: <*errors.withMessage | 0xc001b6b6e0>{
                            cause: <*errors.StatusError | 0xc002f594a0>{
                                ErrStatus: {
                                    TypeMeta: {Kind: "Status", APIVersion: "v1"},
                                    ListMeta: {
                                        SelfLink: "",
                                        ResourceVersion: "",
                                        Continue: "",
                                        RemainingItemCount: nil,
                                    },
                                    Status: "Failure",
                                    Message: "Internal error occurred: failed calling webhook \"default.azurecluster.infrastructure.cluster.x-k8s.io\": failed to call webhook: the server could not find the requested resource",
                                    Reason: "InternalError",
                                    Details: {
                                        Name: "",
                                        Group: "",
                                        Kind: "",
                                        UID: "",
... skipping 5 lines ...
    Gomega truncated this representation as it exceeds 'format.MaxLength'.
    Consider having the object provide a custom 'GomegaStringer' representation
    or adjust the parameters in Gomega's 'format' package.
    
    Learn more here: https://onsi.github.io/gomega/#adjusting-output
    
        [action failed after 10 attempts: error creating "infrastructure.cluster.x-k8s.io/v1beta1, Kind=AzureMachineTemplate" self-hosted/self-hosted-cjpdov-control-plane: Internal error occurred: failed calling webhook "default.azuremachinetemplate.infrastructure.cluster.x-k8s.io": failed to call webhook: the server could not find the requested resource, action failed after 10 attempts: error creating "infrastructure.cluster.x-k8s.io/v1beta1, Kind=AzureCluster" self-hosted/self-hosted-cjpdov: Internal error occurred: failed calling webhook "default.azurecluster.infrastructure.cluster.x-k8s.io": failed to call webhook: the server could not find the requested resource, action failed after 10 attempts: error creating "infrastructure.cluster.x-k8s.io/v1beta1, Kind=AzureMachineTemplate" self-hosted/self-hosted-cjpdov-md-0: Internal error occurred: failed calling webhook "default.azuremachinetemplate.infrastructure.cluster.x-k8s.io": failed to call webhook: the server could not find the requested resource]

    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.4/framework/clusterctl/client.go:322

    Full Stack Trace
    sigs.k8s.io/cluster-api/test/framework/clusterctl.Move({0x3a73418?, 0xc0000620b0?}, {{0xc000a96f90, 0x22}, {0xc000990731, 0x31}, {0xc000990763, 0x17}, {0xc000ec8d00, 0x1d}, ...})
    	/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.4/framework/clusterctl/client.go:322 +0x4e8
... skipping 254 lines ...
STEP: Collecting events for Pod kube-system/kube-proxy-2dl6f
STEP: Creating log watcher for controller kube-system/kube-proxy-drnfc, container kube-proxy
STEP: Collecting events for Pod kube-system/calico-node-2ljrh
STEP: Creating log watcher for controller kube-system/calico-node-xvcr4, container calico-node
STEP: Collecting events for Pod kube-system/calico-node-ghk7s
STEP: Collecting events for Pod kube-system/kube-proxy-drnfc
STEP: Error starting logs stream for pod kube-system/calico-node-windows-rgtcz, container calico-node-felix: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-bch9v, container calico-node: pods "machine-pool-s3qff6-mp-0000001" not found
STEP: Error starting logs stream for pod kube-system/calico-node-windows-rgtcz, container calico-node-startup: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-4mkk9, container kube-proxy: pods "machine-pool-s3qff6-mp-0000001" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-windows-smtkt, container kube-proxy: pods "win-p-win000002" not found
STEP: Fetching activity logs took 2.241189979s
STEP: Dumping all the Cluster API resources in the "machine-pool-pei65d" namespace
STEP: Deleting cluster machine-pool-pei65d/machine-pool-s3qff6
STEP: Deleting cluster machine-pool-s3qff6
INFO: Waiting for the Cluster machine-pool-pei65d/machine-pool-s3qff6 to be deleted
STEP: Waiting for cluster machine-pool-s3qff6 to be deleted
STEP: Error starting logs stream for pod kube-system/kube-proxy-khklj, container kube-proxy: Get "https://10.1.0.4:10250/containerLogs/kube-system/kube-proxy-khklj/kube-proxy?follow=true": dial tcp 10.1.0.4:10250: i/o timeout
STEP: Error starting logs stream for pod kube-system/calico-node-xvcr4, container calico-node: Get "https://10.1.0.8:10250/containerLogs/kube-system/calico-node-xvcr4/calico-node?follow=true": dial tcp 10.1.0.8:10250: i/o timeout
STEP: Error starting logs stream for pod kube-system/calico-node-2ljrh, container calico-node: Get "https://10.1.0.4:10250/containerLogs/kube-system/calico-node-2ljrh/calico-node?follow=true": dial tcp 10.1.0.4:10250: i/o timeout
STEP: Error starting logs stream for pod kube-system/kube-proxy-2dl6f, container kube-proxy: Get "https://10.1.0.8:10250/containerLogs/kube-system/kube-proxy-2dl6f/kube-proxy?follow=true": dial tcp 10.1.0.8:10250: i/o timeout
STEP: Got error while streaming logs for pod kube-system/kube-proxy-drnfc, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-machine-pool-s3qff6-control-plane-klx82, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-ghk7s, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-ckxrv, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-machine-pool-s3qff6-control-plane-klx82, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-969cf87c4-65m9w, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-nmvr6, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-machine-pool-s3qff6-control-plane-klx82, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-machine-pool-s3qff6-control-plane-klx82, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "machine-pool" test spec
INFO: Deleting namespace machine-pool-pei65d
STEP: Redacting sensitive information from logs


• [SLOW TEST:1242.766 seconds]
... skipping 67 lines ...
Sep 10 21:35:21.804: INFO: Collecting boot logs for AzureMachine md-scale-xvrcbt-md-0-v85b6

Sep 10 21:35:22.618: INFO: Collecting logs for Windows node md-scale-tx2nd in cluster md-scale-xvrcbt in namespace md-scale-qvnl6w

Sep 10 21:36:52.569: INFO: Collecting boot logs for AzureMachine md-scale-xvrcbt-md-win-tx2nd

Failed to get logs for machine md-scale-xvrcbt-md-win-55d77456bf-gr6w8, cluster md-scale-qvnl6w/md-scale-xvrcbt: running command "Get-Content "C:\\cni.log"": Process exited with status 1
Sep 10 21:36:53.663: INFO: Collecting logs for Windows node md-scale-sklhw in cluster md-scale-xvrcbt in namespace md-scale-qvnl6w

Sep 10 21:38:30.242: INFO: Collecting boot logs for AzureMachine md-scale-xvrcbt-md-win-sklhw

Failed to get logs for machine md-scale-xvrcbt-md-win-55d77456bf-xqb6t, cluster md-scale-qvnl6w/md-scale-xvrcbt: running command "Get-Content "C:\\cni.log"": Process exited with status 1
STEP: Dumping workload cluster md-scale-qvnl6w/md-scale-xvrcbt kube-system pod logs
STEP: Fetching kube-system pod logs took 1.090500919s
STEP: Creating log watcher for controller kube-system/calico-node-windows-nf282, container calico-node-felix
STEP: Collecting events for Pod kube-system/csi-proxy-6jmms
STEP: Creating log watcher for controller kube-system/calico-node-dcw9p, container calico-node
STEP: Creating log watcher for controller kube-system/containerd-logger-hjpdj, container containerd-logger
... skipping 12 lines ...
STEP: Collecting events for Pod kube-system/csi-proxy-cngcd
STEP: Creating log watcher for controller kube-system/etcd-md-scale-xvrcbt-control-plane-xkmmv, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-mcvtf, container kube-proxy
STEP: Creating log watcher for controller kube-system/containerd-logger-x92lm, container containerd-logger
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-9m9df, container coredns
STEP: Collecting events for Pod kube-system/etcd-md-scale-xvrcbt-control-plane-xkmmv
STEP: failed to find events of Pod "etcd-md-scale-xvrcbt-control-plane-xkmmv"
STEP: Creating log watcher for controller kube-system/kube-apiserver-md-scale-xvrcbt-control-plane-xkmmv, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-node-windows-nf282, container calico-node-startup
STEP: Collecting events for Pod kube-system/coredns-78fcd69978-9m9df
STEP: Collecting events for Pod kube-system/kube-apiserver-md-scale-xvrcbt-control-plane-xkmmv
STEP: Collecting events for Pod kube-system/calico-kube-controllers-969cf87c4-wqjkp
STEP: failed to find events of Pod "kube-apiserver-md-scale-xvrcbt-control-plane-xkmmv"
STEP: Creating log watcher for controller kube-system/kube-controller-manager-md-scale-xvrcbt-control-plane-xkmmv, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-proxy-windows-mcvtf
STEP: Collecting events for Pod kube-system/calico-node-b7f87
STEP: Collecting events for Pod kube-system/calico-node-windows-nf282
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-nzdbj, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-controller-manager-md-scale-xvrcbt-control-plane-xkmmv
STEP: failed to find events of Pod "kube-controller-manager-md-scale-xvrcbt-control-plane-xkmmv"
STEP: Creating log watcher for controller kube-system/kube-proxy-dt4g9, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-xgmnb
STEP: Creating log watcher for controller kube-system/csi-proxy-6jmms, container csi-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-windows-nzdbj
STEP: Creating log watcher for controller kube-system/kube-proxy-xgmnb, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-scheduler-md-scale-xvrcbt-control-plane-xkmmv
STEP: failed to find events of Pod "kube-scheduler-md-scale-xvrcbt-control-plane-xkmmv"
STEP: Creating log watcher for controller kube-system/kube-scheduler-md-scale-xvrcbt-control-plane-xkmmv, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-969cf87c4-wqjkp, container calico-kube-controllers
STEP: Fetching activity logs took 4.128278668s
STEP: Dumping all the Cluster API resources in the "md-scale-qvnl6w" namespace
STEP: Deleting cluster md-scale-qvnl6w/md-scale-xvrcbt
STEP: Deleting cluster md-scale-xvrcbt
INFO: Waiting for the Cluster md-scale-qvnl6w/md-scale-xvrcbt to be deleted
STEP: Waiting for cluster md-scale-xvrcbt to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-md-scale-xvrcbt-control-plane-xkmmv, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-md-scale-xvrcbt-control-plane-xkmmv, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-md-scale-xvrcbt-control-plane-xkmmv, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-md-scale-xvrcbt-control-plane-xkmmv, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-2cplx, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-dt4g9, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-9m9df, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-969cf87c4-wqjkp, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-dcw9p, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "md-scale" test spec
INFO: Deleting namespace md-scale-qvnl6w
STEP: Redacting sensitive information from logs


• [SLOW TEST:1191.608 seconds]
... skipping 56 lines ...
STEP: Dumping logs from the "node-drain-13gvwz" workload cluster
STEP: Dumping workload cluster node-drain-4j9ubf/node-drain-13gvwz logs
Sep 10 21:49:29.053: INFO: Collecting logs for Linux node node-drain-13gvwz-control-plane-s67pz in cluster node-drain-13gvwz in namespace node-drain-4j9ubf

Sep 10 21:56:03.341: INFO: Collecting boot logs for AzureMachine node-drain-13gvwz-control-plane-s67pz

Failed to get logs for machine node-drain-13gvwz-control-plane-fsdl2, cluster node-drain-4j9ubf/node-drain-13gvwz: dialing public load balancer at node-drain-13gvwz-3981537e.northeurope.cloudapp.azure.com: dial tcp 52.155.162.189:22: connect: connection timed out
STEP: Dumping workload cluster node-drain-4j9ubf/node-drain-13gvwz kube-system pod logs
STEP: Fetching kube-system pod logs took 1.037067412s
STEP: Dumping workload cluster node-drain-4j9ubf/node-drain-13gvwz Azure activity log
STEP: Creating log watcher for controller kube-system/etcd-node-drain-13gvwz-control-plane-s67pz, container etcd
STEP: Collecting events for Pod kube-system/calico-node-vsvzt
STEP: Creating log watcher for controller kube-system/kube-controller-manager-node-drain-13gvwz-control-plane-s67pz, container kube-controller-manager
... skipping 34 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Running the Cluster API E2E tests Running the self-hosted spec [It] Should pivot the bootstrap cluster to a self-hosted cluster 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.4/framework/clusterctl/client.go:322

Ran 9 of 23 Specs in 5124.413 seconds
FAIL! -- 8 Passed | 1 Failed | 0 Pending | 14 Skipped


Ginkgo ran 1 suite in 1h27m6.546756656s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc
make[1]: *** [Makefile:653: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:661: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...