PR nader-ziada: Use AzureClusterIdentity when running CI e2e tests
Result: FAILURE
Tests: 1 failed / 10 succeeded
Started: 2021-06-25 13:57
Elapsed: 1h39m
Revision: 0c79eafd8d0cd5d77d526e8fc5db485418ac01ea
Refs: 1360

Test Failures


capz-e2e Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster 26m35s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sRunning\sthe\sCluster\sAPI\sE2E\stests\sRunning\sthe\sself\-hosted\sspec\sShould\spivot\sthe\sbootstrap\scluster\sto\sa\sself\-hosted\scluster$'
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.0-rc.0/e2e/self_hosted.go:77
Timed out after 1200.001s.
Expected
    <string>: Provisioning
to equal
    <string>: Provisioned
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.0-rc.0/framework/cluster_helpers.go:134
stdout/stderr from junit.e2e_suite.3.xml



10 Passed Tests

10 Skipped Tests

Error lines from build-log.txt

... skipping 564 lines ...
STEP: Dumping logs from the "kcp-upgrade-1a0dt5" workload cluster
STEP: Dumping workload cluster kcp-upgrade-suu38a/kcp-upgrade-1a0dt5 logs
Jun 25 14:18:13.401: INFO: INFO: Collecting logs for node kcp-upgrade-1a0dt5-control-plane-f8cpl in cluster kcp-upgrade-1a0dt5 in namespace kcp-upgrade-suu38a

Jun 25 14:20:23.709: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-1a0dt5-control-plane-f8cpl

Failed to get logs for machine kcp-upgrade-1a0dt5-control-plane-wm7cz, cluster kcp-upgrade-suu38a/kcp-upgrade-1a0dt5: dialing public load balancer at kcp-upgrade-1a0dt5-2d4f03ff.canadacentral.cloudapp.azure.com: dial tcp 20.151.130.229:22: connect: connection timed out
Jun 25 14:20:24.543: INFO: INFO: Collecting logs for node kcp-upgrade-1a0dt5-md-0-bv2tp in cluster kcp-upgrade-1a0dt5 in namespace kcp-upgrade-suu38a

Jun 25 14:22:34.784: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-1a0dt5-md-0-bv2tp

Failed to get logs for machine kcp-upgrade-1a0dt5-md-0-d4566c4-xsskd, cluster kcp-upgrade-suu38a/kcp-upgrade-1a0dt5: dialing public load balancer at kcp-upgrade-1a0dt5-2d4f03ff.canadacentral.cloudapp.azure.com: dial tcp 20.151.130.229:22: connect: connection timed out
STEP: Dumping workload cluster kcp-upgrade-suu38a/kcp-upgrade-1a0dt5 kube-system pod logs
STEP: Fetching kube-system pod logs took 360.811664ms
STEP: Dumping workload cluster kcp-upgrade-suu38a/kcp-upgrade-1a0dt5 Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-66bff467f8-z952j, container coredns
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-8f798d946-zgswv, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-1a0dt5-control-plane-f8cpl, container kube-controller-manager
... skipping 77 lines ...
Jun 25 14:39:47.393: INFO: INFO: Collecting boot logs for AzureMachine md-upgrades-e1ndvn-control-plane-7g9tq

Jun 25 14:39:48.243: INFO: INFO: Collecting logs for node md-upgrades-e1ndvn-md-0-qqcwj in cluster md-upgrades-e1ndvn in namespace md-upgrades-4uhen6

Jun 25 14:39:51.868: INFO: INFO: Collecting boot logs for AzureMachine md-upgrades-e1ndvn-md-0-qqcwj

Failed to get logs for machine md-upgrades-e1ndvn-md-0-6779dc56df-vlxct, cluster md-upgrades-4uhen6/md-upgrades-e1ndvn: dialing from control plane to target node at md-upgrades-e1ndvn-md-0-qqcwj: ssh: rejected: connect failed (Connection refused)
Jun 25 14:39:52.115: INFO: INFO: Collecting logs for node md-upgrades-e1ndvn-md-0-dsf06f-7v5xh in cluster md-upgrades-e1ndvn in namespace md-upgrades-4uhen6

Jun 25 14:39:57.742: INFO: INFO: Collecting boot logs for AzureMachine md-upgrades-e1ndvn-md-0-dsf06f-7v5xh

STEP: Dumping workload cluster md-upgrades-4uhen6/md-upgrades-e1ndvn kube-system pod logs
STEP: Fetching kube-system pod logs took 364.358384ms
... skipping 8 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-4674f, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-66bff467f8-dcn9f, container coredns
STEP: Creating log watcher for controller kube-system/etcd-md-upgrades-e1ndvn-control-plane-7g9tq, container etcd
STEP: Creating log watcher for controller kube-system/kube-scheduler-md-upgrades-e1ndvn-control-plane-7g9tq, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-apiserver-md-upgrades-e1ndvn-control-plane-7g9tq, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-node-pb6mp, container calico-node
STEP: Error starting logs stream for pod kube-system/kube-proxy-4zdtv, container kube-proxy: Get https://10.1.0.5:10250/containerLogs/kube-system/kube-proxy-4zdtv/kube-proxy?follow=true: dial tcp 10.1.0.5:10250: connect: connection refused
STEP: Error starting logs stream for pod kube-system/calico-node-lg5nz, container calico-node: Get https://10.1.0.5:10250/containerLogs/kube-system/calico-node-lg5nz/calico-node?follow=true: dial tcp 10.1.0.5:10250: connect: connection refused
STEP: Fetching activity logs took 574.231694ms
STEP: Dumping all the Cluster API resources in the "md-upgrades-4uhen6" namespace
STEP: Deleting cluster md-upgrades-4uhen6/md-upgrades-e1ndvn
STEP: Deleting cluster md-upgrades-e1ndvn
INFO: Waiting for the Cluster md-upgrades-4uhen6/md-upgrades-e1ndvn to be deleted
STEP: Waiting for cluster md-upgrades-e1ndvn to be deleted
... skipping 60 lines ...
STEP: Dumping logs from the "kcp-upgrade-esp5kv" workload cluster
STEP: Dumping workload cluster kcp-upgrade-9lshz6/kcp-upgrade-esp5kv logs
Jun 25 14:40:54.985: INFO: INFO: Collecting logs for node kcp-upgrade-esp5kv-control-plane-kjk4z in cluster kcp-upgrade-esp5kv in namespace kcp-upgrade-9lshz6

Jun 25 14:43:05.629: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-esp5kv-control-plane-kjk4z

Failed to get logs for machine kcp-upgrade-esp5kv-control-plane-5sh42, cluster kcp-upgrade-9lshz6/kcp-upgrade-esp5kv: dialing public load balancer at kcp-upgrade-esp5kv-fcd29e55.canadacentral.cloudapp.azure.com: dial tcp 20.48.234.242:22: connect: connection timed out
Jun 25 14:43:06.378: INFO: INFO: Collecting logs for node kcp-upgrade-esp5kv-control-plane-wtq2w in cluster kcp-upgrade-esp5kv in namespace kcp-upgrade-9lshz6

Jun 25 14:45:16.701: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-esp5kv-control-plane-wtq2w

Failed to get logs for machine kcp-upgrade-esp5kv-control-plane-bc5k2, cluster kcp-upgrade-9lshz6/kcp-upgrade-esp5kv: dialing public load balancer at kcp-upgrade-esp5kv-fcd29e55.canadacentral.cloudapp.azure.com: dial tcp 20.48.234.242:22: connect: connection timed out
Jun 25 14:45:17.474: INFO: INFO: Collecting logs for node kcp-upgrade-esp5kv-control-plane-nq5vg in cluster kcp-upgrade-esp5kv in namespace kcp-upgrade-9lshz6

Jun 25 14:47:27.776: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-esp5kv-control-plane-nq5vg

Failed to get logs for machine kcp-upgrade-esp5kv-control-plane-w278b, cluster kcp-upgrade-9lshz6/kcp-upgrade-esp5kv: dialing public load balancer at kcp-upgrade-esp5kv-fcd29e55.canadacentral.cloudapp.azure.com: dial tcp 20.48.234.242:22: connect: connection timed out
Jun 25 14:47:28.655: INFO: INFO: Collecting logs for node kcp-upgrade-esp5kv-md-0-2z6gq in cluster kcp-upgrade-esp5kv in namespace kcp-upgrade-9lshz6

Jun 25 14:49:38.848: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-esp5kv-md-0-2z6gq

Failed to get logs for machine kcp-upgrade-esp5kv-md-0-67844f7699-skmrx, cluster kcp-upgrade-9lshz6/kcp-upgrade-esp5kv: dialing public load balancer at kcp-upgrade-esp5kv-fcd29e55.canadacentral.cloudapp.azure.com: dial tcp 20.48.234.242:22: connect: connection timed out
STEP: Dumping workload cluster kcp-upgrade-9lshz6/kcp-upgrade-esp5kv kube-system pod logs
STEP: Fetching kube-system pod logs took 336.607146ms
STEP: Dumping workload cluster kcp-upgrade-9lshz6/kcp-upgrade-esp5kv Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-8768w, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-66bff467f8-q9b8c, container coredns
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-esp5kv-control-plane-wtq2w, container etcd
... skipping 14 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-z225h, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-hqjk9, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-esp5kv-control-plane-nq5vg, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-esp5kv-control-plane-wtq2w, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-esp5kv-control-plane-wtq2w, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-esp5kv-control-plane-kjk4z, container kube-controller-manager
STEP: Got error while iterating over activity logs for resource group capz-e2e-ytzmy9: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000962323s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-9lshz6" namespace
STEP: Deleting cluster kcp-upgrade-9lshz6/kcp-upgrade-esp5kv
STEP: Deleting cluster kcp-upgrade-esp5kv
INFO: Waiting for the Cluster kcp-upgrade-9lshz6/kcp-upgrade-esp5kv to be deleted
STEP: Waiting for cluster kcp-upgrade-esp5kv to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-gwchg, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-esp5kv-control-plane-wtq2w, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vfgd6, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-esp5kv-control-plane-kjk4z, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-esp5kv-control-plane-nq5vg, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-esp5kv-control-plane-kjk4z, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-8f798d946-4pd22, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-esp5kv-control-plane-nq5vg, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-esp5kv-control-plane-wtq2w, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-xq94m, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-esp5kv-control-plane-nq5vg, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-esp5kv-control-plane-kjk4z, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-esp5kv-control-plane-wtq2w, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-66bff467f8-q9b8c, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vnpvb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-esp5kv-control-plane-nq5vg, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-esp5kv-control-plane-wtq2w, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-hqjk9, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-esp5kv-control-plane-kjk4z, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-66bff467f8-8z6q8, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-z225h, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-jtfvr, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-8768w, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-9lshz6
STEP: Redacting sensitive information from logs


• [SLOW TEST:3130.945 seconds]
... skipping 91 lines ...
STEP: Creating log watcher for controller kube-system/coredns-66bff467f8-rb9tm, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-n2ttl, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-qc0jsy-control-plane-zmwvj, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-7hdxk, container kube-proxy
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-qc0jsy-control-plane-268r4, container etcd
STEP: Creating log watcher for controller kube-system/calico-node-d7w7m, container calico-node
STEP: Got error while iterating over activity logs for resource group capz-e2e-8j6em3: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001194472s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-hhp223" namespace
STEP: Deleting cluster kcp-upgrade-hhp223/kcp-upgrade-qc0jsy
STEP: Deleting cluster kcp-upgrade-qc0jsy
INFO: Waiting for the Cluster kcp-upgrade-hhp223/kcp-upgrade-qc0jsy to be deleted
STEP: Waiting for cluster kcp-upgrade-qc0jsy to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-qc0jsy-control-plane-268r4, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-qc0jsy-control-plane-68r9n, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-8f798d946-xn54d, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-n2ttl, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-qc0jsy-control-plane-zmwvj, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-qc0jsy-control-plane-68r9n, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-qc0jsy-control-plane-zmwvj, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-d7w7m, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-qc0jsy-control-plane-268r4, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-66bff467f8-slv2x, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-crvcc, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-qc0jsy-control-plane-268r4, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-qc0jsy-control-plane-zmwvj, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-66bff467f8-rb9tm, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-tvnzj, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-qc0jsy-control-plane-zmwvj, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-qc0jsy-control-plane-68r9n, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-b86ds, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-5zbg5, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-qc0jsy-control-plane-68r9n, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-qc0jsy-control-plane-268r4, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-7hdxk, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-2ntk9, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-hhp223
STEP: Redacting sensitive information from logs


• [SLOW TEST:2615.084 seconds]
... skipping 74 lines ...
STEP: Fetching activity logs took 546.866151ms
STEP: Dumping all the Cluster API resources in the "mhc-remediation-13k2yg" namespace
STEP: Deleting cluster mhc-remediation-13k2yg/mhc-remediation-bew0cn
STEP: Deleting cluster mhc-remediation-bew0cn
INFO: Waiting for the Cluster mhc-remediation-13k2yg/mhc-remediation-bew0cn to be deleted
STEP: Waiting for cluster mhc-remediation-bew0cn to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-xxxrs, container calico-node: http2: server sent GOAWAY and closed the connection; LastStreamID=2361, ErrCode=NO_ERROR, debug=""
STEP: Got error while streaming logs for pod kube-system/kube-proxy-cd774, container kube-proxy: http2: server sent GOAWAY and closed the connection; LastStreamID=2361, ErrCode=NO_ERROR, debug=""
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
INFO: Deleting namespace mhc-remediation-13k2yg
STEP: Redacting sensitive information from logs


• [SLOW TEST:948.899 seconds]
... skipping 274 lines ...
STEP: Creating log watcher for controller kube-system/coredns-f9fd979d6-g4gmz, container coredns
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-adoption-e4v37g-control-plane-0, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-jgsgb, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-adoption-e4v37g-control-plane-0, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-s9qf4, container calico-node
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-adoption-e4v37g-control-plane-0, container kube-apiserver
STEP: Error starting logs stream for pod kube-system/calico-kube-controllers-5dc564d9d5-tkjtn, container calico-kube-controllers: container "calico-kube-controllers" in pod "calico-kube-controllers-5dc564d9d5-tkjtn" is waiting to start: ContainerCreating
STEP: Fetching activity logs took 587.984042ms
STEP: Dumping all the Cluster API resources in the "kcp-adoption-m73uzl" namespace
STEP: Deleting cluster kcp-adoption-m73uzl/kcp-adoption-e4v37g
STEP: Deleting cluster kcp-adoption-e4v37g
INFO: Waiting for the Cluster kcp-adoption-m73uzl/kcp-adoption-e4v37g to be deleted
STEP: Waiting for cluster kcp-adoption-e4v37g to be deleted
... skipping 104 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-87ff2, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-mxflq, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-machine-pool-nxbg7u-control-plane-w8wcl, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-node-s2xrz, container calico-node
STEP: Creating log watcher for controller kube-system/kube-scheduler-machine-pool-nxbg7u-control-plane-w8wcl, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-bzth6, container kube-proxy
STEP: Error starting logs stream for pod kube-system/calico-node-87ff2, container calico-node: pods "machine-pool-nxbg7u-mp-0000002" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-pxc6g, container kube-proxy: pods "machine-pool-nxbg7u-mp-0000002" not found
STEP: Fetching activity logs took 717.890155ms
STEP: Dumping all the Cluster API resources in the "machine-pool-e37bmo" namespace
STEP: Deleting cluster machine-pool-e37bmo/machine-pool-nxbg7u
STEP: Deleting cluster machine-pool-nxbg7u
INFO: Waiting for the Cluster machine-pool-e37bmo/machine-pool-nxbg7u to be deleted
STEP: Waiting for cluster machine-pool-nxbg7u to be deleted
... skipping 77 lines ...
STEP: Creating log watcher for controller kube-system/coredns-f9fd979d6-68xst, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-2prsw, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-559kf, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-md-scale-hsj6ox-control-plane-7jjj2, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-5sg5q, container calico-node
STEP: Creating log watcher for controller kube-system/kube-controller-manager-md-scale-hsj6ox-control-plane-7jjj2, container kube-controller-manager
STEP: Error starting logs stream for pod kube-system/kube-proxy-559kf, container kube-proxy: pods "md-scale-hsj6ox-md-0-srnh9" not found
STEP: Error starting logs stream for pod kube-system/calico-node-5sg5q, container calico-node: pods "md-scale-hsj6ox-md-0-986nm" not found
STEP: Error starting logs stream for pod kube-system/calico-node-2prsw, container calico-node: pods "md-scale-hsj6ox-md-0-srnh9" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-92nwk, container kube-proxy: pods "md-scale-hsj6ox-md-0-986nm" not found
STEP: Fetching activity logs took 611.94394ms
STEP: Dumping all the Cluster API resources in the "md-scale-abh72g" namespace
STEP: Deleting cluster md-scale-abh72g/md-scale-hsj6ox
STEP: Deleting cluster md-scale-hsj6ox
INFO: Waiting for the Cluster md-scale-abh72g/md-scale-hsj6ox to be deleted
STEP: Waiting for cluster md-scale-hsj6ox to be deleted
... skipping 13 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Running the Cluster API E2E tests Running the self-hosted spec [It] Should pivot the bootstrap cluster to a self-hosted cluster 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.0-rc.0/framework/cluster_helpers.go:134

Ran 11 of 21 Specs in 5583.947 seconds
FAIL! -- 10 Passed | 1 Failed | 0 Pending | 10 Skipped


Ginkgo ran 1 suite in 1h34m29.627770064s
Test Suite Failed
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...