Result: FAILURE
Tests: 1 failed / 8 succeeded
Started: 2022-09-16 20:33
Elapsed: 1h27m
Revision: release-1.4

Test Failures


capz-e2e Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster (45m1s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sRunning\sthe\sCluster\sAPI\sE2E\stests\sRunning\sthe\sself\-hosted\sspec\sShould\spivot\sthe\sbootstrap\scluster\sto\sa\sself\-hosted\scluster$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:108
Timed out after 1500.003s.
Expected
    <int>: 0
to equal
    <int>: 1
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.4/framework/machinedeployment_helpers.go:129
stdout/stderr captured in junit.e2e_suite.2.xml
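
The failing assertion originates in the cluster-api test framework helper at machinedeployment_helpers.go:129. As a rough, hypothetical sketch only (not the framework's actual code), the "Timed out after 1500.003s ... Expected <int>: 0 to equal <int>: 1" output is the shape Gomega's Eventually prints when a polled count never reaches the expected value; all names below are illustrative:

package sketch_test

import (
	"testing"
	"time"

	. "github.com/onsi/gomega"
)

// Illustrative stand-in for the kind of polling assertion that produced the
// failure above; the polled count intentionally never progresses, so
// Eventually times out with "Expected <int>: 0 to equal <int>: 1".
func TestWaitForWorkerNodesSketch(t *testing.T) {
	g := NewWithT(t)

	desiredReplicas := 1 // hypothetical MachineDeployment .spec.replicas

	readyNodes := func() int {
		// The real helper would count ready Machines/Nodes through a
		// controller-runtime client; this sketch simply returns 0.
		return 0
	}

	// Timeout shortened for the sketch; the e2e job polled for 1500s.
	g.Eventually(readyNodes, 10*time.Second, 1*time.Second).Should(Equal(desiredReplicas))
}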




Error lines from build-log.txt

... skipping 523 lines ...
 ✓ Installing CNI 🔌
 • Installing StorageClass 💾  ...
 ✓ Installing StorageClass 💾
INFO: The kubeconfig file for the kind cluster is /tmp/e2e-kind678931553
INFO: Loading image: "capzci.azurecr.io/cluster-api-azure-controller-amd64:20220916203346"
INFO: Loading image: "registry.k8s.io/cluster-api/cluster-api-controller:v1.1.4"
INFO: [WARNING] Unable to load image "registry.k8s.io/cluster-api/cluster-api-controller:v1.1.4" into the kind cluster "capz-e2e": error saving image "registry.k8s.io/cluster-api/cluster-api-controller:v1.1.4" to "/tmp/image-tar2499140145/image.tar": unable to read image data: Error response from daemon: reference does not exist
INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.1.4"
INFO: [WARNING] Unable to load image "registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.1.4" into the kind cluster "capz-e2e": error saving image "registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.1.4" to "/tmp/image-tar1002301926/image.tar": unable to read image data: Error response from daemon: reference does not exist
INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.1.4"
INFO: [WARNING] Unable to load image "registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.1.4" into the kind cluster "capz-e2e": error saving image "registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.1.4" to "/tmp/image-tar2599715741/image.tar": unable to read image data: Error response from daemon: reference does not exist
STEP: Initializing the bootstrap cluster
INFO: clusterctl init --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure azure
INFO: Waiting for provider controllers to be running
STEP: Waiting for deployment capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager to be available
INFO: Creating log watcher for controller capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-8447dbccc5-ccw48, container manager
STEP: Waiting for deployment capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager to be available
... skipping 71 lines ...
Sep 16 20:55:48.064: INFO: Collecting boot logs for AzureMachine quick-start-cwjm86-md-0-7q2zp

Sep 16 20:55:49.372: INFO: Collecting logs for Windows node quick-sta-cw28l in cluster quick-start-cwjm86 in namespace quick-start-3zuuh6

Sep 16 20:58:26.163: INFO: Collecting boot logs for AzureMachine quick-start-cwjm86-md-win-cw28l

Failed to get logs for machine quick-start-cwjm86-md-win-77dc5bd855-jbbwr, cluster quick-start-3zuuh6/quick-start-cwjm86: running command "Get-Content "C:\\cni.log"": Process exited with status 1
Sep 16 20:58:27.350: INFO: Collecting logs for Windows node quick-sta-wdq2d in cluster quick-start-cwjm86 in namespace quick-start-3zuuh6

Sep 16 20:59:55.445: INFO: Collecting boot logs for AzureMachine quick-start-cwjm86-md-win-wdq2d

Failed to get logs for machine quick-start-cwjm86-md-win-77dc5bd855-ssq7s, cluster quick-start-3zuuh6/quick-start-cwjm86: running command "Get-Content "C:\\cni.log"": Process exited with status 1
STEP: Dumping workload cluster quick-start-3zuuh6/quick-start-cwjm86 kube-system pod logs
STEP: Fetching kube-system pod logs took 3.23136596s
STEP: Dumping workload cluster quick-start-3zuuh6/quick-start-cwjm86 Azure activity log
STEP: Collecting events for Pod kube-system/kube-controller-manager-quick-start-cwjm86-control-plane-gjh7b
STEP: Creating log watcher for controller kube-system/calico-node-windows-q2v76, container calico-node-startup
STEP: Collecting events for Pod kube-system/etcd-quick-start-cwjm86-control-plane-gjh7b
STEP: Collecting events for Pod kube-system/kube-scheduler-quick-start-cwjm86-control-plane-gjh7b
STEP: Creating log watcher for controller kube-system/calico-node-windows-2lqvr, container calico-node-felix
STEP: Creating log watcher for controller kube-system/calico-node-trlfc, container calico-node
STEP: failed to find events of Pod "kube-scheduler-quick-start-cwjm86-control-plane-gjh7b"
STEP: Creating log watcher for controller kube-system/calico-node-mn2lc, container calico-node
STEP: failed to find events of Pod "etcd-quick-start-cwjm86-control-plane-gjh7b"
STEP: Collecting events for Pod kube-system/calico-node-mn2lc
STEP: Collecting events for Pod kube-system/calico-kube-controllers-969cf87c4-qzp6z
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-969cf87c4-qzp6z, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/calico-node-windows-2lqvr, container calico-node-startup
STEP: Collecting events for Pod kube-system/calico-node-trlfc
STEP: Creating log watcher for controller kube-system/containerd-logger-md5tl, container containerd-logger
... skipping 8 lines ...
STEP: Collecting events for Pod kube-system/csi-proxy-mq7b2
STEP: Collecting events for Pod kube-system/containerd-logger-md5tl
STEP: Collecting events for Pod kube-system/kube-proxy-b9ns8
STEP: Creating log watcher for controller kube-system/kube-proxy-b9ns8, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-quick-start-cwjm86-control-plane-gjh7b, container kube-apiserver
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-c76lz, container coredns
STEP: failed to find events of Pod "kube-controller-manager-quick-start-cwjm86-control-plane-gjh7b"
STEP: Creating log watcher for controller kube-system/kube-proxy-h8svp, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-apiserver-quick-start-cwjm86-control-plane-gjh7b
STEP: failed to find events of Pod "kube-apiserver-quick-start-cwjm86-control-plane-gjh7b"
STEP: Creating log watcher for controller kube-system/kube-controller-manager-quick-start-cwjm86-control-plane-gjh7b, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-ptmqk, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-h8svp
STEP: Collecting events for Pod kube-system/kube-proxy-windows-plcx9
STEP: Collecting events for Pod kube-system/kube-proxy-windows-ptmqk
STEP: Creating log watcher for controller kube-system/kube-scheduler-quick-start-cwjm86-control-plane-gjh7b, container kube-scheduler
... skipping 87 lines ...
Sep 16 21:06:22.297: INFO: Collecting boot logs for AzureMachine md-rollout-6z7p4g-md-0-ehfusq-qbc5l

Sep 16 21:06:23.002: INFO: Collecting logs for Windows node md-rollou-ml577 in cluster md-rollout-6z7p4g in namespace md-rollout-h88fg7

Sep 16 21:07:52.934: INFO: Collecting boot logs for AzureMachine md-rollout-6z7p4g-md-win-ml577

Failed to get logs for machine md-rollout-6z7p4g-md-win-68c5f468cc-gcnmw, cluster md-rollout-h88fg7/md-rollout-6z7p4g: running command "Get-Content "C:\\cni.log"": Process exited with status 1
Failed to get logs for machine md-rollout-6z7p4g-md-win-68c5f468cc-k7kjq, cluster md-rollout-h88fg7/md-rollout-6z7p4g: azuremachines.infrastructure.cluster.x-k8s.io "md-rollout-6z7p4g-md-win-jhs9r" not found
Sep 16 21:07:54.117: INFO: Collecting logs for Windows node md-rollou-7zbcr in cluster md-rollout-6z7p4g in namespace md-rollout-h88fg7

Sep 16 21:10:54.851: INFO: Collecting boot logs for AzureMachine md-rollout-6z7p4g-md-win-bph62d-7zbcr

Failed to get logs for machine md-rollout-6z7p4g-md-win-77b7c8c978-qxvdc, cluster md-rollout-h88fg7/md-rollout-6z7p4g: running command "Get-Content "C:\\cni.log"": Process exited with status 1
STEP: Dumping workload cluster md-rollout-h88fg7/md-rollout-6z7p4g kube-system pod logs
STEP: Collecting events for Pod kube-system/calico-node-hbqqh
STEP: Creating log watcher for controller kube-system/calico-node-windows-glnqw, container calico-node-startup
STEP: Collecting events for Pod kube-system/csi-proxy-h6jpz
STEP: Creating log watcher for controller kube-system/containerd-logger-2qmcj, container containerd-logger
STEP: Fetching kube-system pod logs took 1.035701105s
... skipping 40 lines ...
STEP: Collecting events for Pod kube-system/kube-apiserver-md-rollout-6z7p4g-control-plane-j52rf
STEP: Collecting events for Pod kube-system/coredns-78fcd69978-g96f6
STEP: Creating log watcher for controller kube-system/calico-node-windows-ljmgs, container calico-node-felix
STEP: Creating log watcher for controller kube-system/csi-proxy-wgs4l, container csi-proxy
STEP: Creating log watcher for controller kube-system/containerd-logger-7znml, container containerd-logger
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-g96f6, container coredns
STEP: Error starting logs stream for pod kube-system/containerd-logger-9bdfm, container containerd-logger: container "containerd-logger" in pod "containerd-logger-9bdfm" is waiting to start: trying and failing to pull image
STEP: Error starting logs stream for pod kube-system/calico-node-windows-ljmgs, container calico-node-startup: container "calico-node-startup" in pod "calico-node-windows-ljmgs" is waiting to start: PodInitializing
STEP: Error starting logs stream for pod kube-system/csi-proxy-h6jpz, container csi-proxy: container "csi-proxy" in pod "csi-proxy-h6jpz" is waiting to start: ContainerCreating
STEP: Error starting logs stream for pod kube-system/calico-node-windows-ljmgs, container calico-node-felix: container "calico-node-felix" in pod "calico-node-windows-ljmgs" is waiting to start: PodInitializing
STEP: Got error while iterating over activity logs for resource group capz-e2e-v7pvvt: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001134547s
STEP: Dumping all the Cluster API resources in the "md-rollout-h88fg7" namespace
STEP: Deleting cluster md-rollout-h88fg7/md-rollout-6z7p4g
STEP: Deleting cluster md-rollout-6z7p4g
INFO: Waiting for the Cluster md-rollout-h88fg7/md-rollout-6z7p4g to be deleted
STEP: Waiting for cluster md-rollout-6z7p4g to be deleted
STEP: Got error while streaming logs for pod kube-system/csi-proxy-wgs4l, container csi-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-xdc8x, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-md-rollout-6z7p4g-control-plane-j52rf, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-c9bfk, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-m84kx, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/containerd-logger-7znml, container containerd-logger: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-g96f6, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-md-rollout-6z7p4g-control-plane-j52rf, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-zmqlm, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-m84kx, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-md-rollout-6z7p4g-control-plane-j52rf, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-wdt8q, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-w9vgz, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-glnqw, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/csi-proxy-pgtn4, container csi-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/containerd-logger-2qmcj, container containerd-logger: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-md-rollout-6z7p4g-control-plane-j52rf, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-hvtvh, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-glnqw, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-969cf87c4-xfrks, container calico-kube-controllers: http2: client connection lost
STEP: Deleting namespace used for hosting the "md-rollout" test spec
INFO: Deleting namespace md-rollout-h88fg7
STEP: Redacting sensitive information from logs


• [SLOW TEST:1915.847 seconds]
... skipping 146 lines ...
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-969cf87c4-bztsn, container calico-kube-controllers
STEP: Fetching kube-system pod logs took 762.667459ms
STEP: Dumping workload cluster kcp-adoption-k18xh4/kcp-adoption-r6nuve Azure activity log
STEP: Collecting events for Pod kube-system/coredns-78fcd69978-fc874
STEP: Creating log watcher for controller kube-system/etcd-kcp-adoption-r6nuve-control-plane-0, container etcd
STEP: Collecting events for Pod kube-system/kube-controller-manager-kcp-adoption-r6nuve-control-plane-0
STEP: failed to find events of Pod "kube-controller-manager-kcp-adoption-r6nuve-control-plane-0"
STEP: Collecting events for Pod kube-system/etcd-kcp-adoption-r6nuve-control-plane-0
STEP: failed to find events of Pod "etcd-kcp-adoption-r6nuve-control-plane-0"
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-adoption-r6nuve-control-plane-0, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-adoption-r6nuve-control-plane-0, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-apiserver-kcp-adoption-r6nuve-control-plane-0
STEP: Collecting events for Pod kube-system/kube-scheduler-kcp-adoption-r6nuve-control-plane-0
STEP: failed to find events of Pod "kube-apiserver-kcp-adoption-r6nuve-control-plane-0"
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-adoption-r6nuve-control-plane-0, container kube-controller-manager
STEP: failed to find events of Pod "kube-scheduler-kcp-adoption-r6nuve-control-plane-0"
STEP: Creating log watcher for controller kube-system/kube-proxy-czs9s, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-czs9s
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-68sdx, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-2dlbz, container calico-node
STEP: Collecting events for Pod kube-system/calico-node-2dlbz
STEP: Collecting events for Pod kube-system/coredns-78fcd69978-68sdx
... skipping 19 lines ...
Running the Cluster API E2E tests Running the self-hosted spec 
  Should pivot the bootstrap cluster to a self-hosted cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:108

STEP: Creating namespace "self-hosted" for hosting the cluster
Sep 16 20:44:50.020: INFO: starting to create namespace for hosting the "self-hosted" test spec
2022/09/16 20:44:50 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-zt7uj1" using the "management" template (Kubernetes v1.22.13, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-zt7uj1 --infrastructure (default) --kubernetes-version v1.22.13 --control-plane-machine-count 1 --worker-machine-count 1 --flavor management
... skipping 232 lines ...
STEP: Fetching activity logs took 526.264711ms
STEP: Dumping all the Cluster API resources in the "mhc-remediation-mumlzt" namespace
STEP: Deleting cluster mhc-remediation-mumlzt/mhc-remediation-s83cuj
STEP: Deleting cluster mhc-remediation-s83cuj
INFO: Waiting for the Cluster mhc-remediation-mumlzt/mhc-remediation-s83cuj to be deleted
STEP: Waiting for cluster mhc-remediation-s83cuj to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-969cf87c4-q4d4r, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-s83cuj-control-plane-h2cdx, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-s83cuj-control-plane-sp2nt, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-s83cuj-control-plane-sp2nt, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-s83cuj-control-plane-h2cdx, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-d7znc, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-wwm6c, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-s83cuj-control-plane-h2cdx, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-s83cuj-control-plane-wnxdb, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-s83cuj-control-plane-h2cdx, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-s83cuj-control-plane-sp2nt, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-hpxgb, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-s83cuj-control-plane-wnxdb, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-ld4gj, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-s83cuj-control-plane-wnxdb, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-hqb82, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-vgz65, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-t4gck, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-s83cuj-control-plane-sp2nt, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-shx2h, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-s83cuj-control-plane-wnxdb, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
INFO: Deleting namespace mhc-remediation-mumlzt
STEP: Redacting sensitive information from logs


• [SLOW TEST:1206.870 seconds]
... skipping 65 lines ...
Sep 16 21:41:13.210: INFO: Collecting boot logs for AzureMachine md-scale-mqwmqi-md-0-8bvkh

Sep 16 21:41:14.084: INFO: Collecting logs for Windows node md-scale-ktkcv in cluster md-scale-mqwmqi in namespace md-scale-fmza0x

Sep 16 21:42:48.411: INFO: Collecting boot logs for AzureMachine md-scale-mqwmqi-md-win-ktkcv

Failed to get logs for machine md-scale-mqwmqi-md-win-69d97b78fb-5prhx, cluster md-scale-fmza0x/md-scale-mqwmqi: running command "Get-Content "C:\\cni.log"": Process exited with status 1
Sep 16 21:42:49.580: INFO: Collecting logs for Windows node md-scale-l2hw7 in cluster md-scale-mqwmqi in namespace md-scale-fmza0x

Sep 16 21:44:19.884: INFO: Collecting boot logs for AzureMachine md-scale-mqwmqi-md-win-l2hw7

Failed to get logs for machine md-scale-mqwmqi-md-win-69d97b78fb-f47fm, cluster md-scale-fmza0x/md-scale-mqwmqi: running command "Get-Content "C:\\cni.log"": Process exited with status 1
STEP: Dumping workload cluster md-scale-fmza0x/md-scale-mqwmqi kube-system pod logs
STEP: Fetching kube-system pod logs took 4.428090419s
STEP: Collecting events for Pod kube-system/kube-apiserver-md-scale-mqwmqi-control-plane-f9m2h
STEP: Creating log watcher for controller kube-system/calico-node-windows-52mmz, container calico-node-felix
STEP: Creating log watcher for controller kube-system/kube-proxy-cw446, container kube-proxy
STEP: Collecting events for Pod kube-system/calico-node-windows-vjhvp
STEP: Creating log watcher for controller kube-system/etcd-md-scale-mqwmqi-control-plane-f9m2h, container etcd
STEP: Collecting events for Pod kube-system/etcd-md-scale-mqwmqi-control-plane-f9m2h
STEP: Collecting events for Pod kube-system/containerd-logger-zj77f
STEP: Collecting events for Pod kube-system/calico-node-windows-52mmz
STEP: failed to find events of Pod "etcd-md-scale-mqwmqi-control-plane-f9m2h"
STEP: Creating log watcher for controller kube-system/calico-node-windows-vjhvp, container calico-node-startup
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-l6jlv, container coredns
STEP: Collecting events for Pod kube-system/kube-scheduler-md-scale-mqwmqi-control-plane-f9m2h
STEP: failed to find events of Pod "kube-scheduler-md-scale-mqwmqi-control-plane-f9m2h"
STEP: Creating log watcher for controller kube-system/kube-proxy-4j4vf, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-cw446
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-4qr2w, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-b89x7, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-windows-vjhvp, container calico-node-felix
STEP: Creating log watcher for controller kube-system/calico-node-2rl6r, container calico-node
STEP: Creating log watcher for controller kube-system/csi-proxy-pvr6r, container csi-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-md-scale-mqwmqi-control-plane-f9m2h, container kube-apiserver
STEP: Collecting events for Pod kube-system/coredns-78fcd69978-4qr2w
STEP: Collecting events for Pod kube-system/kube-controller-manager-md-scale-mqwmqi-control-plane-f9m2h
STEP: Collecting events for Pod kube-system/csi-proxy-pvr6r
STEP: Creating log watcher for controller kube-system/kube-controller-manager-md-scale-mqwmqi-control-plane-f9m2h, container kube-controller-manager
STEP: failed to find events of Pod "kube-controller-manager-md-scale-mqwmqi-control-plane-f9m2h"
STEP: Collecting events for Pod kube-system/csi-proxy-dpjfn
STEP: Collecting events for Pod kube-system/calico-kube-controllers-969cf87c4-ph2m2
STEP: Collecting events for Pod kube-system/kube-proxy-4j4vf
STEP: Collecting events for Pod kube-system/calico-node-l5b45
STEP: Collecting events for Pod kube-system/calico-node-2rl6r
STEP: Creating log watcher for controller kube-system/calico-node-l5b45, container calico-node
... skipping 12 lines ...
STEP: Fetching activity logs took 592.44148ms
STEP: Dumping all the Cluster API resources in the "md-scale-fmza0x" namespace
STEP: Deleting cluster md-scale-fmza0x/md-scale-mqwmqi
STEP: Deleting cluster md-scale-mqwmqi
INFO: Waiting for the Cluster md-scale-fmza0x/md-scale-mqwmqi to be deleted
STEP: Waiting for cluster md-scale-mqwmqi to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-cw446, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-4qr2w, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-l6jlv, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-vjhvp, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-52mmz, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-2rl6r, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-md-scale-mqwmqi-control-plane-f9m2h, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-969cf87c4-ph2m2, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/containerd-logger-zj77f, container containerd-logger: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/csi-proxy-dpjfn, container csi-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-b89x7, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/csi-proxy-pvr6r, container csi-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-md-scale-mqwmqi-control-plane-f9m2h, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-52mmz, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-b9j4n, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-md-scale-mqwmqi-control-plane-f9m2h, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-vjhvp, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/containerd-logger-wl268, container containerd-logger: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-md-scale-mqwmqi-control-plane-f9m2h, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "md-scale" test spec
INFO: Deleting namespace md-scale-fmza0x
STEP: Redacting sensitive information from logs


• [SLOW TEST:1325.652 seconds]
... skipping 77 lines ...
STEP: Collecting events for Pod kube-system/coredns-78fcd69978-dnrh5
STEP: Creating log watcher for controller kube-system/kube-controller-manager-machine-pool-a8wqrs-control-plane-8xtzm, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-proxy-6nd5p
STEP: Creating log watcher for controller kube-system/calico-node-mbktn, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-drxwd, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-scheduler-machine-pool-a8wqrs-control-plane-8xtzm
STEP: failed to find events of Pod "kube-scheduler-machine-pool-a8wqrs-control-plane-8xtzm"
STEP: Creating log watcher for controller kube-system/calico-node-sn66k, container calico-node
STEP: Collecting events for Pod kube-system/kube-controller-manager-machine-pool-a8wqrs-control-plane-8xtzm
STEP: failed to find events of Pod "kube-controller-manager-machine-pool-a8wqrs-control-plane-8xtzm"
STEP: Creating log watcher for controller kube-system/etcd-machine-pool-a8wqrs-control-plane-8xtzm, container etcd
STEP: Collecting events for Pod kube-system/calico-kube-controllers-969cf87c4-lc5jf
STEP: Creating log watcher for controller kube-system/calico-node-z276v, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-5rvjp, container calico-node
STEP: Collecting events for Pod kube-system/etcd-machine-pool-a8wqrs-control-plane-8xtzm
STEP: failed to find events of Pod "etcd-machine-pool-a8wqrs-control-plane-8xtzm"
STEP: Creating log watcher for controller kube-system/kube-apiserver-machine-pool-a8wqrs-control-plane-8xtzm, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-n94cb, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-ts5jr, container coredns
STEP: Collecting events for Pod kube-system/calico-node-sn66k
STEP: Collecting events for Pod kube-system/calico-node-5rvjp
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-969cf87c4-lc5jf, container calico-kube-controllers
... skipping 6 lines ...
STEP: Fetching activity logs took 493.420835ms
STEP: Dumping all the Cluster API resources in the "machine-pool-ysz08j" namespace
STEP: Deleting cluster machine-pool-ysz08j/machine-pool-a8wqrs
STEP: Deleting cluster machine-pool-a8wqrs
INFO: Waiting for the Cluster machine-pool-ysz08j/machine-pool-a8wqrs to be deleted
STEP: Waiting for cluster machine-pool-a8wqrs to be deleted
STEP: Error starting logs stream for pod kube-system/calico-node-mbktn, container calico-node: Get "https://10.1.0.8:10250/containerLogs/kube-system/calico-node-mbktn/calico-node?follow=true": dial tcp 10.1.0.8:10250: i/o timeout
STEP: Error starting logs stream for pod kube-system/kube-proxy-swcmr, container kube-proxy: Get "https://10.1.0.4:10250/containerLogs/kube-system/kube-proxy-swcmr/kube-proxy?follow=true": dial tcp 10.1.0.4:10250: i/o timeout
STEP: Error starting logs stream for pod kube-system/calico-node-5rvjp, container calico-node: Get "https://10.1.0.5:10250/containerLogs/kube-system/calico-node-5rvjp/calico-node?follow=true": dial tcp 10.1.0.5:10250: i/o timeout
STEP: Error starting logs stream for pod kube-system/calico-node-z276v, container calico-node: Get "https://10.1.0.4:10250/containerLogs/kube-system/calico-node-z276v/calico-node?follow=true": dial tcp 10.1.0.4:10250: i/o timeout
STEP: Error starting logs stream for pod kube-system/kube-proxy-drxwd, container kube-proxy: Get "https://10.1.0.8:10250/containerLogs/kube-system/kube-proxy-drxwd/kube-proxy?follow=true": dial tcp 10.1.0.8:10250: i/o timeout
STEP: Error starting logs stream for pod kube-system/kube-proxy-n94cb, container kube-proxy: Get "https://10.1.0.5:10250/containerLogs/kube-system/kube-proxy-n94cb/kube-proxy?follow=true": dial tcp 10.1.0.5:10250: i/o timeout
STEP: Got error while streaming logs for pod kube-system/calico-node-sn66k, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-dnrh5, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-machine-pool-a8wqrs-control-plane-8xtzm, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-969cf87c4-lc5jf, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-machine-pool-a8wqrs-control-plane-8xtzm, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-machine-pool-a8wqrs-control-plane-8xtzm, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-ts5jr, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-machine-pool-a8wqrs-control-plane-8xtzm, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-6nd5p, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "machine-pool" test spec
INFO: Deleting namespace machine-pool-ysz08j
STEP: Redacting sensitive information from logs


• [SLOW TEST:1432.238 seconds]
... skipping 89 lines ...
STEP: Collecting events for Pod kube-system/kube-scheduler-node-drain-ypzu0a-control-plane-mpx2z
STEP: Collecting events for Pod kube-system/coredns-78fcd69978-qpj59
STEP: Collecting events for Pod kube-system/kube-scheduler-node-drain-ypzu0a-control-plane-knqs7
STEP: Collecting events for Pod kube-system/kube-proxy-f42pw
STEP: Creating log watcher for controller kube-system/kube-scheduler-node-drain-ypzu0a-control-plane-mpx2z, container kube-scheduler
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-qpj59, container coredns
STEP: Error starting logs stream for pod kube-system/etcd-node-drain-ypzu0a-control-plane-mpx2z, container etcd: pods "node-drain-ypzu0a-control-plane-mpx2z" not found
STEP: Error starting logs stream for pod kube-system/kube-scheduler-node-drain-ypzu0a-control-plane-mpx2z, container kube-scheduler: pods "node-drain-ypzu0a-control-plane-mpx2z" not found
STEP: Error starting logs stream for pod kube-system/calico-node-tw4kk, container calico-node: pods "node-drain-ypzu0a-control-plane-mpx2z" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-f42pw, container kube-proxy: pods "node-drain-ypzu0a-control-plane-mpx2z" not found
STEP: Error starting logs stream for pod kube-system/kube-apiserver-node-drain-ypzu0a-control-plane-mpx2z, container kube-apiserver: pods "node-drain-ypzu0a-control-plane-mpx2z" not found
STEP: Error starting logs stream for pod kube-system/kube-controller-manager-node-drain-ypzu0a-control-plane-mpx2z, container kube-controller-manager: pods "node-drain-ypzu0a-control-plane-mpx2z" not found
STEP: Fetching activity logs took 740.659917ms
STEP: Dumping all the Cluster API resources in the "node-drain-glw2zx" namespace
STEP: Deleting cluster node-drain-glw2zx/node-drain-ypzu0a
STEP: Deleting cluster node-drain-ypzu0a
INFO: Waiting for the Cluster node-drain-glw2zx/node-drain-ypzu0a to be deleted
STEP: Waiting for cluster node-drain-ypzu0a to be deleted
... skipping 13 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Running the Cluster API E2E tests Running the self-hosted spec [It] Should pivot the bootstrap cluster to a self-hosted cluster 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.4/framework/machinedeployment_helpers.go:129

Ran 9 of 23 Specs in 4745.626 seconds
FAIL! -- 8 Passed | 1 Failed | 0 Pending | 14 Skipped


Ginkgo ran 1 suite in 1h21m46.472737332s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc
make[1]: *** [Makefile:653: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:661: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...