PR shysank: v1alpha4 -> v1beta1 clusterctl upgrade test
Result FAILURE
Tests 1 failed / 13 succeeded
Started 2021-11-15 23:36
Elapsed 2h14m
Revision 73c13e69ead4360d127aee7f64ac13472e6c31b5
Refs 1810

Test Failures


capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec in a HA cluster Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd 28m6s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sRunning\sthe\sCluster\sAPI\sE2E\stests\sRunning\sthe\sKCP\supgrade\sspec\sin\sa\sHA\scluster\sShould\ssuccessfully\supgrade\sKubernetes\,\sDNS\,\skube\-proxy\,\sand\setcd$'
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.1/e2e/kcp_upgrade.go:75
Timed out after 1200.001s.
Expected
    <int>: 0
to equal
    <int>: 1
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.1/framework/machinedeployment_helpers.go:121
				
Click to see stdout/stderr from junit.e2e_suite.3.xml
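For context, the "Timed out after 1200.001s. Expected <int>: 0 to equal <int>: 1" output above is the shape Gomega prints when an Eventually poll on a count never reaches the expected value before the timeout. A minimal sketch of that pattern follows; the function name and polling intervals are illustrative assumptions, not the actual code at machinedeployment_helpers.go:121.

package e2e_sketch_test

import (
	"testing"
	"time"

	. "github.com/onsi/gomega"
)

// countReadyMachines is a hypothetical stand-in for whatever the framework
// polls here (for example, ready worker machines backing a MachineDeployment).
func countReadyMachines() int { return 0 }

func TestWaitForReplicas(t *testing.T) {
	g := NewWithT(t)
	// If the count never reaches 1 within the 20-minute timeout, Gomega fails
	// with a message of the form seen above:
	// "Timed out after 1200.001s. Expected <int>: 0 to equal <int>: 1".
	g.Eventually(countReadyMachines, 20*time.Minute, 10*time.Second).Should(Equal(1))
}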



Show 13 Passed Tests

Show 10 Skipped Tests

Error lines from build-log.txt

... skipping 477 lines ...
Nov 15 23:49:19.998: INFO: INFO: Collecting boot logs for AzureMachine quick-start-sea37b-md-0-fh278

Nov 15 23:49:20.315: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster quick-start-sea37b in namespace quick-start-c426pq

Nov 15 23:49:43.961: INFO: INFO: Collecting boot logs for AzureMachine quick-start-sea37b-md-win-dbvgf

Failed to get logs for machine quick-start-sea37b-md-win-856cb78b58-c9s5h, cluster quick-start-c426pq/quick-start-sea37b: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov 15 23:49:44.217: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster quick-start-sea37b in namespace quick-start-c426pq

Nov 15 23:50:08.143: INFO: INFO: Collecting boot logs for AzureMachine quick-start-sea37b-md-win-nh2t8

Failed to get logs for machine quick-start-sea37b-md-win-856cb78b58-v4mgm, cluster quick-start-c426pq/quick-start-sea37b: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster quick-start-c426pq/quick-start-sea37b kube-system pod logs
STEP: Fetching kube-system pod logs took 430.923651ms
STEP: Dumping workload cluster quick-start-c426pq/quick-start-sea37b Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-bwm65, container calico-node
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-llrm4, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-qg4sd, container coredns
... skipping 8 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-96zrd, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-5xrqr, container coredns
STEP: Creating log watcher for controller kube-system/kube-controller-manager-quick-start-sea37b-control-plane-jvm8m, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-k5lzm, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-quick-start-sea37b-control-plane-jvm8m, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-kbrkz, container kube-proxy
STEP: Error starting logs stream for pod kube-system/calico-node-windows-flqzw, container calico-node-felix: container "calico-node-felix" in pod "calico-node-windows-flqzw" is waiting to start: PodInitializing
STEP: Error starting logs stream for pod kube-system/calico-node-windows-flqzw, container calico-node-startup: container "calico-node-startup" in pod "calico-node-windows-flqzw" is waiting to start: PodInitializing
STEP: Error starting logs stream for pod kube-system/kube-proxy-windows-96zrd, container kube-proxy: container "kube-proxy" in pod "kube-proxy-windows-96zrd" is waiting to start: ContainerCreating
STEP: Error starting logs stream for pod kube-system/calico-node-windows-qg8c5, container calico-node-startup: container "calico-node-startup" in pod "calico-node-windows-qg8c5" is waiting to start: PodInitializing
STEP: Error starting logs stream for pod kube-system/calico-node-windows-qg8c5, container calico-node-felix: container "calico-node-felix" in pod "calico-node-windows-qg8c5" is waiting to start: PodInitializing
STEP: Fetching activity logs took 543.371009ms
STEP: Dumping all the Cluster API resources in the "quick-start-c426pq" namespace
STEP: Deleting cluster quick-start-c426pq/quick-start-sea37b
STEP: Deleting cluster quick-start-sea37b
INFO: Waiting for the Cluster quick-start-c426pq/quick-start-sea37b to be deleted
STEP: Waiting for cluster quick-start-sea37b to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-quick-start-sea37b-control-plane-jvm8m, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-qj4bg, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-quick-start-sea37b-control-plane-jvm8m, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-kbrkz, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-qg4sd, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-quick-start-sea37b-control-plane-jvm8m, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-quick-start-sea37b-control-plane-jvm8m, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-bwm65, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-5xrqr, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-cvh59, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-k5lzm, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-llrm4, container calico-kube-controllers: http2: client connection lost
STEP: Deleting namespace used for hosting the "quick-start" test spec
INFO: Deleting namespace quick-start-c426pq
STEP: Redacting sensitive information from logs


• [SLOW TEST:755.855 seconds]
... skipping 170 lines ...
STEP: Dumping logs from the "kcp-upgrade-kg853a" workload cluster
STEP: Dumping workload cluster kcp-upgrade-vgix6c/kcp-upgrade-kg853a logs
Nov 15 23:57:56.412: INFO: INFO: Collecting logs for node kcp-upgrade-kg853a-control-plane-h4x9j in cluster kcp-upgrade-kg853a in namespace kcp-upgrade-vgix6c

Nov 16 00:00:06.575: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-kg853a-control-plane-h4x9j

Failed to get logs for machine kcp-upgrade-kg853a-control-plane-vtwqx, cluster kcp-upgrade-vgix6c/kcp-upgrade-kg853a: dialing public load balancer at kcp-upgrade-kg853a-7682eba7.eastus2.cloudapp.azure.com: dial tcp 20.75.62.139:22: connect: connection timed out
Nov 16 00:00:07.560: INFO: INFO: Collecting logs for node kcp-upgrade-kg853a-md-0-fd76w in cluster kcp-upgrade-kg853a in namespace kcp-upgrade-vgix6c

Nov 16 00:02:17.643: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-kg853a-md-0-fd76w

Failed to get logs for machine kcp-upgrade-kg853a-md-0-54f496b-thcbc, cluster kcp-upgrade-vgix6c/kcp-upgrade-kg853a: dialing public load balancer at kcp-upgrade-kg853a-7682eba7.eastus2.cloudapp.azure.com: dial tcp 20.75.62.139:22: connect: connection timed out
Nov 16 00:02:18.517: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster kcp-upgrade-kg853a in namespace kcp-upgrade-vgix6c

Nov 16 00:08:50.859: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-kg853a-md-win-6s2gz

Failed to get logs for machine kcp-upgrade-kg853a-md-win-7c99dd7957-2w6j7, cluster kcp-upgrade-vgix6c/kcp-upgrade-kg853a: dialing public load balancer at kcp-upgrade-kg853a-7682eba7.eastus2.cloudapp.azure.com: dial tcp 20.75.62.139:22: connect: connection timed out
Nov 16 00:08:51.627: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster kcp-upgrade-kg853a in namespace kcp-upgrade-vgix6c

Nov 16 00:15:24.075: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-kg853a-md-win-ns7vr

Failed to get logs for machine kcp-upgrade-kg853a-md-win-7c99dd7957-gjvht, cluster kcp-upgrade-vgix6c/kcp-upgrade-kg853a: dialing public load balancer at kcp-upgrade-kg853a-7682eba7.eastus2.cloudapp.azure.com: dial tcp 20.75.62.139:22: connect: connection timed out
STEP: Dumping workload cluster kcp-upgrade-vgix6c/kcp-upgrade-kg853a kube-system pod logs
STEP: Fetching kube-system pod logs took 389.428767ms
STEP: Creating log watcher for controller kube-system/calico-node-shf29, container calico-node
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-kg853a-control-plane-h4x9j, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-vk64d, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-kg853a-control-plane-h4x9j, container kube-controller-manager
... skipping 8 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-windows-kbn7g, container calico-node-felix
STEP: Creating log watcher for controller kube-system/calico-node-windows-x6mkb, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-node-windows-x6mkb, container calico-node-felix
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-4jhwx, container coredns
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-g5dnn, container coredns
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-kg853a-control-plane-h4x9j, container etcd
STEP: Got error while iterating over activity logs for resource group capz-e2e-1lt2zl: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000304498s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-vgix6c" namespace
STEP: Deleting cluster kcp-upgrade-vgix6c/kcp-upgrade-kg853a
STEP: Deleting cluster kcp-upgrade-kg853a
INFO: Waiting for the Cluster kcp-upgrade-vgix6c/kcp-upgrade-kg853a to be deleted
STEP: Waiting for cluster kcp-upgrade-kg853a to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-kbn7g, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-kbn7g, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-t96nw, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-2959z, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-x6mkb, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-shf29, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-sc2v8, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-x6mkb, container calico-node-felix: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-vgix6c
STEP: Redacting sensitive information from logs


• [SLOW TEST:2320.086 seconds]
... skipping 91 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-flr9x, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-96qpt, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-tw4d7, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-pf6pc, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-rjmxd, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-7ekju7-control-plane-tr75r, container etcd
STEP: Got error while iterating over activity logs for resource group capz-e2e-x3lg3y: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000856239s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-fhd7ww" namespace
STEP: Deleting cluster kcp-upgrade-fhd7ww/kcp-upgrade-7ekju7
STEP: Deleting cluster kcp-upgrade-7ekju7
INFO: Waiting for the Cluster kcp-upgrade-fhd7ww/kcp-upgrade-7ekju7 to be deleted
STEP: Waiting for cluster kcp-upgrade-7ekju7 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-7ekju7-control-plane-tr75r, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-7ekju7-control-plane-g2zqs, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-7ekju7-control-plane-tr75r, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-cczdh, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-96qpt, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-7ekju7-control-plane-g2zqs, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-pf6pc, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-ddb97, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-7ekju7-control-plane-tr75r, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-rjmxd, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-flr9x, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-tw4d7, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-xdjx6, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-7ekju7-control-plane-tr75r, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-7ekju7-control-plane-g2zqs, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-7ekju7-control-plane-g2zqs, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-zl84r, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-fhd7ww
STEP: Redacting sensitive information from logs


• [SLOW TEST:2093.024 seconds]
... skipping 7 lines ...
Running the Cluster API E2E tests Running the self-hosted spec 
  Should pivot the bootstrap cluster to a self-hosted cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:107

STEP: Creating namespace "self-hosted" for hosting the cluster
Nov 16 00:21:52.670: INFO: starting to create namespace for hosting the "self-hosted" test spec
2021/11/16 00:21:52 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-kzeymv" using the "management" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-kzeymv --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 1 --worker-machine-count 1 --flavor management
... skipping 73 lines ...
STEP: Fetching activity logs took 474.871121ms
STEP: Dumping all the Cluster API resources in the "self-hosted" namespace
STEP: Deleting all clusters in the self-hosted namespace
STEP: Deleting cluster self-hosted-kzeymv
INFO: Waiting for the Cluster self-hosted/self-hosted-kzeymv to be deleted
STEP: Waiting for cluster self-hosted-kzeymv to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-79lg2, container calico-node: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-865c969d7-nncxb, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-649d794c-wh4zn, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-6bdc78c4d4-z24hv, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-6xmtm, container kube-proxy: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-86b5f554dd-4sthj, container manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "self-hosted" test spec
INFO: Deleting namespace self-hosted
STEP: Checking if any resources are left over in Azure for spec "self-hosted"
STEP: Redacting sensitive information from logs
STEP: Redacting sensitive information from logs

... skipping 68 lines ...
Nov 16 00:22:10.230: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-p5lfh3-md-0-vic11w-tq787

Nov 16 00:22:10.546: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster md-rollout-p5lfh3 in namespace md-rollout-sdzptw

Nov 16 00:23:05.941: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-p5lfh3-md-win-69mlk

Failed to get logs for machine md-rollout-p5lfh3-md-win-5f576b8646-2pw9d, cluster md-rollout-sdzptw/md-rollout-p5lfh3: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov 16 00:23:06.203: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster md-rollout-p5lfh3 in namespace md-rollout-sdzptw

Nov 16 00:24:09.936: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-p5lfh3-md-win-hbnhx

Failed to get logs for machine md-rollout-p5lfh3-md-win-5f576b8646-dxl82, cluster md-rollout-sdzptw/md-rollout-p5lfh3: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov 16 00:24:10.450: INFO: INFO: Collecting logs for node 10.1.0.8 in cluster md-rollout-p5lfh3 in namespace md-rollout-sdzptw

Nov 16 00:24:30.811: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-p5lfh3-md-win-7te2xb-rz2qc

Failed to get logs for machine md-rollout-p5lfh3-md-win-cc674d84c-mhwwz, cluster md-rollout-sdzptw/md-rollout-p5lfh3: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster md-rollout-sdzptw/md-rollout-p5lfh3 kube-system pod logs
STEP: Fetching kube-system pod logs took 403.635472ms
STEP: Creating log watcher for controller kube-system/calico-node-vrwk9, container calico-node
STEP: Creating log watcher for controller kube-system/kube-controller-manager-md-rollout-p5lfh3-control-plane-v7m9t, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-apiserver-md-rollout-p5lfh3-control-plane-v7m9t, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-node-windows-hzzwv, container calico-node-felix
... skipping 17 lines ...
STEP: Fetching activity logs took 614.385281ms
STEP: Dumping all the Cluster API resources in the "md-rollout-sdzptw" namespace
STEP: Deleting cluster md-rollout-sdzptw/md-rollout-p5lfh3
STEP: Deleting cluster md-rollout-p5lfh3
INFO: Waiting for the Cluster md-rollout-sdzptw/md-rollout-p5lfh3 to be deleted
STEP: Waiting for cluster md-rollout-p5lfh3 to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-hzzwv, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-dhctn, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-dhctn, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-c8bds, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-82zt7, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-hzzwv, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-pxl7t, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-f8cl9, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-f8cl9, container calico-node-startup: http2: client connection lost
STEP: Deleting namespace used for hosting the "md-rollout" test spec
INFO: Deleting namespace md-rollout-sdzptw
STEP: Redacting sensitive information from logs


• [SLOW TEST:1650.890 seconds]
... skipping 68 lines ...
STEP: Dumping workload cluster mhc-remediation-vs7mnu/mhc-remediation-kmv9o9 Azure activity log
STEP: Creating log watcher for controller kube-system/etcd-mhc-remediation-kmv9o9-control-plane-cwtzw, container etcd
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-5xch7, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-kmszx, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-hpjxp, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-pl7q2, container kube-proxy
STEP: Error starting logs stream for pod kube-system/calico-node-dxchp, container calico-node: container "calico-node" in pod "calico-node-dxchp" is waiting to start: PodInitializing
STEP: Fetching activity logs took 602.97861ms
STEP: Dumping all the Cluster API resources in the "mhc-remediation-vs7mnu" namespace
STEP: Deleting cluster mhc-remediation-vs7mnu/mhc-remediation-kmv9o9
STEP: Deleting cluster mhc-remediation-kmv9o9
INFO: Waiting for the Cluster mhc-remediation-vs7mnu/mhc-remediation-kmv9o9 to be deleted
STEP: Waiting for cluster mhc-remediation-kmv9o9 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-kmv9o9-control-plane-cwtzw, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-6thln, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-5xch7, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-kmv9o9-control-plane-cwtzw, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-kmv9o9-control-plane-cwtzw, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-kmv9o9-control-plane-cwtzw, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-tcbfz, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-hpjxp, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-kmszx, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
INFO: Deleting namespace mhc-remediation-vs7mnu
STEP: Redacting sensitive information from logs


• [SLOW TEST:1033.184 seconds]
... skipping 58 lines ...
STEP: Fetching activity logs took 493.583882ms
STEP: Dumping all the Cluster API resources in the "kcp-adoption-3drmze" namespace
STEP: Deleting cluster kcp-adoption-3drmze/kcp-adoption-shdapa
STEP: Deleting cluster kcp-adoption-shdapa
INFO: Waiting for the Cluster kcp-adoption-3drmze/kcp-adoption-shdapa to be deleted
STEP: Waiting for cluster kcp-adoption-shdapa to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-9rrdd, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-2dclm, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-qbthn, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-adoption-shdapa-control-plane-0, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-bqvzl, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-adoption-shdapa-control-plane-0, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-adoption-shdapa-control-plane-0, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-adoption-shdapa-control-plane-0, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-kbdjh, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-adoption" test spec
INFO: Deleting namespace kcp-adoption-3drmze
STEP: Redacting sensitive information from logs


• [SLOW TEST:620.881 seconds]
... skipping 168 lines ...
Nov 16 01:10:00.525: INFO: INFO: Collecting boot logs for AzureMachine machine-pool-3betdz-control-plane-7pqzr

Nov 16 01:10:01.491: INFO: INFO: Collecting logs for node win-p-win000002 in cluster machine-pool-3betdz in namespace machine-pool-uzyjlg

Nov 16 01:10:16.591: INFO: INFO: Collecting boot logs for VMSS instance 2 of scale set machine-pool-3betdz-mp-0

Failed to get logs for machine pool machine-pool-3betdz-mp-0, cluster machine-pool-uzyjlg/machine-pool-3betdz: [running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1]
Nov 16 01:10:16.993: INFO: INFO: Collecting logs for node win-p-win000002 in cluster machine-pool-3betdz in namespace machine-pool-uzyjlg

Nov 16 01:10:50.243: INFO: INFO: Collecting boot logs for VMSS instance 2 of scale set win-p-win

Failed to get logs for machine pool machine-pool-3betdz-mp-win, cluster machine-pool-uzyjlg/machine-pool-3betdz: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster machine-pool-uzyjlg/machine-pool-3betdz kube-system pod logs
STEP: Fetching kube-system pod logs took 369.567012ms
STEP: Creating log watcher for controller kube-system/calico-node-5zrrf, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-windows-ndpvp, container calico-node-felix
STEP: Creating log watcher for controller kube-system/kube-scheduler-machine-pool-3betdz-control-plane-7pqzr, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-8h4kt, container calico-node
... skipping 11 lines ...
STEP: Fetching activity logs took 648.711022ms
STEP: Dumping all the Cluster API resources in the "machine-pool-uzyjlg" namespace
STEP: Deleting cluster machine-pool-uzyjlg/machine-pool-3betdz
STEP: Deleting cluster machine-pool-3betdz
INFO: Waiting for the Cluster machine-pool-uzyjlg/machine-pool-3betdz to be deleted
STEP: Waiting for cluster machine-pool-3betdz to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-8h4kt, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-machine-pool-3betdz-control-plane-7pqzr, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-ndpvp, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-machine-pool-3betdz-control-plane-7pqzr, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-5zrrf, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-hdbjt, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-x79mm, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-ndpvp, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-wcvps, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-lwnd2, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-machine-pool-3betdz-control-plane-7pqzr, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-n2mm6, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-machine-pool-3betdz-control-plane-7pqzr, container etcd: http2: client connection lost
STEP: Deleting namespace used for hosting the "machine-pool" test spec
INFO: Deleting namespace machine-pool-uzyjlg
STEP: Redacting sensitive information from logs


• [SLOW TEST:1832.669 seconds]
... skipping 61 lines ...
Nov 16 01:05:05.058: INFO: INFO: Collecting boot logs for AzureMachine md-scale-8j3q2r-md-0-wwlzg

Nov 16 01:05:05.457: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster md-scale-8j3q2r in namespace md-scale-3zkibn

Nov 16 01:05:47.765: INFO: INFO: Collecting boot logs for AzureMachine md-scale-8j3q2r-md-win-l59wh

Failed to get logs for machine md-scale-8j3q2r-md-win-7958ffc849-2skb6, cluster md-scale-3zkibn/md-scale-8j3q2r: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov 16 01:05:48.107: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster md-scale-8j3q2r in namespace md-scale-3zkibn

Nov 16 01:06:29.104: INFO: INFO: Collecting boot logs for AzureMachine md-scale-8j3q2r-md-win-9wf24

Failed to get logs for machine md-scale-8j3q2r-md-win-7958ffc849-vjspj, cluster md-scale-3zkibn/md-scale-8j3q2r: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster md-scale-3zkibn/md-scale-8j3q2r kube-system pod logs
STEP: Fetching kube-system pod logs took 382.705513ms
STEP: Dumping workload cluster md-scale-3zkibn/md-scale-8j3q2r Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-windows-9tp4c, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-node-windows-9tp4c, container calico-node-felix
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-6b7jl, container kube-proxy
... skipping 14 lines ...
STEP: Fetching activity logs took 1.190887013s
STEP: Dumping all the Cluster API resources in the "md-scale-3zkibn" namespace
STEP: Deleting cluster md-scale-3zkibn/md-scale-8j3q2r
STEP: Deleting cluster md-scale-8j3q2r
INFO: Waiting for the Cluster md-scale-3zkibn/md-scale-8j3q2r to be deleted
STEP: Waiting for cluster md-scale-8j3q2r to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-md-scale-8j3q2r-control-plane-92rdc, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-87hh2, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-md-scale-8j3q2r-control-plane-92rdc, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-tn8jb, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-52k5f, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-9tp4c, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-6b7jl, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-9tp4c, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-pkpg6, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-md-scale-8j3q2r-control-plane-92rdc, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-tz2zf, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-87hh2, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-f2hb9, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-cp5jw, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-md-scale-8j3q2r-control-plane-92rdc, container etcd: http2: client connection lost
STEP: Deleting namespace used for hosting the "md-scale" test spec
INFO: Deleting namespace md-scale-3zkibn
STEP: Redacting sensitive information from logs


• [SLOW TEST:1924.350 seconds]
... skipping 56 lines ...
STEP: Dumping logs from the "node-drain-ne4x2s" workload cluster
STEP: Dumping workload cluster node-drain-2fis6o/node-drain-ne4x2s logs
Nov 16 01:19:18.147: INFO: INFO: Collecting logs for node node-drain-ne4x2s-control-plane-vqscq in cluster node-drain-ne4x2s in namespace node-drain-2fis6o

Nov 16 01:21:29.007: INFO: INFO: Collecting boot logs for AzureMachine node-drain-ne4x2s-control-plane-vqscq

Failed to get logs for machine node-drain-ne4x2s-control-plane-787qm, cluster node-drain-2fis6o/node-drain-ne4x2s: dialing public load balancer at node-drain-ne4x2s-7de0f189.eastus2.cloudapp.azure.com: dial tcp 20.88.104.18:22: connect: connection timed out
STEP: Dumping workload cluster node-drain-2fis6o/node-drain-ne4x2s kube-system pod logs
STEP: Fetching kube-system pod logs took 361.894965ms
STEP: Dumping workload cluster node-drain-2fis6o/node-drain-ne4x2s Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-h82hr, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-zpb4d, container calico-node
STEP: Creating log watcher for controller kube-system/etcd-node-drain-ne4x2s-control-plane-vqscq, container etcd
... skipping 6 lines ...
STEP: Fetching activity logs took 2.62262501s
STEP: Dumping all the Cluster API resources in the "node-drain-2fis6o" namespace
STEP: Deleting cluster node-drain-2fis6o/node-drain-ne4x2s
STEP: Deleting cluster node-drain-ne4x2s
INFO: Waiting for the Cluster node-drain-2fis6o/node-drain-ne4x2s to be deleted
STEP: Waiting for cluster node-drain-ne4x2s to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-q69zn, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-vs8x9, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-zjhwl, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-node-drain-ne4x2s-control-plane-vqscq, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-node-drain-ne4x2s-control-plane-vqscq, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-node-drain-ne4x2s-control-plane-vqscq, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-zpb4d, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-h82hr, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-node-drain-ne4x2s-control-plane-vqscq, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "node-drain" test spec
INFO: Deleting namespace node-drain-2fis6o
STEP: Redacting sensitive information from logs


• [SLOW TEST:1879.911 seconds]
... skipping 143 lines ...
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-4rsd7, container coredns
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-cl78s, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-58hg6, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-tnww6, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-clusterctl-upgrade-s56foe-control-plane-mcx6n, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-scheduler-clusterctl-upgrade-s56foe-control-plane-mcx6n, container kube-scheduler
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 209.627936ms
STEP: Dumping all the Cluster API resources in the "clusterctl-upgrade-stly3c" namespace
STEP: Deleting cluster clusterctl-upgrade-stly3c/clusterctl-upgrade-s56foe
STEP: Deleting cluster clusterctl-upgrade-s56foe
INFO: Waiting for the Cluster clusterctl-upgrade-stly3c/clusterctl-upgrade-s56foe to be deleted
STEP: Waiting for cluster clusterctl-upgrade-s56foe to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-tkx4h, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-cl78s, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-tnww6, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-clusterctl-upgrade-s56foe-control-plane-mcx6n, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-79rb8, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-clusterctl-upgrade-s56foe-control-plane-mcx6n, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-58hg6, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-ctr65, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-clusterctl-upgrade-s56foe-control-plane-mcx6n, container kube-scheduler: http2: client connection lost
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-6bdc78c4d4-x27tc, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-clusterctl-upgrade-s56foe-control-plane-mcx6n, container etcd: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-865c969d7-brxm5, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-4rsd7, container coredns: http2: client connection lost
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-649d794c-6d7z2, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-86b5f554dd-d5hcd, container manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "clusterctl-upgrade" test spec
INFO: Deleting namespace clusterctl-upgrade-stly3c
STEP: Redacting sensitive information from logs


• [SLOW TEST:1736.993 seconds]
... skipping 124 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-jqmhq, container calico-node
STEP: Creating log watcher for controller kube-system/kube-apiserver-clusterctl-upgrade-t6rjdx-control-plane-t9thv, container kube-apiserver
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-66wcv, container coredns
STEP: Creating log watcher for controller kube-system/kube-scheduler-clusterctl-upgrade-t6rjdx-control-plane-t9thv, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-controller-manager-clusterctl-upgrade-t6rjdx-control-plane-t9thv, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-qnnxm, container coredns
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 266.893808ms
STEP: Dumping all the Cluster API resources in the "clusterctl-upgrade-bku5ju" namespace
STEP: Deleting cluster clusterctl-upgrade-bku5ju/clusterctl-upgrade-t6rjdx
STEP: Deleting cluster clusterctl-upgrade-t6rjdx
INFO: Waiting for the Cluster clusterctl-upgrade-bku5ju/clusterctl-upgrade-t6rjdx to be deleted
STEP: Waiting for cluster clusterctl-upgrade-t6rjdx to be deleted
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-6bdc78c4d4-lgnsh, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-86b5f554dd-bw9m7, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vw5v8, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-clusterctl-upgrade-t6rjdx-control-plane-t9thv, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-9hhp4, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-jqmhq, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-clusterctl-upgrade-t6rjdx-control-plane-t9thv, container etcd: http2: client connection lost
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-649d794c-kjtgr, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-c254k, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-clusterctl-upgrade-t6rjdx-control-plane-t9thv, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-clusterctl-upgrade-t6rjdx-control-plane-t9thv, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-66wcv, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-qnnxm, container coredns: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-865c969d7-j76h6, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-jsg2k, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "clusterctl-upgrade" test spec
INFO: Deleting namespace clusterctl-upgrade-bku5ju
STEP: Redacting sensitive information from logs


• [SLOW TEST:1693.816 seconds]
... skipping 7 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Running the Cluster API E2E tests Running the KCP upgrade spec in a HA cluster [It] Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.1/framework/machinedeployment_helpers.go:121

Ran 14 of 24 Specs in 7698.812 seconds
FAIL! -- 13 Passed | 1 Failed | 0 Pending | 10 Skipped


Ginkgo ran 1 suite in 2h9m47.682574795s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc
make[1]: *** [Makefile:176: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:184: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...