PR (shysank): [WIP] Increase parallelism for e2e tests
Result: FAILURE
Tests: 1 failed / 7 succeeded
Started: 2021-11-06 18:54
Elapsed: 4h15m
Revision: 71773565512673c7857e1d7ac9d7cce30eabde82
Refs: 1816

Test Failures


capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation (24m45s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sRunning\sthe\sCluster\sAPI\sE2E\stests\sShould\ssuccessfully\sremediate\sunhealthy\smachines\swith\sMachineHealthCheck\sShould\ssuccessfully\strigger\sKCP\sremediation$'
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/e2e/mhc_remediations.go:115
Timed out after 1200.001s.
Expected
    <int>: 2
to equal
    <int>: 3
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/framework/controlplane_helpers.go:108
				
stdout/stderr from junit.e2e_suite.3.xml



7 Passed Tests

1 Skipped Test

Error lines from build-log.txt

... skipping 517 lines ...
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-r9ldw3-control-plane-5nkj4, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-r9ldw3-control-plane-swmzg, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-hjrcl, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-r9ldw3-control-plane-jtqz9, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-nfmv4, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-r9ldw3-control-plane-5nkj4, container kube-apiserver
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 206.928413ms
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-2i8991" namespace
STEP: Deleting cluster kcp-upgrade-2i8991/kcp-upgrade-r9ldw3
STEP: Deleting cluster kcp-upgrade-r9ldw3
INFO: Waiting for the Cluster kcp-upgrade-2i8991/kcp-upgrade-r9ldw3 to be deleted
STEP: Waiting for cluster kcp-upgrade-r9ldw3 to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-r9ldw3-control-plane-swmzg, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-r9ldw3-control-plane-5nkj4, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-8668g, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-sqds2, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-r9ldw3-control-plane-swmzg, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-r9ldw3-control-plane-5nkj4, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-r9ldw3-control-plane-5nkj4, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-gv9vx, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-r9ldw3-control-plane-swmzg, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-24sjr, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-hjrcl, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-qfjvh, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vl75n, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-r9ldw3-control-plane-5nkj4, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-r9ldw3-control-plane-swmzg, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-6jcql, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-88cvw, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-2i8991
STEP: Redacting sensitive information from logs


• [SLOW TEST:2121.210 seconds]
... skipping 57 lines ...
STEP: Dumping logs from the "kcp-upgrade-9dnrmj" workload cluster
STEP: Dumping workload cluster kcp-upgrade-rs8jn7/kcp-upgrade-9dnrmj logs
Nov  6 19:15:12.676: INFO: Collecting logs for node kcp-upgrade-9dnrmj-control-plane-w7vmq in cluster kcp-upgrade-9dnrmj in namespace kcp-upgrade-rs8jn7

Nov  6 19:17:23.089: INFO: Collecting boot logs for AzureMachine kcp-upgrade-9dnrmj-control-plane-w7vmq

Failed to get logs for machine kcp-upgrade-9dnrmj-control-plane-fcckr, cluster kcp-upgrade-rs8jn7/kcp-upgrade-9dnrmj: dialing public load balancer at kcp-upgrade-9dnrmj-d19197d7.northcentralus.cloudapp.azure.com: dial tcp 157.56.30.135:22: connect: connection timed out
Nov  6 19:17:23.894: INFO: Collecting logs for node kcp-upgrade-9dnrmj-md-0-vtgr8 in cluster kcp-upgrade-9dnrmj in namespace kcp-upgrade-rs8jn7

Nov  6 19:19:34.157: INFO: Collecting boot logs for AzureMachine kcp-upgrade-9dnrmj-md-0-vtgr8

Failed to get logs for machine kcp-upgrade-9dnrmj-md-0-6fdbbd7645-mjqxz, cluster kcp-upgrade-rs8jn7/kcp-upgrade-9dnrmj: dialing public load balancer at kcp-upgrade-9dnrmj-d19197d7.northcentralus.cloudapp.azure.com: dial tcp 157.56.30.135:22: connect: connection timed out
Nov  6 19:19:34.866: INFO: Collecting logs for node 10.1.0.5 in cluster kcp-upgrade-9dnrmj in namespace kcp-upgrade-rs8jn7

Nov  6 19:26:07.373: INFO: Collecting boot logs for AzureMachine kcp-upgrade-9dnrmj-md-win-7zwtg

Failed to get logs for machine kcp-upgrade-9dnrmj-md-win-9696474cd-4bt9j, cluster kcp-upgrade-rs8jn7/kcp-upgrade-9dnrmj: dialing public load balancer at kcp-upgrade-9dnrmj-d19197d7.northcentralus.cloudapp.azure.com: dial tcp 157.56.30.135:22: connect: connection timed out
Nov  6 19:26:07.944: INFO: Collecting logs for node 10.1.0.6 in cluster kcp-upgrade-9dnrmj in namespace kcp-upgrade-rs8jn7

Nov  6 19:32:40.589: INFO: Collecting boot logs for AzureMachine kcp-upgrade-9dnrmj-md-win-qn9vp

Failed to get logs for machine kcp-upgrade-9dnrmj-md-win-9696474cd-ttxw2, cluster kcp-upgrade-rs8jn7/kcp-upgrade-9dnrmj: dialing public load balancer at kcp-upgrade-9dnrmj-d19197d7.northcentralus.cloudapp.azure.com: dial tcp 157.56.30.135:22: connect: connection timed out
STEP: Dumping workload cluster kcp-upgrade-rs8jn7/kcp-upgrade-9dnrmj kube-system pod logs
STEP: Fetching kube-system pod logs took 228.084278ms
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-jgp5t, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-sjhgv, container coredns
STEP: Dumping workload cluster kcp-upgrade-rs8jn7/kcp-upgrade-9dnrmj Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-k7tv6, container calico-node
... skipping 8 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-cr4p4, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-9dnrmj-control-plane-w7vmq, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-windows-fc6cn, container calico-node-felix
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-9dnrmj-control-plane-w7vmq, container etcd
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-n8kdr, container coredns
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-9dnrmj-control-plane-w7vmq, container kube-apiserver
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 189.54524ms
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-rs8jn7" namespace
STEP: Deleting cluster kcp-upgrade-rs8jn7/kcp-upgrade-9dnrmj
STEP: Deleting cluster kcp-upgrade-9dnrmj
INFO: Waiting for the Cluster kcp-upgrade-rs8jn7/kcp-upgrade-9dnrmj to be deleted
STEP: Waiting for cluster kcp-upgrade-9dnrmj to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-659hd, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-7c22l, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-cr4p4, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-sjhgv, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-7c22l, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-fc6cn, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-fc6cn, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-9dnrmj-control-plane-w7vmq, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-rs8jn7
STEP: Redacting sensitive information from logs


• [SLOW TEST:2201.718 seconds]
... skipping 75 lines ...
Nov  6 19:32:24.742: INFO: Collecting boot logs for AzureMachine kcp-upgrade-11wbqc-md-0-qvptm

Nov  6 19:32:24.982: INFO: Collecting logs for node 10.1.0.6 in cluster kcp-upgrade-11wbqc in namespace kcp-upgrade-ylqwsm

Nov  6 19:32:49.195: INFO: Collecting boot logs for AzureMachine kcp-upgrade-11wbqc-md-win-8dpg4

Failed to get logs for machine kcp-upgrade-11wbqc-md-win-f579698ff-9p65s, cluster kcp-upgrade-ylqwsm/kcp-upgrade-11wbqc: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov  6 19:32:49.641: INFO: Collecting logs for node 10.1.0.5 in cluster kcp-upgrade-11wbqc in namespace kcp-upgrade-ylqwsm

Nov  6 19:33:13.962: INFO: Collecting boot logs for AzureMachine kcp-upgrade-11wbqc-md-win-7rdzk

Failed to get logs for machine kcp-upgrade-11wbqc-md-win-f579698ff-qxv2h, cluster kcp-upgrade-ylqwsm/kcp-upgrade-11wbqc: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster kcp-upgrade-ylqwsm/kcp-upgrade-11wbqc kube-system pod logs
STEP: Fetching kube-system pod logs took 186.253965ms
STEP: Dumping workload cluster kcp-upgrade-ylqwsm/kcp-upgrade-11wbqc Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-9g4jw, container calico-node
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-11wbqc-control-plane-vvstv, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-node-windows-7mj8k, container calico-node-felix
... skipping 20 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-26xhp, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-vfgf6, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-11wbqc-control-plane-vvstv, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-hdv8n, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-11wbqc-control-plane-sthg5, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-4dl4h, container kube-proxy
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 205.764285ms
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-ylqwsm" namespace
STEP: Deleting cluster kcp-upgrade-ylqwsm/kcp-upgrade-11wbqc
STEP: Deleting cluster kcp-upgrade-11wbqc
INFO: Waiting for the Cluster kcp-upgrade-ylqwsm/kcp-upgrade-11wbqc to be deleted
STEP: Waiting for cluster kcp-upgrade-11wbqc to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-11wbqc-control-plane-6fl8q, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-11wbqc-control-plane-6fl8q, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vfgf6, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-hszjb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-11wbqc-control-plane-6fl8q, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-11wbqc-control-plane-sthg5, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-66ww4, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-11wbqc-control-plane-vvstv, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-5hrf4, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-m5pf8, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-11wbqc-control-plane-sthg5, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-26xhp, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-11wbqc-control-plane-vvstv, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-sj6jc, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-11wbqc-control-plane-vvstv, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-11wbqc-control-plane-vvstv, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-9g4jw, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-mxn5b, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-fvvzk, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-11wbqc-control-plane-sthg5, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-wgzsw, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-11wbqc-control-plane-6fl8q, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-nkwhf, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-4dl4h, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-7mj8k, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-hdv8n, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-7mj8k, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-11wbqc-control-plane-sthg5, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-5hrf4, container calico-node-startup: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-ylqwsm
STEP: Redacting sensitive information from logs


• [SLOW TEST:2357.689 seconds]
... skipping 114 lines ...
  Should pivot the bootstrap cluster to a self-hosted cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:110

Node Id (1 Indexed): 5
STEP: Creating namespace "self-hosted" for hosting the cluster
Nov  6 19:37:09.065: INFO: starting to create namespace for hosting the "self-hosted" test spec
2021/11/06 19:37:09 failed trying to get namespace (self-hosted): namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-nr72i8" using the "management" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-nr72i8 --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 1 --worker-machine-count 1 --flavor management
... skipping 73 lines ...
STEP: Fetching activity logs took 625.415176ms
STEP: Dumping all the Cluster API resources in the "self-hosted" namespace
STEP: Deleting all clusters in the self-hosted namespace
STEP: Deleting cluster self-hosted-nr72i8
INFO: Waiting for the Cluster self-hosted/self-hosted-nr72i8 to be deleted
STEP: Waiting for cluster self-hosted-nr72i8 to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-n2dzk, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-cq244, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-jw2wq, container calico-kube-controllers: http2: client connection lost
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-b96d5f4c5-hctfh, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-8dm97, container coredns: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-865c969d7-tv84k, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-self-hosted-nr72i8-control-plane-mbj5q, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-x456t, container calico-node: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-86b5f554dd-9hlt7, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-6bdc78c4d4-g4vlg, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vfx8d, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-self-hosted-nr72i8-control-plane-mbj5q, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-xsn5c, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-self-hosted-nr72i8-control-plane-mbj5q, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-self-hosted-nr72i8-control-plane-mbj5q, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "self-hosted" test spec
INFO: Deleting namespace self-hosted
STEP: Checking if any resources are left over in Azure for spec "self-hosted"
STEP: Redacting sensitive information from logs
STEP: Redacting sensitive information from logs

... skipping 55 lines ...
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-adoption-fm1f1c-control-plane-0, container kube-apiserver
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-p9crh, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-mzsh2, container calico-node
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-adoption-fm1f1c-control-plane-0, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-k7wn9, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-proxy-cd7ll, container kube-proxy
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 200.841923ms
STEP: Dumping all the Cluster API resources in the "kcp-adoption-97ki5b" namespace
STEP: Deleting cluster kcp-adoption-97ki5b/kcp-adoption-fm1f1c
STEP: Deleting cluster kcp-adoption-fm1f1c
INFO: Waiting for the Cluster kcp-adoption-97ki5b/kcp-adoption-fm1f1c to be deleted
STEP: Waiting for cluster kcp-adoption-fm1f1c to be deleted
... skipping 75 lines ...
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-w5grd, container coredns
STEP: Creating log watcher for controller kube-system/etcd-mhc-remediation-1jbsql-control-plane-kfcqz, container etcd
STEP: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-1jbsql-control-plane-kfcqz, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-wlzcp, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-ss8mk, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-hffcm, container coredns
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 202.321362ms
STEP: Dumping all the Cluster API resources in the "mhc-remediation-lrbncq" namespace
STEP: Deleting cluster mhc-remediation-lrbncq/mhc-remediation-1jbsql
STEP: Deleting cluster mhc-remediation-1jbsql
INFO: Waiting for the Cluster mhc-remediation-lrbncq/mhc-remediation-1jbsql to be deleted
STEP: Waiting for cluster mhc-remediation-1jbsql to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-hffcm, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-1jbsql-control-plane-kfcqz, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-1jbsql-control-plane-kfcqz, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-94j6k, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-hv28w, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-w5grd, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-1jbsql-control-plane-kfcqz, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-1jbsql-control-plane-kfcqz, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-8wpkg, container calico-kube-controllers: http2: client connection lost
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
INFO: Deleting namespace mhc-remediation-lrbncq
STEP: Redacting sensitive information from logs


• [SLOW TEST:2341.979 seconds]
... skipping 61 lines ...
Nov  6 20:20:11.880: INFO: Collecting boot logs for AzureMachine machine-pool-afyoc2-control-plane-fhlxl

Nov  6 20:20:12.604: INFO: Collecting logs for node win-p-win000002 in cluster machine-pool-afyoc2 in namespace machine-pool-gk7oah

Nov  6 20:20:22.856: INFO: Collecting boot logs for VMSS instance 2 of scale set machine-pool-afyoc2-mp-0

Failed to get logs for machine pool machine-pool-afyoc2-mp-0, cluster machine-pool-gk7oah/machine-pool-afyoc2: [running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1]
Nov  6 20:20:23.121: INFO: Collecting logs for node win-p-win000002 in cluster machine-pool-afyoc2 in namespace machine-pool-gk7oah

Nov  6 20:20:48.756: INFO: Collecting boot logs for VMSS instance 2 of scale set win-p-win

Failed to get logs for machine pool machine-pool-afyoc2-mp-win, cluster machine-pool-gk7oah/machine-pool-afyoc2: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster machine-pool-gk7oah/machine-pool-afyoc2 kube-system pod logs
STEP: Fetching kube-system pod logs took 201.140279ms
STEP: Creating log watcher for controller kube-system/kube-apiserver-machine-pool-afyoc2-control-plane-fhlxl, container kube-apiserver
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-65fqk, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-windows-wkzqt, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-node-k6ph2, container calico-node
... skipping 9 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-9w88h, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-88dwl, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-vkvdx, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-machine-pool-afyoc2-control-plane-fhlxl, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-fbkgf, container kube-proxy
STEP: Creating log watcher for controller kube-system/etcd-machine-pool-afyoc2-control-plane-fhlxl, container etcd
STEP: Error starting logs stream for pod kube-system/calico-node-k6ph2, container calico-node: pods "machine-pool-afyoc2-mp-0000000" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-vkvdx, container kube-proxy: pods "machine-pool-afyoc2-mp-0000000" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-4wd4v, container kube-proxy: pods "machine-pool-afyoc2-mp-0000001" not found
STEP: Error starting logs stream for pod kube-system/calico-node-n52c4, container calico-node: pods "machine-pool-afyoc2-mp-0000001" not found
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 234.188229ms
STEP: Dumping all the Cluster API resources in the "machine-pool-gk7oah" namespace
STEP: Deleting cluster machine-pool-gk7oah/machine-pool-afyoc2
STEP: Deleting cluster machine-pool-afyoc2
INFO: Waiting for the Cluster machine-pool-gk7oah/machine-pool-afyoc2 to be deleted
STEP: Waiting for cluster machine-pool-afyoc2 to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-2q7zm, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-wkzqt, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-65fqk, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-fbkgf, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-machine-pool-afyoc2-control-plane-fhlxl, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-zd2cf, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-cjggs, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-9w88h, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-machine-pool-afyoc2-control-plane-fhlxl, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-machine-pool-afyoc2-control-plane-fhlxl, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-machine-pool-afyoc2-control-plane-fhlxl, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-wkzqt, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-88dwl, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-ffmjl, container calico-kube-controllers: http2: client connection lost
STEP: Deleting namespace used for hosting the "machine-pool" test spec
INFO: Deleting namespace machine-pool-gk7oah
STEP: Redacting sensitive information from logs


• [SLOW TEST:1323.361 seconds]
... skipping 62 lines ...
Nov  6 20:27:34.731: INFO: Collecting boot logs for AzureMachine md-scale-16tw75-md-0-wm659

Nov  6 20:27:34.967: INFO: Collecting logs for node 10.1.0.6 in cluster md-scale-16tw75 in namespace md-scale-slxywf

Nov  6 20:28:50.077: INFO: Collecting boot logs for AzureMachine md-scale-16tw75-md-win-6q2z8

Failed to get logs for machine md-scale-16tw75-md-win-7ff668d86f-gqfqh, cluster md-scale-slxywf/md-scale-16tw75: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov  6 20:28:50.314: INFO: Collecting logs for node 10.1.0.4 in cluster md-scale-16tw75 in namespace md-scale-slxywf

Nov  6 20:29:38.369: INFO: Collecting boot logs for AzureMachine md-scale-16tw75-md-win-mwpnr

Failed to get logs for machine md-scale-16tw75-md-win-7ff668d86f-prq2p, cluster md-scale-slxywf/md-scale-16tw75: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster md-scale-slxywf/md-scale-16tw75 kube-system pod logs
STEP: Fetching kube-system pod logs took 196.8861ms
STEP: Dumping workload cluster md-scale-slxywf/md-scale-16tw75 Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-cx2nz, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-windows-2jxkf, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-apiserver-md-scale-16tw75-control-plane-f2rdl, container kube-apiserver
... skipping 8 lines ...
STEP: Creating log watcher for controller kube-system/kube-controller-manager-md-scale-16tw75-control-plane-f2rdl, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-zpdst, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-gcbb4, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/calico-node-windows-8dlnh, container calico-node-felix
STEP: Creating log watcher for controller kube-system/calico-node-vmwlp, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-ndrtl, container coredns
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 219.150909ms
STEP: Dumping all the Cluster API resources in the "md-scale-slxywf" namespace
STEP: Deleting cluster md-scale-slxywf/md-scale-16tw75
STEP: Deleting cluster md-scale-16tw75
INFO: Waiting for the Cluster md-scale-slxywf/md-scale-16tw75 to be deleted
STEP: Waiting for cluster md-scale-16tw75 to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-cx2nz, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-ndrtl, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-7tltp, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-md-scale-16tw75-control-plane-f2rdl, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-lkhmx, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-md-scale-16tw75-control-plane-f2rdl, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-vmwlp, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-2jxkf, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-zpdst, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-8dlnh, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-d557r, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-2jxkf, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-gcbb4, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-qh4ng, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-md-scale-16tw75-control-plane-f2rdl, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-md-scale-16tw75-control-plane-f2rdl, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-8dlnh, container calico-node-felix: http2: client connection lost
STEP: Deleting namespace used for hosting the "md-scale" test spec
INFO: Deleting namespace md-scale-slxywf
STEP: Redacting sensitive information from logs


• [SLOW TEST:1386.188 seconds]
... skipping 57 lines ...
STEP: Dumping logs from the "node-drain-jpijaj" workload cluster
STEP: Dumping workload cluster node-drain-pof1il/node-drain-jpijaj logs
Nov  6 20:38:37.911: INFO: Collecting logs for node node-drain-jpijaj-control-plane-ntdzp in cluster node-drain-jpijaj in namespace node-drain-pof1il

Nov  6 20:40:48.397: INFO: Collecting boot logs for AzureMachine node-drain-jpijaj-control-plane-ntdzp

Failed to get logs for machine node-drain-jpijaj-control-plane-fhkmg, cluster node-drain-pof1il/node-drain-jpijaj: dialing public load balancer at node-drain-jpijaj-f34ab755.northcentralus.cloudapp.azure.com: dial tcp 23.96.180.237:22: connect: connection timed out
STEP: Dumping workload cluster node-drain-pof1il/node-drain-jpijaj kube-system pod logs
STEP: Fetching kube-system pod logs took 257.477615ms
STEP: Dumping workload cluster node-drain-pof1il/node-drain-jpijaj Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-pt9m2, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/calico-node-7vs4m, container calico-node
STEP: Creating log watcher for controller kube-system/kube-controller-manager-node-drain-jpijaj-control-plane-ntdzp, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/etcd-node-drain-jpijaj-control-plane-ntdzp, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-ckhlh, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-fhpv4, container coredns
STEP: Creating log watcher for controller kube-system/kube-apiserver-node-drain-jpijaj-control-plane-ntdzp, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-scheduler-node-drain-jpijaj-control-plane-ntdzp, container kube-scheduler
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-t9w7v, container coredns
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 251.088614ms
STEP: Dumping all the Cluster API resources in the "node-drain-pof1il" namespace
STEP: Deleting cluster node-drain-pof1il/node-drain-jpijaj
STEP: Deleting cluster node-drain-jpijaj
INFO: Waiting for the Cluster node-drain-pof1il/node-drain-jpijaj to be deleted
STEP: Waiting for cluster node-drain-jpijaj to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-node-drain-jpijaj-control-plane-ntdzp, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-node-drain-jpijaj-control-plane-ntdzp, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-fhpv4, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-ckhlh, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-node-drain-jpijaj-control-plane-ntdzp, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-node-drain-jpijaj-control-plane-ntdzp, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-t9w7v, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-7vs4m, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-pt9m2, container calico-kube-controllers: http2: client connection lost
STEP: Deleting namespace used for hosting the "node-drain" test spec
INFO: Deleting namespace node-drain-pof1il
STEP: Redacting sensitive information from logs


• [SLOW TEST:1839.121 seconds]
... skipping 141 lines ...
STEP: Creating log watcher for controller kube-system/kube-controller-manager-clusterctl-upgrade-i759q8-control-plane-5zkt6, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-vkvbk, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-clusterctl-upgrade-i759q8-control-plane-5zkt6, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-wk9ph, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-fz9r5, container calico-node
STEP: Creating log watcher for controller kube-system/kube-apiserver-clusterctl-upgrade-i759q8-control-plane-5zkt6, container kube-apiserver
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 290.966306ms
STEP: Dumping all the Cluster API resources in the "clusterctl-upgrade-ckz1ok" namespace
STEP: Deleting cluster clusterctl-upgrade-ckz1ok/clusterctl-upgrade-i759q8
STEP: Deleting cluster clusterctl-upgrade-i759q8
INFO: Waiting for the Cluster clusterctl-upgrade-ckz1ok/clusterctl-upgrade-i759q8 to be deleted
STEP: Waiting for cluster clusterctl-upgrade-i759q8 to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-j4j4r, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-fz9r5, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-clusterctl-upgrade-i759q8-control-plane-5zkt6, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-kbszz, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-wk9ph, container calico-node: http2: client connection lost
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-6bdc78c4d4-ftgxw, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-b96d5f4c5-jpr65, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vkvbk, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-clusterctl-upgrade-i759q8-control-plane-5zkt6, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-clusterctl-upgrade-i759q8-control-plane-5zkt6, container kube-scheduler: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-86b5f554dd-v4ss8, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-mhfvq, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-clusterctl-upgrade-i759q8-control-plane-5zkt6, container kube-apiserver: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-865c969d7-lcxhc, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-dr4p5, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "clusterctl-upgrade" test spec
INFO: Deleting namespace clusterctl-upgrade-ckz1ok
STEP: Redacting sensitive information from logs


• [SLOW TEST:1722.179 seconds]
Running the Cluster API E2E tests
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:40
  upgrade from v1alpha3 to v1beta1, and scale workload clusters created in v1alpha3
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:234
    Should create a management cluster and then upgrade all the providers
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/e2e/clusterctl_upgrade.go:145
------------------------------
{"component":"entrypoint","file":"prow/entrypoint/run.go:165","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 4h0m0s timeout","severity":"error","time":"2021-11-06T22:54:40Z"}
++ early_exit_handler
++ '[' -n 164 ']'
++ kill -TERM 164
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 12 lines ...
Cleaning up after docker
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Stopping Docker: dockerProgram process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.
All sensitive variables are redacted
{"component":"entrypoint","file":"prow/entrypoint/run.go:255","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15m0s grace period","severity":"error","time":"2021-11-06T23:09:40Z"}
{"component":"entrypoint","error":"os: process already finished","file":"prow/entrypoint/run.go:257","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2021-11-06T23:09:40Z"}