PR: shysank: [WIP] Increase parallelism for e2e tests
Result: ABORTED
Tests: 0 failed / 8 succeeded
Started: 2021-11-05 20:12
Elapsed: 3h 2m
Revision: 71773565512673c7857e1d7ac9d7cce30eabde82
Refs: 1816

No Test Failures!


Passed tests: 8

Skipped tests: 10

Error lines from build-log.txt

... skipping 473 lines ...
Nov  5 20:27:21.053: INFO: INFO: Collecting boot logs for AzureMachine quick-start-ggdug0-md-0-5tnsr

Nov  5 20:27:21.616: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster quick-start-ggdug0 in namespace quick-start-i6kqb1

Nov  5 20:27:53.052: INFO: INFO: Collecting boot logs for AzureMachine quick-start-ggdug0-md-win-bj7gw

Failed to get logs for machine quick-start-ggdug0-md-win-68bf799945-9gjq7, cluster quick-start-i6kqb1/quick-start-ggdug0: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov  5 20:27:53.494: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster quick-start-ggdug0 in namespace quick-start-i6kqb1

Nov  5 20:28:23.555: INFO: INFO: Collecting boot logs for AzureMachine quick-start-ggdug0-md-win-66wvb

Failed to get logs for machine quick-start-ggdug0-md-win-68bf799945-gm94q, cluster quick-start-i6kqb1/quick-start-ggdug0: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster quick-start-i6kqb1/quick-start-ggdug0 kube-system pod logs
STEP: Fetching kube-system pod logs took 1.267596324s
STEP: Creating log watcher for controller kube-system/kube-proxy-lx5b6, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-6hdbf, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-n5rpj, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-nl5pt, container coredns
... skipping 8 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-nx2rr, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-quick-start-ggdug0-control-plane-ssm8l, container kube-scheduler
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-86hs4, container coredns
STEP: Dumping workload cluster quick-start-i6kqb1/quick-start-ggdug0 Azure activity log
STEP: Creating log watcher for controller kube-system/kube-controller-manager-quick-start-ggdug0-control-plane-ssm8l, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-8r522, container kube-proxy
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 201.446098ms
STEP: Dumping all the Cluster API resources in the "quick-start-i6kqb1" namespace
STEP: Deleting cluster quick-start-i6kqb1/quick-start-ggdug0
STEP: Deleting cluster quick-start-ggdug0
INFO: Waiting for the Cluster quick-start-i6kqb1/quick-start-ggdug0 to be deleted
STEP: Waiting for cluster quick-start-ggdug0 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-quick-start-ggdug0-control-plane-ssm8l, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-65vqk, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-5vhnh, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-9cshd, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-8r522, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-86hs4, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-nx2rr, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-6hdbf, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-2fckh, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-nl5pt, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-lx5b6, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-quick-start-ggdug0-control-plane-ssm8l, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-2fckh, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-n5rpj, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-quick-start-ggdug0-control-plane-ssm8l, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-9cshd, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-quick-start-ggdug0-control-plane-ssm8l, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "quick-start" test spec
INFO: Deleting namespace quick-start-i6kqb1
STEP: Redacting sensitive information from logs


• [SLOW TEST:1233.174 seconds]
... skipping 8 lines ...
  Should pivot the bootstrap cluster to a self-hosted cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:110

Node Id (1 Indexed): 1
STEP: Creating namespace "self-hosted" for hosting the cluster
Nov  5 20:40:03.625: INFO: starting to create namespace for hosting the "self-hosted" test spec
2021/11/05 20:40:03 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-dsjfgo" using the "management" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-dsjfgo --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 1 --worker-machine-count 1 --flavor management
... skipping 144 lines ...
STEP: Dumping logs from the "kcp-upgrade-1uikin" workload cluster
STEP: Dumping workload cluster kcp-upgrade-q6e8k4/kcp-upgrade-1uikin logs
Nov  5 20:38:56.575: INFO: INFO: Collecting logs for node kcp-upgrade-1uikin-control-plane-86fng in cluster kcp-upgrade-1uikin in namespace kcp-upgrade-q6e8k4

Nov  5 20:41:07.624: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-1uikin-control-plane-86fng

Failed to get logs for machine kcp-upgrade-1uikin-control-plane-m7gp8, cluster kcp-upgrade-q6e8k4/kcp-upgrade-1uikin: dialing public load balancer at kcp-upgrade-1uikin-70fd2395.westeurope.cloudapp.azure.com: dial tcp 20.76.139.137:22: connect: connection timed out
Nov  5 20:41:09.088: INFO: INFO: Collecting logs for node kcp-upgrade-1uikin-md-0-brj6d in cluster kcp-upgrade-1uikin in namespace kcp-upgrade-q6e8k4

Nov  5 20:43:18.692: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-1uikin-md-0-brj6d

Failed to get logs for machine kcp-upgrade-1uikin-md-0-5bcdb9f766-bsjkh, cluster kcp-upgrade-q6e8k4/kcp-upgrade-1uikin: dialing public load balancer at kcp-upgrade-1uikin-70fd2395.westeurope.cloudapp.azure.com: dial tcp 20.76.139.137:22: connect: connection timed out
Nov  5 20:43:20.133: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster kcp-upgrade-1uikin in namespace kcp-upgrade-q6e8k4

Nov  5 20:49:51.907: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-1uikin-md-win-295nj

Failed to get logs for machine kcp-upgrade-1uikin-md-win-7bfc7c97fc-9xpkc, cluster kcp-upgrade-q6e8k4/kcp-upgrade-1uikin: dialing public load balancer at kcp-upgrade-1uikin-70fd2395.westeurope.cloudapp.azure.com: dial tcp 20.76.139.137:22: connect: connection timed out
Nov  5 20:49:52.977: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster kcp-upgrade-1uikin in namespace kcp-upgrade-q6e8k4

Nov  5 20:56:25.124: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-1uikin-md-win-57g2j

Failed to get logs for machine kcp-upgrade-1uikin-md-win-7bfc7c97fc-zbsnl, cluster kcp-upgrade-q6e8k4/kcp-upgrade-1uikin: dialing public load balancer at kcp-upgrade-1uikin-70fd2395.westeurope.cloudapp.azure.com: dial tcp 20.76.139.137:22: connect: connection timed out
STEP: Dumping workload cluster kcp-upgrade-q6e8k4/kcp-upgrade-1uikin kube-system pod logs
STEP: Fetching kube-system pod logs took 1.034867948s
STEP: Dumping workload cluster kcp-upgrade-q6e8k4/kcp-upgrade-1uikin Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-9jlmh, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-xm7fx, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-g25vt, container calico-node
... skipping 8 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-94fmz, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-windows-27qxt, container calico-node-felix
STEP: Creating log watcher for controller kube-system/calico-node-windows-tk5kw, container calico-node-felix
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-n5jjv, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-cpwqj, container coredns
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-1uikin-control-plane-86fng, container kube-scheduler
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 205.085203ms
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-q6e8k4" namespace
STEP: Deleting cluster kcp-upgrade-q6e8k4/kcp-upgrade-1uikin
STEP: Deleting cluster kcp-upgrade-1uikin
INFO: Waiting for the Cluster kcp-upgrade-q6e8k4/kcp-upgrade-1uikin to be deleted
STEP: Waiting for cluster kcp-upgrade-1uikin to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-cpwqj, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-1uikin-control-plane-86fng, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-xm7fx, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-1uikin-control-plane-86fng, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-tk5kw, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-tk7dr, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-48lh9, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-1uikin-control-plane-86fng, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-g25vt, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-n5jjv, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-94fmz, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-tk5kw, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-27qxt, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-9jlmh, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-27qxt, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-lxms6, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-1uikin-control-plane-86fng, container etcd: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-q6e8k4
STEP: Redacting sensitive information from logs


• [SLOW TEST:2575.480 seconds]
... skipping 92 lines ...
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-3ijp1u-control-plane-jqq6c, container etcd
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-3ijp1u-control-plane-qglkn, container kube-apiserver
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-64gwd, container coredns
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-sg65t, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-6ks5n, container calico-node
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-3ijp1u-control-plane-jqq6c, container kube-apiserver
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 223.230195ms
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-ybot4l" namespace
STEP: Deleting cluster kcp-upgrade-ybot4l/kcp-upgrade-3ijp1u
STEP: Deleting cluster kcp-upgrade-3ijp1u
INFO: Waiting for the Cluster kcp-upgrade-ybot4l/kcp-upgrade-3ijp1u to be deleted
STEP: Waiting for cluster kcp-upgrade-3ijp1u to be deleted
... skipping 75 lines ...
STEP: Creating log watcher for controller kube-system/kube-apiserver-mhc-remediation-cmhqar-control-plane-slhct, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-jj9sv, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-654qc, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-x629l, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-mhc-remediation-cmhqar-control-plane-slhct, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-cmhqar-control-plane-slhct, container kube-scheduler
STEP: Error starting logs stream for pod kube-system/calico-node-4wgrn, container calico-node: container "calico-node" in pod "calico-node-4wgrn" is waiting to start: PodInitializing
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 217.976144ms
STEP: Dumping all the Cluster API resources in the "mhc-remediation-u5flrz" namespace
STEP: Deleting cluster mhc-remediation-u5flrz/mhc-remediation-cmhqar
STEP: Deleting cluster mhc-remediation-cmhqar
INFO: Waiting for the Cluster mhc-remediation-u5flrz/mhc-remediation-cmhqar to be deleted
STEP: Waiting for cluster mhc-remediation-cmhqar to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-jj9sv, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
INFO: Deleting namespace mhc-remediation-u5flrz
STEP: Redacting sensitive information from logs


• [SLOW TEST:982.790 seconds]
... skipping 53 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-kmlqx, container calico-node
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-dsqp5, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/etcd-kcp-adoption-1fe6fr-control-plane-0, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-xwhlx, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-adoption-1fe6fr-control-plane-0, container kube-controller-manager
STEP: Dumping workload cluster kcp-adoption-lazo0e/kcp-adoption-1fe6fr Azure activity log
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 246.938078ms
STEP: Dumping all the Cluster API resources in the "kcp-adoption-lazo0e" namespace
STEP: Deleting cluster kcp-adoption-lazo0e/kcp-adoption-1fe6fr
STEP: Deleting cluster kcp-adoption-1fe6fr
INFO: Waiting for the Cluster kcp-adoption-lazo0e/kcp-adoption-1fe6fr to be deleted
STEP: Waiting for cluster kcp-adoption-1fe6fr to be deleted
... skipping 97 lines ...
STEP: Creating log watcher for controller kube-system/kube-apiserver-mhc-remediation-fle2o7-control-plane-tn9ls, container kube-apiserver
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-sd7m6, container coredns
STEP: Creating log watcher for controller kube-system/etcd-mhc-remediation-fle2o7-control-plane-tn9ls, container etcd
STEP: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-fle2o7-control-plane-tn9ls, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-controller-manager-mhc-remediation-fle2o7-control-plane-tn9ls, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-p4d4t, container kube-proxy
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 201.95869ms
STEP: Dumping all the Cluster API resources in the "mhc-remediation-wfwy14" namespace
STEP: Deleting cluster mhc-remediation-wfwy14/mhc-remediation-fle2o7
STEP: Deleting cluster mhc-remediation-fle2o7
INFO: Waiting for the Cluster mhc-remediation-wfwy14/mhc-remediation-fle2o7 to be deleted
STEP: Waiting for cluster mhc-remediation-fle2o7 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-fle2o7-control-plane-5wrt4, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-fle2o7-control-plane-5wrt4, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-fle2o7-control-plane-5wrt4, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-5w279, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-p4d4t, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-fle2o7-control-plane-5wrt4, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
INFO: Deleting namespace mhc-remediation-wfwy14
STEP: Redacting sensitive information from logs


• [SLOW TEST:1225.536 seconds]
... skipping 59 lines ...
STEP: Dumping logs from the "kcp-upgrade-yruqxa" workload cluster
STEP: Dumping workload cluster kcp-upgrade-j6bp9o/kcp-upgrade-yruqxa logs
Nov  5 21:03:34.711: INFO: INFO: Collecting logs for node kcp-upgrade-yruqxa-control-plane-hh2r9 in cluster kcp-upgrade-yruqxa in namespace kcp-upgrade-j6bp9o

Nov  5 21:05:44.231: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-yruqxa-control-plane-hh2r9

Failed to get logs for machine kcp-upgrade-yruqxa-control-plane-hg8pc, cluster kcp-upgrade-j6bp9o/kcp-upgrade-yruqxa: dialing public load balancer at kcp-upgrade-yruqxa-24a74b73.westeurope.cloudapp.azure.com: dial tcp 20.76.140.101:22: connect: connection timed out
Nov  5 21:05:45.646: INFO: INFO: Collecting logs for node kcp-upgrade-yruqxa-control-plane-qqnk5 in cluster kcp-upgrade-yruqxa in namespace kcp-upgrade-j6bp9o

Nov  5 21:07:55.300: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-yruqxa-control-plane-qqnk5

Failed to get logs for machine kcp-upgrade-yruqxa-control-plane-qqq2z, cluster kcp-upgrade-j6bp9o/kcp-upgrade-yruqxa: dialing public load balancer at kcp-upgrade-yruqxa-24a74b73.westeurope.cloudapp.azure.com: dial tcp 20.76.140.101:22: connect: connection timed out
Nov  5 21:07:56.685: INFO: INFO: Collecting logs for node kcp-upgrade-yruqxa-control-plane-bgthr in cluster kcp-upgrade-yruqxa in namespace kcp-upgrade-j6bp9o

Nov  5 21:10:06.372: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-yruqxa-control-plane-bgthr

Failed to get logs for machine kcp-upgrade-yruqxa-control-plane-srg2c, cluster kcp-upgrade-j6bp9o/kcp-upgrade-yruqxa: dialing public load balancer at kcp-upgrade-yruqxa-24a74b73.westeurope.cloudapp.azure.com: dial tcp 20.76.140.101:22: connect: connection timed out
Nov  5 21:10:07.556: INFO: INFO: Collecting logs for node kcp-upgrade-yruqxa-md-0-k9h2x in cluster kcp-upgrade-yruqxa in namespace kcp-upgrade-j6bp9o

Nov  5 21:12:17.444: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-yruqxa-md-0-k9h2x

Failed to get logs for machine kcp-upgrade-yruqxa-md-0-5b9b8db4fc-qjl8g, cluster kcp-upgrade-j6bp9o/kcp-upgrade-yruqxa: dialing public load balancer at kcp-upgrade-yruqxa-24a74b73.westeurope.cloudapp.azure.com: dial tcp 20.76.140.101:22: connect: connection timed out
Nov  5 21:12:18.661: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster kcp-upgrade-yruqxa in namespace kcp-upgrade-j6bp9o

Nov  5 21:18:50.659: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-yruqxa-md-win-wsgfh

Failed to get logs for machine kcp-upgrade-yruqxa-md-win-8c7758d6b-6nh7w, cluster kcp-upgrade-j6bp9o/kcp-upgrade-yruqxa: dialing public load balancer at kcp-upgrade-yruqxa-24a74b73.westeurope.cloudapp.azure.com: dial tcp 20.76.140.101:22: connect: connection timed out
Nov  5 21:18:51.730: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster kcp-upgrade-yruqxa in namespace kcp-upgrade-j6bp9o

Nov  5 21:25:23.876: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-yruqxa-md-win-dgpt6

Failed to get logs for machine kcp-upgrade-yruqxa-md-win-8c7758d6b-rtdjc, cluster kcp-upgrade-j6bp9o/kcp-upgrade-yruqxa: dialing public load balancer at kcp-upgrade-yruqxa-24a74b73.westeurope.cloudapp.azure.com: dial tcp 20.76.140.101:22: connect: connection timed out
STEP: Dumping workload cluster kcp-upgrade-j6bp9o/kcp-upgrade-yruqxa kube-system pod logs
STEP: Fetching kube-system pod logs took 901.305429ms
STEP: Dumping workload cluster kcp-upgrade-j6bp9o/kcp-upgrade-yruqxa Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-7hczv, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-b9w82, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-yruqxa-control-plane-qqnk5, container kube-apiserver
... skipping 20 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-windows-48n46, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-bg7gg, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-yruqxa-control-plane-qqnk5, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-yruqxa-control-plane-hh2r9, container kube-scheduler
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-yruqxa-control-plane-qqnk5, container etcd
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-yruqxa-control-plane-hh2r9, container etcd
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 287.553596ms
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-j6bp9o" namespace
STEP: Deleting cluster kcp-upgrade-j6bp9o/kcp-upgrade-yruqxa
STEP: Deleting cluster kcp-upgrade-yruqxa
INFO: Waiting for the Cluster kcp-upgrade-j6bp9o/kcp-upgrade-yruqxa to be deleted
STEP: Waiting for cluster kcp-upgrade-yruqxa to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-tdjwx, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-yruqxa-control-plane-hh2r9, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-yruqxa-control-plane-qqnk5, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-ncl6p, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-yruqxa-control-plane-bgthr, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-2zk7s, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-d5d7t, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-yruqxa-control-plane-qqnk5, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-yruqxa-control-plane-bgthr, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-yruqxa-control-plane-bgthr, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-yruqxa-control-plane-bgthr, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-sgxjd, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-yruqxa-control-plane-hh2r9, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-b9w82, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-6xv78, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-bg7gg, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-48n46, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-48n46, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-lzrzj, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-lzrzj, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-yruqxa-control-plane-qqnk5, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-yruqxa-control-plane-hh2r9, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-p76vc, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-tls72, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-yruqxa-control-plane-hh2r9, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-yruqxa-control-plane-qqnk5, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-7hczv, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-z88bh, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-lwnj2, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-j6bp9o
STEP: Redacting sensitive information from logs


• [SLOW TEST:4352.900 seconds]
... skipping 62 lines ...
Nov  5 21:26:04.686: INFO: INFO: Collecting boot logs for AzureMachine md-scale-8ifhg3-md-0-dph9w

Nov  5 21:26:05.262: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster md-scale-8ifhg3 in namespace md-scale-mt9y07

Nov  5 21:26:52.331: INFO: INFO: Collecting boot logs for AzureMachine md-scale-8ifhg3-md-win-9vwhb

Failed to get logs for machine md-scale-8ifhg3-md-win-7fbd88fd4-l5zhk, cluster md-scale-mt9y07/md-scale-8ifhg3: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov  5 21:26:52.883: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster md-scale-8ifhg3 in namespace md-scale-mt9y07

Nov  5 21:28:15.448: INFO: INFO: Collecting boot logs for AzureMachine md-scale-8ifhg3-md-win-vd885

Failed to get logs for machine md-scale-8ifhg3-md-win-7fbd88fd4-v6cv7, cluster md-scale-mt9y07/md-scale-8ifhg3: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster md-scale-mt9y07/md-scale-8ifhg3 kube-system pod logs
STEP: Fetching kube-system pod logs took 1.073991544s
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-ws95w, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-6cn7l, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-5zmq8, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-fk27k, container coredns
... skipping 8 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-jgmtp, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-mgsw5, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-windows-jt5hh, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-node-windows-jt5hh, container calico-node-felix
STEP: Creating log watcher for controller kube-system/calico-node-windows-v5lxd, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-node-windows-v5lxd, container calico-node-felix
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 214.737275ms
STEP: Dumping all the Cluster API resources in the "md-scale-mt9y07" namespace
STEP: Deleting cluster md-scale-mt9y07/md-scale-8ifhg3
STEP: Deleting cluster md-scale-8ifhg3
INFO: Waiting for the Cluster md-scale-mt9y07/md-scale-8ifhg3 to be deleted
STEP: Waiting for cluster md-scale-8ifhg3 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-md-scale-8ifhg3-control-plane-8l9f6, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-jpkww, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-v5lxd, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-jt5hh, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-md-scale-8ifhg3-control-plane-8l9f6, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-jgmtp, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-md-scale-8ifhg3-control-plane-8l9f6, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-ws95w, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-d2j9g, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-6cn7l, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-v5lxd, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-md-scale-8ifhg3-control-plane-8l9f6, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5zmq8, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-jt5hh, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-fk27k, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "md-scale" test spec
INFO: Deleting namespace md-scale-mt9y07
STEP: Redacting sensitive information from logs


• [SLOW TEST:1201.014 seconds]
... skipping 63 lines ...
Nov  5 21:36:29.871: INFO: INFO: Collecting boot logs for AzureMachine machine-pool-7n6o03-control-plane-9tgc8

Nov  5 21:36:31.155: INFO: INFO: Collecting logs for node win-p-win000002 in cluster machine-pool-7n6o03 in namespace machine-pool-w02tl3

Nov  5 21:36:54.613: INFO: INFO: Collecting boot logs for VMSS instance 2 of scale set machine-pool-7n6o03-mp-0

Failed to get logs for machine pool machine-pool-7n6o03-mp-0, cluster machine-pool-w02tl3/machine-pool-7n6o03: [running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1]
Nov  5 21:36:55.156: INFO: INFO: Collecting logs for node win-p-win000002 in cluster machine-pool-7n6o03 in namespace machine-pool-w02tl3

Nov  5 21:37:40.011: INFO: INFO: Collecting boot logs for VMSS instance 2 of scale set win-p-win

Failed to get logs for machine pool machine-pool-7n6o03-mp-win, cluster machine-pool-w02tl3/machine-pool-7n6o03: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster machine-pool-w02tl3/machine-pool-7n6o03 kube-system pod logs
STEP: Fetching kube-system pod logs took 1.027729944s
STEP: Dumping workload cluster machine-pool-w02tl3/machine-pool-7n6o03 Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-h75hv, container calico-node
STEP: Creating log watcher for controller kube-system/etcd-machine-pool-7n6o03-control-plane-9tgc8, container etcd
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-p76c8, container calico-kube-controllers
... skipping 5 lines ...
STEP: Creating log watcher for controller kube-system/kube-controller-manager-machine-pool-7n6o03-control-plane-9tgc8, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-scheduler-machine-pool-7n6o03-control-plane-9tgc8, container kube-scheduler
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-mn8dt, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-wqx5b, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-vfnjp, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-c86gv, container calico-node
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 264.019229ms
STEP: Dumping all the Cluster API resources in the "machine-pool-w02tl3" namespace
STEP: Deleting cluster machine-pool-w02tl3/machine-pool-7n6o03
STEP: Deleting cluster machine-pool-7n6o03
INFO: Waiting for the Cluster machine-pool-w02tl3/machine-pool-7n6o03 to be deleted
STEP: Waiting for cluster machine-pool-7n6o03 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-xt92l, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-machine-pool-7n6o03-control-plane-9tgc8, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-mn8dt, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-c86gv, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-machine-pool-7n6o03-control-plane-9tgc8, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-tqqtn, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vfnjp, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-machine-pool-7n6o03-control-plane-9tgc8, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-p76c8, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-machine-pool-7n6o03-control-plane-9tgc8, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-tqqtn, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-2jw9t, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-h75hv, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "machine-pool" test spec
INFO: Deleting namespace machine-pool-w02tl3
STEP: Redacting sensitive information from logs


• [SLOW TEST:1904.201 seconds]
... skipping 57 lines ...
STEP: Dumping logs from the "node-drain-5lb8xm" workload cluster
STEP: Dumping workload cluster node-drain-nbtpny/node-drain-5lb8xm logs
Nov  5 21:43:48.783: INFO: INFO: Collecting logs for node node-drain-5lb8xm-control-plane-zvrzj in cluster node-drain-5lb8xm in namespace node-drain-nbtpny

Nov  5 21:45:58.823: INFO: INFO: Collecting boot logs for AzureMachine node-drain-5lb8xm-control-plane-zvrzj

Failed to get logs for machine node-drain-5lb8xm-control-plane-mpw5s, cluster node-drain-nbtpny/node-drain-5lb8xm: dialing public load balancer at node-drain-5lb8xm-15adca85.westeurope.cloudapp.azure.com: dial tcp 20.67.124.63:22: connect: connection timed out
STEP: Dumping workload cluster node-drain-nbtpny/node-drain-5lb8xm kube-system pod logs
STEP: Fetching kube-system pod logs took 931.533842ms
STEP: Dumping workload cluster node-drain-nbtpny/node-drain-5lb8xm Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-bqdlm, container coredns
STEP: Creating log watcher for controller kube-system/etcd-node-drain-5lb8xm-control-plane-zvrzj, container etcd
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-wpqk9, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-proxy-58bcq, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-6bjv2, container coredns
STEP: Creating log watcher for controller kube-system/kube-scheduler-node-drain-5lb8xm-control-plane-zvrzj, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-controller-manager-node-drain-5lb8xm-control-plane-zvrzj, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-node-rmgfg, container calico-node
STEP: Creating log watcher for controller kube-system/kube-apiserver-node-drain-5lb8xm-control-plane-zvrzj, container kube-apiserver
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 297.76631ms
STEP: Dumping all the Cluster API resources in the "node-drain-nbtpny" namespace
STEP: Deleting cluster node-drain-nbtpny/node-drain-5lb8xm
STEP: Deleting cluster node-drain-5lb8xm
INFO: Waiting for the Cluster node-drain-nbtpny/node-drain-5lb8xm to be deleted
STEP: Waiting for cluster node-drain-5lb8xm to be deleted
... skipping 145 lines ...
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-gr2s5, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-proxy-66hwm, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-6gng4, container coredns
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-wsh5p, container coredns
STEP: Creating log watcher for controller kube-system/etcd-clusterctl-upgrade-9xuqc8-control-plane-95nzl, container etcd
STEP: Creating log watcher for controller kube-system/kube-apiserver-clusterctl-upgrade-9xuqc8-control-plane-95nzl, container kube-apiserver
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 247.073078ms
STEP: Dumping all the Cluster API resources in the "clusterctl-upgrade-q7lpyc" namespace
STEP: Deleting cluster clusterctl-upgrade-q7lpyc/clusterctl-upgrade-9xuqc8
STEP: Deleting cluster clusterctl-upgrade-9xuqc8
INFO: Waiting for the Cluster clusterctl-upgrade-q7lpyc/clusterctl-upgrade-9xuqc8 to be deleted
STEP: Waiting for cluster clusterctl-upgrade-9xuqc8 to be deleted
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-6bdc78c4d4-v87nc, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-66hwm, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-8vvmq, container calico-node: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-86b5f554dd-k68l4, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-c6dcf76d4-kgbnb, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-865c969d7-kvrfg, container manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "clusterctl-upgrade" test spec
INFO: Deleting namespace clusterctl-upgrade-q7lpyc
STEP: Redacting sensitive information from logs


• [SLOW TEST:1780.858 seconds]
Running the Cluster API E2E tests
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:40
  upgrade from v1alpha3 to v1beta1, and scale workload clusters created in v1alpha3
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:234
    Should create a management cluster and then upgrade all the providers
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/e2e/clusterctl_upgrade.go:145
------------------------------
{"component":"entrypoint","file":"prow/entrypoint/run.go:169","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","severity":"error","time":"2021-11-05T23:00:33Z"}
++ early_exit_handler
++ '[' -n 163 ']'
++ kill -TERM 163
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 5 lines ...