PR shysank: [WIP] Increase parallelism for e2e tests
Result FAILURE
Tests 0 failed / 12 succeeded
Started 2021-11-05 23:00
Elapsed 4h15m
Revision 71773565512673c7857e1d7ac9d7cce30eabde82
Refs 1816

No Test Failures!


12 Passed Tests

10 Skipped Tests

Error lines from build-log.txt

... skipping 485 lines ...
Nov  5 23:20:59.627: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-kplfns-md-0-ay4aih-cfrqr

Nov  5 23:21:00.486: INFO: INFO: Collecting logs for node 10.1.0.8 in cluster md-rollout-kplfns in namespace md-rollout-orida9

Nov  5 23:21:34.627: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-kplfns-md-win-mnu655-xwj6b

Failed to get logs for machine md-rollout-kplfns-md-win-5cf69d8bd9-prh5t, cluster md-rollout-orida9/md-rollout-kplfns: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Failed to get logs for machine md-rollout-kplfns-md-win-7449c589d8-8tkqc, cluster md-rollout-orida9/md-rollout-kplfns: azuremachines.infrastructure.cluster.x-k8s.io "md-rollout-kplfns-md-win-6cjf5" not found
Nov  5 23:21:35.228: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster md-rollout-kplfns in namespace md-rollout-orida9

Nov  5 23:22:47.051: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-kplfns-md-win-kd2dh

Failed to get logs for machine md-rollout-kplfns-md-win-7449c589d8-qvdpg, cluster md-rollout-orida9/md-rollout-kplfns: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
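
Both Windows machines fail log collection the same way: "get-eventlog ... -Source Docker" and "docker ps -a" each exit with status 1, the signature of a node with no Docker service (for instance, a containerd-based Windows image). A minimal Go sketch of probing the runtime with a containerd fallback; the runCommand helper and the crictl fallback are assumptions for illustration, not the collector's actual API:

    // collectContainerLogs probes for a working Docker daemon before
    // running Docker-specific commands, and falls back to crictl on
    // containerd-only Windows nodes. runCommand stands in for whatever
    // executes a command over the node's SSH session (hypothetical).
    func collectContainerLogs(runCommand func(cmd string) ([]byte, error)) ([]byte, error) {
        // "docker ps -a" exits non-zero when the Docker service is
        // absent, matching the "Process exited with status 1" lines above.
        if out, err := runCommand("docker ps -a"); err == nil {
            return out, nil
        }
        return runCommand("crictl ps -a")
    }
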
STEP: Dumping workload cluster md-rollout-orida9/md-rollout-kplfns kube-system pod logs
STEP: Fetching kube-system pod logs took 1.00104816s
STEP: Creating log watcher for controller kube-system/calico-node-mkdqz, container calico-node
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-c84mc, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-dw6pk, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-windows-v4frk, container calico-node-startup
... skipping 8 lines ...
STEP: Creating log watcher for controller kube-system/kube-apiserver-md-rollout-kplfns-control-plane-fzn8l, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-zkl6s, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-q4fnm, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-md-rollout-kplfns-control-plane-fzn8l, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-scheduler-md-rollout-kplfns-control-plane-fzn8l, container kube-scheduler
STEP: Dumping workload cluster md-rollout-orida9/md-rollout-kplfns Azure activity log
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 210.540005ms
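
Every "Error fetching activity logs for resource group :" line in this run (it repeats once per spec) has a blank resource-group name before the colon, which is exactly what the 400 complains about: an empty resourceGroupName in the $filter. A minimal sketch of the guard that would turn this into a skip instead of a server-side rejection, assuming the autorest-based monitor/insights client named in the error (the import path varies across azure-sdk-for-go versions):

    package main

    import (
        "context"
        "fmt"
        "strings"
        "time"

        // Import path is an assumption; it differs by SDK version.
        "github.com/Azure/azure-sdk-for-go/services/preview/monitor/mgmt/2019-06-01/insights"
    )

    // fetchActivityLogs validates the resource group before building the
    // $filter, since an empty resourceGroupName is rejected with a 400.
    func fetchActivityLogs(ctx context.Context, client insights.ActivityLogsClient, rg string, start, end time.Time) error {
        if strings.TrimSpace(rg) == "" {
            return fmt.Errorf("skipping activity logs: resource group name is empty")
        }
        filter := fmt.Sprintf(
            "eventTimestamp ge '%s' and eventTimestamp le '%s' and resourceGroupName eq '%s'",
            start.Format(time.RFC3339), end.Format(time.RFC3339), rg)
        _, err := client.List(ctx, filter, "")
        return err
    }
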
STEP: Dumping all the Cluster API resources in the "md-rollout-orida9" namespace
STEP: Deleting cluster md-rollout-orida9/md-rollout-kplfns
STEP: Deleting cluster md-rollout-kplfns
INFO: Waiting for the Cluster md-rollout-orida9/md-rollout-kplfns to be deleted
STEP: Waiting for cluster md-rollout-kplfns to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-md-rollout-kplfns-control-plane-fzn8l, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-md-rollout-kplfns-control-plane-fzn8l, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-x64mz, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-dw6pk, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-jhbbx, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-zkl6s, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-v4frk, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-q4fnm, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-c2vtf, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-mkdqz, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-c2vtf, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-c84mc, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-md-rollout-kplfns-control-plane-fzn8l, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-md-rollout-kplfns-control-plane-fzn8l, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-28lnz, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-v4frk, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-hl2pl, container calico-node: http2: client connection lost
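
The burst of "http2: client connection lost" lines is expected at this point: the log watchers hold Follow streams open against the workload cluster's API server while that cluster is being deleted, so the streams die with the connection rather than with a clean EOF. A minimal client-go sketch of a watcher that treats this as a benign end of stream during teardown; the error-string match is a simplification for illustration:

    package main

    import (
        "context"
        "io"
        "os"
        "strings"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
    )

    // watchLogs follows a pod's logs; when the workload API server
    // disappears under the watcher during cluster deletion, the copy
    // fails with "http2: client connection lost" rather than EOF.
    func watchLogs(ctx context.Context, cs kubernetes.Interface, ns, pod, container string) error {
        req := cs.CoreV1().Pods(ns).GetLogs(pod, &corev1.PodLogOptions{Container: container, Follow: true})
        stream, err := req.Stream(ctx)
        if err != nil {
            return err
        }
        defer stream.Close()
        _, err = io.Copy(os.Stdout, stream)
        if err != nil && strings.Contains(err.Error(), "client connection lost") {
            return nil // expected while the cluster is being torn down
        }
        return err
    }
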
STEP: Deleting namespace used for hosting the "md-rollout" test spec
INFO: Deleting namespace md-rollout-orida9
STEP: Redacting sensitive information from logs


• [SLOW TEST:1365.378 seconds]
... skipping 92 lines ...
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-nkscq, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/calico-node-f45nd, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-9dnqh, container calico-node
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-4lc4hn-control-plane-rtkts, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-node-ljbnd, container calico-node
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-4lc4hn-control-plane-brbh9, container kube-controller-manager
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 326.894794ms
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-ix2gfa" namespace
STEP: Deleting cluster kcp-upgrade-ix2gfa/kcp-upgrade-4lc4hn
STEP: Deleting cluster kcp-upgrade-4lc4hn
INFO: Waiting for the Cluster kcp-upgrade-ix2gfa/kcp-upgrade-4lc4hn to be deleted
STEP: Waiting for cluster kcp-upgrade-4lc4hn to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-59vqt, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-4lc4hn-control-plane-8dgmk, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-4lc4hn-control-plane-8dgmk, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-4lc4hn-control-plane-8dgmk, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-hkngz, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-4lc4hn-control-plane-8dgmk, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-czblc, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-ix2gfa
STEP: Redacting sensitive information from logs


• [SLOW TEST:2007.615 seconds]
... skipping 8 lines ...
  Should pivot the bootstrap cluster to a self-hosted cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:110

Node Id (1 Indexed): 4
STEP: Creating namespace "self-hosted" for hosting the cluster
Nov  5 23:30:30.489: INFO: starting to create namespace for hosting the "self-hosted" test spec
2021/11/05 23:30:30 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-xn4zai" using the "management" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-xn4zai --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 1 --worker-machine-count 1 --flavor management
... skipping 144 lines ...
STEP: Dumping logs from the "kcp-upgrade-vs4gyl" workload cluster
STEP: Dumping workload cluster kcp-upgrade-locp89/kcp-upgrade-vs4gyl logs
Nov  5 23:23:04.560: INFO: INFO: Collecting logs for node kcp-upgrade-vs4gyl-control-plane-64gvt in cluster kcp-upgrade-vs4gyl in namespace kcp-upgrade-locp89

Nov  5 23:25:15.096: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-vs4gyl-control-plane-64gvt

Failed to get logs for machine kcp-upgrade-vs4gyl-control-plane-g8l8m, cluster kcp-upgrade-locp89/kcp-upgrade-vs4gyl: dialing public load balancer at kcp-upgrade-vs4gyl-392ef205.northeurope.cloudapp.azure.com: dial tcp 20.67.174.73:22: connect: connection timed out
Nov  5 23:25:16.450: INFO: INFO: Collecting logs for node kcp-upgrade-vs4gyl-md-0-phsln in cluster kcp-upgrade-vs4gyl in namespace kcp-upgrade-locp89

Nov  5 23:27:26.163: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-vs4gyl-md-0-phsln

Failed to get logs for machine kcp-upgrade-vs4gyl-md-0-5597c5c664-tcq5p, cluster kcp-upgrade-locp89/kcp-upgrade-vs4gyl: dialing public load balancer at kcp-upgrade-vs4gyl-392ef205.northeurope.cloudapp.azure.com: dial tcp 20.67.174.73:22: connect: connection timed out
Nov  5 23:27:27.567: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster kcp-upgrade-vs4gyl in namespace kcp-upgrade-locp89

Nov  5 23:33:59.379: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-vs4gyl-md-win-67bbb

Failed to get logs for machine kcp-upgrade-vs4gyl-md-win-68f7d6967f-2sjns, cluster kcp-upgrade-locp89/kcp-upgrade-vs4gyl: dialing public load balancer at kcp-upgrade-vs4gyl-392ef205.northeurope.cloudapp.azure.com: dial tcp 20.67.174.73:22: connect: connection timed out
Nov  5 23:34:00.532: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster kcp-upgrade-vs4gyl in namespace kcp-upgrade-locp89

Nov  5 23:40:32.595: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-vs4gyl-md-win-zqf8p

Failed to get logs for machine kcp-upgrade-vs4gyl-md-win-68f7d6967f-mpmnz, cluster kcp-upgrade-locp89/kcp-upgrade-vs4gyl: dialing public load balancer at kcp-upgrade-vs4gyl-392ef205.northeurope.cloudapp.azure.com: dial tcp 20.67.174.73:22: connect: connection timed out
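
Every machine in this kcp-upgrade-vs4gyl dump fails on the same dial: TCP 22 on the cluster's public load balancer never answers, and each attempt waits out the OS connect timeout (the timestamps show roughly two minutes per Linux node and longer for the Windows nodes, where collection appears to retry). A minimal sketch of bounding that wait with an explicit dial timeout; the 30-second value is an arbitrary illustration:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // dialSSH fails fast when the load balancer is unreachable instead
    // of waiting out the kernel's default TCP connect timeout.
    func dialSSH(addr string) (net.Conn, error) {
        conn, err := net.DialTimeout("tcp", addr, 30*time.Second)
        if err != nil {
            return nil, fmt.Errorf("dialing public load balancer at %s: %w", addr, err)
        }
        return conn, nil
    }
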
STEP: Dumping workload cluster kcp-upgrade-locp89/kcp-upgrade-vs4gyl kube-system pod logs
STEP: Fetching kube-system pod logs took 1.031344752s
STEP: Dumping workload cluster kcp-upgrade-locp89/kcp-upgrade-vs4gyl Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-mmf6j, container coredns
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-bx5d6, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-rdxh9, container coredns
... skipping 8 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-t8xgz, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-vs4gyl-control-plane-64gvt, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-node-windows-8547k, container calico-node-felix
STEP: Creating log watcher for controller kube-system/calico-node-windows-zvw5c, container calico-node-felix
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-wrrnv, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-vs4gyl-control-plane-64gvt, container kube-scheduler
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 224.850099ms
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-locp89" namespace
STEP: Deleting cluster kcp-upgrade-locp89/kcp-upgrade-vs4gyl
STEP: Deleting cluster kcp-upgrade-vs4gyl
INFO: Waiting for the Cluster kcp-upgrade-locp89/kcp-upgrade-vs4gyl to be deleted
STEP: Waiting for cluster kcp-upgrade-vs4gyl to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5zgf4, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-gb6qx, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-wrrnv, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-vs4gyl-control-plane-64gvt, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-zvw5c, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-fgq79, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-mmf6j, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-t8xgz, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-rdxh9, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-8547k, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-bx5d6, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-zvw5c, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-vs4gyl-control-plane-64gvt, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-lxfq7, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-vs4gyl-control-plane-64gvt, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-8547k, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-vs4gyl-control-plane-64gvt, container etcd: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-locp89
STEP: Redacting sensitive information from logs


• [SLOW TEST:2388.859 seconds]
... skipping 53 lines ...
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-adoption-jt5pro-control-plane-0, container kube-scheduler
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-n9pjm, container coredns
STEP: Creating log watcher for controller kube-system/etcd-kcp-adoption-jt5pro-control-plane-0, container etcd
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-adoption-jt5pro-control-plane-0, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-node-z878n, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-dxh2k, container coredns
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 211.243303ms
STEP: Dumping all the Cluster API resources in the "kcp-adoption-fzaivo" namespace
STEP: Error starting logs stream for pod kube-system/coredns-78fcd69978-n9pjm, container coredns: container "coredns" in pod "coredns-78fcd69978-n9pjm" is waiting to start: ContainerCreating
STEP: Error starting logs stream for pod kube-system/coredns-78fcd69978-dxh2k, container coredns: container "coredns" in pod "coredns-78fcd69978-dxh2k" is waiting to start: ContainerCreating
STEP: Error starting logs stream for pod kube-system/calico-kube-controllers-846b5f484d-jflxj, container calico-kube-controllers: container "calico-kube-controllers" in pod "calico-kube-controllers-846b5f484d-jflxj" is waiting to start: ContainerCreating
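
Unlike the streaming failures elsewhere in this log, these three streams never open: the coredns and calico-kube-controllers containers are still in ContainerCreating when the dump runs. A minimal client-go sketch of checking container state first, so a not-yet-started container is skipped rather than surfaced as an error:

    import corev1 "k8s.io/api/core/v1"

    // containerStarted reports whether a container has ever started;
    // requesting logs for a waiting container fails with
    // `is waiting to start: ContainerCreating`.
    func containerStarted(pod *corev1.Pod, container string) bool {
        for _, cs := range pod.Status.ContainerStatuses {
            if cs.Name == container {
                return cs.State.Running != nil || cs.State.Terminated != nil
            }
        }
        return false
    }
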
STEP: Deleting cluster kcp-adoption-fzaivo/kcp-adoption-jt5pro
STEP: Deleting cluster kcp-adoption-jt5pro
INFO: Waiting for the Cluster kcp-adoption-fzaivo/kcp-adoption-jt5pro to be deleted
STEP: Waiting for cluster kcp-adoption-jt5pro to be deleted
STEP: Deleting namespace used for hosting the "kcp-adoption" test spec
INFO: Deleting namespace kcp-adoption-fzaivo
... skipping 73 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-nncxt, container calico-node
STEP: Creating log watcher for controller kube-system/etcd-mhc-remediation-aklt2r-control-plane-q8t7f, container etcd
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-2ll7r, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-jwppq, container coredns
STEP: Creating log watcher for controller kube-system/kube-controller-manager-mhc-remediation-aklt2r-control-plane-q8t7f, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-aklt2r-control-plane-q8t7f, container kube-scheduler
STEP: Error starting logs stream for pod kube-system/calico-node-nncxt, container calico-node: container "calico-node" in pod "calico-node-nncxt" is waiting to start: PodInitializing
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 385.779306ms
STEP: Dumping all the Cluster API resources in the "mhc-remediation-53a7h6" namespace
STEP: Deleting cluster mhc-remediation-53a7h6/mhc-remediation-aklt2r
STEP: Deleting cluster mhc-remediation-aklt2r
INFO: Waiting for the Cluster mhc-remediation-53a7h6/mhc-remediation-aklt2r to be deleted
STEP: Waiting for cluster mhc-remediation-aklt2r to be deleted
... skipping 65 lines ...
STEP: Dumping logs from the "kcp-upgrade-gj5ko8" workload cluster
STEP: Dumping workload cluster kcp-upgrade-o7ksn8/kcp-upgrade-gj5ko8 logs
Nov  5 23:35:29.292: INFO: INFO: Collecting logs for node kcp-upgrade-gj5ko8-control-plane-pjb4f in cluster kcp-upgrade-gj5ko8 in namespace kcp-upgrade-o7ksn8

Nov  5 23:37:38.519: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-gj5ko8-control-plane-pjb4f

Failed to get logs for machine kcp-upgrade-gj5ko8-control-plane-mxpfz, cluster kcp-upgrade-o7ksn8/kcp-upgrade-gj5ko8: dialing public load balancer at kcp-upgrade-gj5ko8-f3520c93.northeurope.cloudapp.azure.com: dial tcp 20.67.173.68:22: connect: connection timed out
Nov  5 23:37:40.050: INFO: INFO: Collecting logs for node kcp-upgrade-gj5ko8-control-plane-b8jrv in cluster kcp-upgrade-gj5ko8 in namespace kcp-upgrade-o7ksn8

Nov  5 23:39:49.587: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-gj5ko8-control-plane-b8jrv

Failed to get logs for machine kcp-upgrade-gj5ko8-control-plane-qckx6, cluster kcp-upgrade-o7ksn8/kcp-upgrade-gj5ko8: dialing public load balancer at kcp-upgrade-gj5ko8-f3520c93.northeurope.cloudapp.azure.com: dial tcp 20.67.173.68:22: connect: connection timed out
Nov  5 23:39:51.028: INFO: INFO: Collecting logs for node kcp-upgrade-gj5ko8-control-plane-frw5q in cluster kcp-upgrade-gj5ko8 in namespace kcp-upgrade-o7ksn8

Nov  5 23:42:00.664: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-gj5ko8-control-plane-frw5q

Failed to get logs for machine kcp-upgrade-gj5ko8-control-plane-z22q6, cluster kcp-upgrade-o7ksn8/kcp-upgrade-gj5ko8: dialing public load balancer at kcp-upgrade-gj5ko8-f3520c93.northeurope.cloudapp.azure.com: dial tcp 20.67.173.68:22: connect: connection timed out
Nov  5 23:42:02.122: INFO: INFO: Collecting logs for node kcp-upgrade-gj5ko8-md-0-hzsc7 in cluster kcp-upgrade-gj5ko8 in namespace kcp-upgrade-o7ksn8

Nov  5 23:44:11.732: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-gj5ko8-md-0-hzsc7

Failed to get logs for machine kcp-upgrade-gj5ko8-md-0-575f6b6669-cqmt7, cluster kcp-upgrade-o7ksn8/kcp-upgrade-gj5ko8: dialing public load balancer at kcp-upgrade-gj5ko8-f3520c93.northeurope.cloudapp.azure.com: dial tcp 20.67.173.68:22: connect: connection timed out
Nov  5 23:44:13.010: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster kcp-upgrade-gj5ko8 in namespace kcp-upgrade-o7ksn8

Nov  5 23:50:44.948: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-gj5ko8-md-win-wtkg5

Failed to get logs for machine kcp-upgrade-gj5ko8-md-win-7767b46dc8-94dkq, cluster kcp-upgrade-o7ksn8/kcp-upgrade-gj5ko8: dialing public load balancer at kcp-upgrade-gj5ko8-f3520c93.northeurope.cloudapp.azure.com: dial tcp 20.67.173.68:22: connect: connection timed out
Nov  5 23:50:46.266: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster kcp-upgrade-gj5ko8 in namespace kcp-upgrade-o7ksn8

Nov  5 23:57:18.164: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-gj5ko8-md-win-lqrgq

Failed to get logs for machine kcp-upgrade-gj5ko8-md-win-7767b46dc8-qwv6l, cluster kcp-upgrade-o7ksn8/kcp-upgrade-gj5ko8: dialing public load balancer at kcp-upgrade-gj5ko8-f3520c93.northeurope.cloudapp.azure.com: dial tcp 20.67.173.68:22: connect: connection timed out
STEP: Dumping workload cluster kcp-upgrade-o7ksn8/kcp-upgrade-gj5ko8 kube-system pod logs
STEP: Creating log watcher for controller kube-system/calico-node-c7hn9, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-gffzw, container coredns
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-wv5sb, container coredns
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-2mv7v, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/calico-node-windows-npbqx, container calico-node-startup
... skipping 20 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-pjftj, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-gj5ko8-control-plane-frw5q, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-njsrp, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-gj5ko8-control-plane-b8jrv, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-gj5ko8-control-plane-frw5q, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-gj5ko8-control-plane-pjb4f, container kube-controller-manager
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 436.16461ms
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-o7ksn8" namespace
STEP: Deleting cluster kcp-upgrade-o7ksn8/kcp-upgrade-gj5ko8
STEP: Deleting cluster kcp-upgrade-gj5ko8
INFO: Waiting for the Cluster kcp-upgrade-o7ksn8/kcp-upgrade-gj5ko8 to be deleted
STEP: Waiting for cluster kcp-upgrade-gj5ko8 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-gj5ko8-control-plane-pjb4f, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-gj5ko8-control-plane-pjb4f, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-mcwhs, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-njsrp, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-gj5ko8-control-plane-pjb4f, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-hjdh9, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-gj5ko8-control-plane-pjb4f, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-pjftj, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-o7ksn8
STEP: Redacting sensitive information from logs


• [SLOW TEST:3408.326 seconds]
... skipping 91 lines ...
STEP: Creating log watcher for controller kube-system/kube-controller-manager-mhc-remediation-m2zpfz-control-plane-sdchg, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-87jmr, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-l9ntz, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-kj76m, container coredns
STEP: Creating log watcher for controller kube-system/etcd-mhc-remediation-m2zpfz-control-plane-42glm, container etcd
STEP: Creating log watcher for controller kube-system/etcd-mhc-remediation-m2zpfz-control-plane-7jnbr, container etcd
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 182.819624ms
STEP: Dumping all the Cluster API resources in the "mhc-remediation-f94s6k" namespace
STEP: Error starting logs stream for pod kube-system/calico-node-wf5zj, container calico-node: container "calico-node" in pod "calico-node-wf5zj" is waiting to start: PodInitializing
STEP: Deleting cluster mhc-remediation-f94s6k/mhc-remediation-m2zpfz
STEP: Deleting cluster mhc-remediation-m2zpfz
INFO: Waiting for the Cluster mhc-remediation-f94s6k/mhc-remediation-m2zpfz to be deleted
STEP: Waiting for cluster mhc-remediation-m2zpfz to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-m2zpfz-control-plane-7jnbr, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-jcb2w, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-m2zpfz-control-plane-42glm, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-m2zpfz-control-plane-sdchg, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-f65ng, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-kc5mr, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-m2zpfz-control-plane-42glm, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-m2zpfz-control-plane-sdchg, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-m2zpfz-control-plane-sdchg, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-m2zpfz-control-plane-42glm, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-m2zpfz-control-plane-sdchg, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-87jmr, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-l9ntz, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-bs89z, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-m2zpfz-control-plane-7jnbr, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-m2zpfz-control-plane-42glm, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-87ln5, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-m2zpfz-control-plane-7jnbr, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-8fn2q, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-4mwxw, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-m2zpfz-control-plane-7jnbr, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-kj76m, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
INFO: Deleting namespace mhc-remediation-f94s6k
STEP: Redacting sensitive information from logs


• [SLOW TEST:1270.393 seconds]
... skipping 61 lines ...
Nov  6 00:18:09.384: INFO: INFO: Collecting boot logs for AzureMachine machine-pool-fd40j8-control-plane-bkmvs

Nov  6 00:18:10.635: INFO: INFO: Collecting logs for node win-p-win000002 in cluster machine-pool-fd40j8 in namespace machine-pool-n56hp3

Nov  6 00:18:21.074: INFO: INFO: Collecting boot logs for VMSS instance 2 of scale set machine-pool-fd40j8-mp-0

Failed to get logs for machine pool machine-pool-fd40j8-mp-0, cluster machine-pool-n56hp3/machine-pool-fd40j8: [running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1]
Nov  6 00:18:21.697: INFO: INFO: Collecting logs for node win-p-win000002 in cluster machine-pool-fd40j8 in namespace machine-pool-n56hp3

Nov  6 00:18:44.446: INFO: INFO: Collecting boot logs for VMSS instance 2 of scale set win-p-win

Failed to get logs for machine pool machine-pool-fd40j8-mp-win, cluster machine-pool-n56hp3/machine-pool-fd40j8: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster machine-pool-n56hp3/machine-pool-fd40j8 kube-system pod logs
STEP: Fetching kube-system pod logs took 1.032042667s
STEP: Dumping workload cluster machine-pool-n56hp3/machine-pool-fd40j8 Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-w6dbm, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-bjvm4, container kube-proxy
STEP: Creating log watcher for controller kube-system/etcd-machine-pool-fd40j8-control-plane-bkmvs, container etcd
... skipping 9 lines ...
STEP: Creating log watcher for controller kube-system/kube-scheduler-machine-pool-fd40j8-control-plane-bkmvs, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-m7rrt, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-l8f4z, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-windows-phr9z, container calico-node-felix
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-ph7k2, container coredns
STEP: Creating log watcher for controller kube-system/kube-apiserver-machine-pool-fd40j8-control-plane-bkmvs, container kube-apiserver
STEP: Error starting logs stream for pod kube-system/calico-node-s7kq6, container calico-node: pods "machine-pool-fd40j8-mp-0000000" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-lk27h, container kube-proxy: pods "machine-pool-fd40j8-mp-0000000" not found
STEP: Error starting logs stream for pod kube-system/calico-node-l8f4z, container calico-node: pods "machine-pool-fd40j8-mp-0000001" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-m7rrt, container kube-proxy: pods "machine-pool-fd40j8-mp-0000001" not found
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 211.981688ms
STEP: Dumping all the Cluster API resources in the "machine-pool-n56hp3" namespace
STEP: Deleting cluster machine-pool-n56hp3/machine-pool-fd40j8
STEP: Deleting cluster machine-pool-fd40j8
INFO: Waiting for the Cluster machine-pool-n56hp3/machine-pool-fd40j8 to be deleted
STEP: Waiting for cluster machine-pool-fd40j8 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-vhxt9, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-phr9z, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-7fmvs, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-machine-pool-fd40j8-control-plane-bkmvs, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-machine-pool-fd40j8-control-plane-bkmvs, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-phr9z, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-k8wrz, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-machine-pool-fd40j8-control-plane-bkmvs, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-ph7k2, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-machine-pool-fd40j8-control-plane-bkmvs, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-w6dbm, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-bjvm4, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "machine-pool" test spec
INFO: Deleting namespace machine-pool-n56hp3
STEP: Redacting sensitive information from logs


• [SLOW TEST:1646.509 seconds]
... skipping 64 lines ...
Nov  6 00:11:07.876: INFO: INFO: Collecting boot logs for AzureMachine md-scale-bxcoe7-md-0-vp5b4

Nov  6 00:11:08.304: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster md-scale-bxcoe7 in namespace md-scale-g304iv

Nov  6 00:11:46.191: INFO: INFO: Collecting boot logs for AzureMachine md-scale-bxcoe7-md-win-28d9x

Failed to get logs for machine md-scale-bxcoe7-md-win-7bfc997f79-6vw6v, cluster md-scale-g304iv/md-scale-bxcoe7: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov  6 00:11:46.619: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster md-scale-bxcoe7 in namespace md-scale-g304iv

Nov  6 00:12:17.610: INFO: INFO: Collecting boot logs for AzureMachine md-scale-bxcoe7-md-win-rxv92

Failed to get logs for machine md-scale-bxcoe7-md-win-7bfc997f79-9v5sr, cluster md-scale-g304iv/md-scale-bxcoe7: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster md-scale-g304iv/md-scale-bxcoe7 kube-system pod logs
STEP: Fetching kube-system pod logs took 1.082031327s
STEP: Creating log watcher for controller kube-system/etcd-md-scale-bxcoe7-control-plane-vq5wz, container etcd
STEP: Creating log watcher for controller kube-system/kube-apiserver-md-scale-bxcoe7-control-plane-vq5wz, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-controller-manager-md-scale-bxcoe7-control-plane-vq5wz, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-node-windows-qs4nf, container calico-node-startup
... skipping 8 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-windows-qs4nf, container calico-node-felix
STEP: Creating log watcher for controller kube-system/kube-proxy-ms7nl, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-m8ffh, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-sk6cz, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-nzb9w, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-md-scale-bxcoe7-control-plane-vq5wz, container kube-scheduler
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 200.504659ms
STEP: Dumping all the Cluster API resources in the "md-scale-g304iv" namespace
STEP: Deleting cluster md-scale-g304iv/md-scale-bxcoe7
STEP: Deleting cluster md-scale-bxcoe7
INFO: Waiting for the Cluster md-scale-g304iv/md-scale-bxcoe7 to be deleted
STEP: Waiting for cluster md-scale-bxcoe7 to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-k6xkl, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-m2nhr, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-qs4nf, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-gbcvs, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-cm82j, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-nzb9w, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-m8ffh, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-md-scale-bxcoe7-control-plane-vq5wz, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-md-scale-bxcoe7-control-plane-vq5wz, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-sk6cz, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-md-scale-bxcoe7-control-plane-vq5wz, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-qs4nf, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-sfn4l, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-29sd9, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-md-scale-bxcoe7-control-plane-vq5wz, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-cm82j, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-ms7nl, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "md-scale" test spec
INFO: Deleting namespace md-scale-g304iv
STEP: Redacting sensitive information from logs


• [SLOW TEST:1585.680 seconds]
... skipping 57 lines ...
STEP: Dumping logs from the "node-drain-o2gilr" workload cluster
STEP: Dumping workload cluster node-drain-f3e9qb/node-drain-o2gilr logs
Nov  6 00:23:40.263: INFO: INFO: Collecting logs for node node-drain-o2gilr-control-plane-dkdpr in cluster node-drain-o2gilr in namespace node-drain-f3e9qb

Nov  6 00:25:50.296: INFO: INFO: Collecting boot logs for AzureMachine node-drain-o2gilr-control-plane-dkdpr

Failed to get logs for machine node-drain-o2gilr-control-plane-b4whj, cluster node-drain-f3e9qb/node-drain-o2gilr: dialing public load balancer at node-drain-o2gilr-e930dad5.northeurope.cloudapp.azure.com: dial tcp 20.67.189.13:22: connect: connection timed out
STEP: Dumping workload cluster node-drain-f3e9qb/node-drain-o2gilr kube-system pod logs
STEP: Fetching kube-system pod logs took 895.757616ms
STEP: Dumping workload cluster node-drain-f3e9qb/node-drain-o2gilr Azure activity log
STEP: Creating log watcher for controller kube-system/kube-controller-manager-node-drain-o2gilr-control-plane-dkdpr, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/etcd-node-drain-o2gilr-control-plane-dkdpr, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-dcnh7, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-node-drain-o2gilr-control-plane-dkdpr, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-apiserver-node-drain-o2gilr-control-plane-dkdpr, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-node-t5f2z, container calico-node
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-8fsg2, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-k9dth, container coredns
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-vwsbx, container coredns
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 293.205591ms
STEP: Dumping all the Cluster API resources in the "node-drain-f3e9qb" namespace
STEP: Deleting cluster node-drain-f3e9qb/node-drain-o2gilr
STEP: Deleting cluster node-drain-o2gilr
INFO: Waiting for the Cluster node-drain-f3e9qb/node-drain-o2gilr to be deleted
STEP: Waiting for cluster node-drain-o2gilr to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-node-drain-o2gilr-control-plane-dkdpr, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-vwsbx, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-t5f2z, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-node-drain-o2gilr-control-plane-dkdpr, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-k9dth, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-node-drain-o2gilr-control-plane-dkdpr, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-dcnh7, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-8fsg2, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-node-drain-o2gilr-control-plane-dkdpr, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "node-drain" test spec
INFO: Deleting namespace node-drain-f3e9qb
STEP: Redacting sensitive information from logs


• [SLOW TEST:1728.889 seconds]
... skipping 140 lines ...
STEP: Dumping workload cluster clusterctl-upgrade-bcrf55/clusterctl-upgrade-60ijsu Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-fmwbf, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/calico-node-czjgv, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-t9jpt, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-6d9w7, container coredns
STEP: Creating log watcher for controller kube-system/kube-controller-manager-clusterctl-upgrade-60ijsu-control-plane-nsb8m, container kube-controller-manager
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 229.322967ms
STEP: Dumping all the Cluster API resources in the "clusterctl-upgrade-bcrf55" namespace
STEP: Deleting cluster clusterctl-upgrade-bcrf55/clusterctl-upgrade-60ijsu
STEP: Deleting cluster clusterctl-upgrade-60ijsu
INFO: Waiting for the Cluster clusterctl-upgrade-bcrf55/clusterctl-upgrade-60ijsu to be deleted
STEP: Waiting for cluster clusterctl-upgrade-60ijsu to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-clusterctl-upgrade-60ijsu-control-plane-nsb8m, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-6d9w7, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-swnpf, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-rpgg6, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-t9jpt, container calico-node: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-865c969d7-hmxvn, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-czjgv, container calico-node: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-86b5f554dd-q8p6j, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-clusterctl-upgrade-60ijsu-control-plane-nsb8m, container kube-controller-manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-6bdc78c4d4-wtbxs, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-7d74cfdb6d-skbzc, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-clusterctl-upgrade-60ijsu-control-plane-nsb8m, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-fmwbf, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-tx5sg, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-clusterctl-upgrade-60ijsu-control-plane-nsb8m, container etcd: http2: client connection lost
STEP: Deleting namespace used for hosting the "clusterctl-upgrade" test spec
INFO: Deleting namespace clusterctl-upgrade-bcrf55
STEP: Redacting sensitive information from logs


• [SLOW TEST:2021.641 seconds]
Running the Cluster API E2E tests
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:40
  upgrade from v1alpha3 to v1beta1, and scale workload clusters created in v1alpha3
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:234
    Should create a management cluster and then upgrade all the providers
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/e2e/clusterctl_upgrade.go:145
------------------------------
{"component":"entrypoint","file":"prow/entrypoint/run.go:165","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 4h0m0s timeout","severity":"error","time":"2021-11-06T03:00:45Z"}
++ early_exit_handler
++ '[' -n 164 ']'
++ kill -TERM 164
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 12 lines ...
Cleaning up after docker
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Stopping Docker: dockerProgram process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.
All sensitive variables are redacted
{"component":"entrypoint","file":"prow/entrypoint/run.go:255","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15m0s grace period","severity":"error","time":"2021-11-06T03:15:45Z"}
{"component":"entrypoint","error":"os: process already finished","file":"prow/entrypoint/run.go:257","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2021-11-06T03:15:45Z"}