PR: shysank: [WIP] Increase parallelism for e2e tests
Result: FAILURE
Tests: 1 failed / 9 succeeded
Started: 2021-11-04 19:47
Elapsed: 4h15m
Revision: 7b00ee7f5707a906b201fecc37d6baae80f68e92
Refs: 1816

Test Failures


capz-e2e Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster (15m13s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sRunning\sthe\sCluster\sAPI\sE2E\stests\sRunning\sthe\sself\-hosted\sspec\sShould\spivot\sthe\sbootstrap\scluster\sto\sa\sself\-hosted\scluster$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:188
The resource group in Azure still exists. After deleting the cluster all of the Azure resources should also be deleted.
Expected
    <bool>: false
to be true
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:182
				
Full stdout/stderr: junit.e2e_suite.5.xml
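
The failed assertion above is a boolean check that the cluster's Azure resource group is gone after the cluster has been deleted. As a rough, standalone sketch of that same condition (not the capz test helper in common.go itself; the environment variable names and the track-2 armresources client are assumptions), the check could look like this:

// Sketch only: poll Azure until the cluster's resource group no longer exists,
// or fail after a timeout. Names such as CLUSTER_RESOURCE_GROUP are illustrative.
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"time"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/resources/armresources"
)

func main() {
	subscriptionID := os.Getenv("AZURE_SUBSCRIPTION_ID")
	groupName := os.Getenv("CLUSTER_RESOURCE_GROUP") // e.g. the workload cluster's resource group

	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("failed to get credential: %v", err)
	}
	client, err := armresources.NewResourceGroupsClient(subscriptionID, cred, nil)
	if err != nil {
		log.Fatalf("failed to create resource groups client: %v", err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Minute)
	defer cancel()

	// Poll until the resource group no longer exists or the timeout expires.
	// The e2e assertion that failed above is the boolean form of this check.
	for {
		resp, err := client.CheckExistence(ctx, groupName, nil)
		if err != nil {
			log.Fatalf("failed to check resource group existence: %v", err)
		}
		if !resp.Success {
			fmt.Printf("resource group %q has been deleted\n", groupName)
			return
		}
		select {
		case <-ctx.Done():
			log.Fatalf("resource group %q still exists after cluster deletion", groupName)
		case <-time.After(30 * time.Second):
		}
	}
}

Run with AZURE_SUBSCRIPTION_ID and CLUSTER_RESOURCE_GROUP set; a zero exit means the group is gone, which is the outcome the self-hosted spec expects.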



9 Passed Tests

3 Skipped Tests

Error lines from build-log.txt

... skipping 491 lines ...
Nov  4 20:13:11.750: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-5826wq-md-0-lxycjf-vvfgk

Nov  4 20:13:12.466: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster md-rollout-5826wq in namespace md-rollout-z6mem6

Nov  4 20:14:35.066: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-5826wq-md-win-r5k4g

Failed to get logs for machine md-rollout-5826wq-md-win-6cf4f69d75-29p6t, cluster md-rollout-z6mem6/md-rollout-5826wq: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Failed to get logs for machine md-rollout-5826wq-md-win-6cf4f69d75-jwnbz, cluster md-rollout-z6mem6/md-rollout-5826wq: azuremachines.infrastructure.cluster.x-k8s.io "md-rollout-5826wq-md-win-kdlwt" not found
Nov  4 20:14:35.700: INFO: INFO: Collecting logs for node 10.1.0.8 in cluster md-rollout-5826wq in namespace md-rollout-z6mem6

Nov  4 20:15:42.193: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-5826wq-md-win-v3p7vl-xq242

Failed to get logs for machine md-rollout-5826wq-md-win-6d4b8d58cd-msg4l, cluster md-rollout-z6mem6/md-rollout-5826wq: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster md-rollout-z6mem6/md-rollout-5826wq kube-system pod logs
STEP: Fetching kube-system pod logs took 1.107096421s
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-rp8gx, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-dfmbh, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-windows-cpxfl, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-p6djl, container kube-proxy
... skipping 11 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-bq2pj, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-gqbhr, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-windows-68tk2, container calico-node-startup
STEP: Creating log watcher for controller kube-system/etcd-md-rollout-5826wq-control-plane-mk4rz, container etcd
STEP: Creating log watcher for controller kube-system/kube-controller-manager-md-rollout-5826wq-control-plane-mk4rz, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-apiserver-md-rollout-5826wq-control-plane-mk4rz, container kube-apiserver
STEP: Error starting logs stream for pod kube-system/kube-proxy-windows-w5gzr, container kube-proxy: container "kube-proxy" in pod "kube-proxy-windows-w5gzr" is waiting to start: ContainerCreating
STEP: Error starting logs stream for pod kube-system/calico-node-windows-bh8hr, container calico-node-felix: container "calico-node-felix" in pod "calico-node-windows-bh8hr" is waiting to start: PodInitializing
STEP: Error starting logs stream for pod kube-system/calico-node-windows-bh8hr, container calico-node-startup: container "calico-node-startup" in pod "calico-node-windows-bh8hr" is waiting to start: PodInitializing
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 261.777826ms
STEP: Dumping all the Cluster API resources in the "md-rollout-z6mem6" namespace
STEP: Deleting cluster md-rollout-z6mem6/md-rollout-5826wq
STEP: Deleting cluster md-rollout-5826wq
INFO: Waiting for the Cluster md-rollout-z6mem6/md-rollout-5826wq to be deleted
STEP: Waiting for cluster md-rollout-5826wq to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-cpxfl, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-md-rollout-5826wq-control-plane-mk4rz, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-68tk2, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-md-rollout-5826wq-control-plane-mk4rz, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-p6djl, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-dfmbh, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-f97b8, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-xdn6p, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-68tk2, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-rp8gx, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-md-rollout-5826wq-control-plane-mk4rz, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-gqbhr, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-lwn4f, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-bq2pj, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-cpxfl, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-md-rollout-5826wq-control-plane-mk4rz, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-vhbmq, container calico-kube-controllers: http2: client connection lost
STEP: Deleting namespace used for hosting the "md-rollout" test spec
INFO: Deleting namespace md-rollout-z6mem6
STEP: Redacting sensitive information from logs


• [SLOW TEST:1723.313 seconds]
... skipping 92 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-xqtnw, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-fr46x, container coredns
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-fntm7, container coredns
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-swqxe8-control-plane-6kbvv, container etcd
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-swqxe8-control-plane-f6l7g, container etcd
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-swqxe8-control-plane-hrcgp, container etcd
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 200.468ms
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-fpsupo" namespace
STEP: Deleting cluster kcp-upgrade-fpsupo/kcp-upgrade-swqxe8
STEP: Deleting cluster kcp-upgrade-swqxe8
INFO: Waiting for the Cluster kcp-upgrade-fpsupo/kcp-upgrade-swqxe8 to be deleted
STEP: Waiting for cluster kcp-upgrade-swqxe8 to be deleted
... skipping 81 lines ...
Nov  4 20:26:58.306: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-khh0pd-md-0-5x6hq

Nov  4 20:26:58.803: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster kcp-upgrade-khh0pd in namespace kcp-upgrade-2ns2ij

Nov  4 20:27:34.906: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-khh0pd-md-win-ln62x

Failed to get logs for machine kcp-upgrade-khh0pd-md-win-bc8b89895-8fndq, cluster kcp-upgrade-2ns2ij/kcp-upgrade-khh0pd: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov  4 20:27:35.362: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster kcp-upgrade-khh0pd in namespace kcp-upgrade-2ns2ij

Nov  4 20:28:07.189: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-khh0pd-md-win-zmzrb

Failed to get logs for machine kcp-upgrade-khh0pd-md-win-bc8b89895-wjbqc, cluster kcp-upgrade-2ns2ij/kcp-upgrade-khh0pd: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster kcp-upgrade-2ns2ij/kcp-upgrade-khh0pd kube-system pod logs
STEP: Fetching kube-system pod logs took 922.409858ms
STEP: Dumping workload cluster kcp-upgrade-2ns2ij/kcp-upgrade-khh0pd Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-2lsfj, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-rqjt9, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-windows-mfjf6, container calico-node-felix
... skipping 20 lines ...
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-khh0pd-control-plane-2jpbl, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-khh0pd-control-plane-7j449, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-node-windows-llpmb, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-proxy-wzsd5, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-khh0pd-control-plane-2jpbl, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-c78dw, container kube-proxy
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 286.450428ms
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-2ns2ij" namespace
STEP: Deleting cluster kcp-upgrade-2ns2ij/kcp-upgrade-khh0pd
STEP: Deleting cluster kcp-upgrade-khh0pd
INFO: Waiting for the Cluster kcp-upgrade-2ns2ij/kcp-upgrade-khh0pd to be deleted
STEP: Waiting for cluster kcp-upgrade-khh0pd to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-khh0pd-control-plane-7j449, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-2lsfj, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-c78dw, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-khh0pd-control-plane-fkmd4, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5jm2g, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-khh0pd-control-plane-fkmd4, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-llpmb, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-khh0pd-control-plane-2jpbl, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-llpmb, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-wzsd5, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-mfjf6, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-khh0pd-control-plane-2jpbl, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-khh0pd-control-plane-2jpbl, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-4m844, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-khh0pd-control-plane-2jpbl, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-khh0pd-control-plane-fkmd4, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-wqggf, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-ch8g2, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-khh0pd-control-plane-7j449, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-khh0pd-control-plane-7j449, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-khh0pd-control-plane-7j449, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-mfjf6, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-4jjcf, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-mbb8n, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-gpr26, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-skxss, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-khh0pd-control-plane-fkmd4, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-rc886, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-rqjt9, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-2ns2ij
STEP: Redacting sensitive information from logs


• [SLOW TEST:2176.256 seconds]
... skipping 8 lines ...
  Should pivot the bootstrap cluster to a self-hosted cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:107

Node Id (1 Indexed): 5
STEP: Creating namespace "self-hosted" for hosting the cluster
Nov  4 20:26:38.759: INFO: starting to create namespace for hosting the "self-hosted" test spec
2021/11/04 20:26:38 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-rkxeyh" using the "management" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-rkxeyh --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 1 --worker-machine-count 1 --flavor management
... skipping 67 lines ...
STEP: Creating log watcher for controller kube-system/kube-apiserver-self-hosted-rkxeyh-control-plane-rk2sm, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-controller-manager-self-hosted-rkxeyh-control-plane-rk2sm, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-node-8xbhf, container calico-node
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-sfjbk, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/calico-node-97xbv, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-5tzhd, container coredns
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 239.555317ms
STEP: Dumping all the Cluster API resources in the "self-hosted" namespace
STEP: Deleting all clusters in the self-hosted namespace
STEP: Deleting cluster self-hosted-rkxeyh
INFO: Waiting for the Cluster self-hosted/self-hosted-rkxeyh to be deleted
STEP: Waiting for cluster self-hosted-rkxeyh to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-self-hosted-rkxeyh-control-plane-rk2sm, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-self-hosted-rkxeyh-control-plane-rk2sm, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-97xbv, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-self-hosted-rkxeyh-control-plane-rk2sm, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-fbnng, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-5tzhd, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-self-hosted-rkxeyh-control-plane-rk2sm, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-sfjbk, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-j49m2, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "self-hosted" test spec
INFO: Deleting namespace self-hosted
STEP: Checking if any resources are left over in Azure for spec "self-hosted"
STEP: Redacting sensitive information from logs
STEP: Redacting sensitive information from logs

... skipping 104 lines ...
STEP: Dumping logs from the "kcp-upgrade-ynjhh8" workload cluster
STEP: Dumping workload cluster kcp-upgrade-d8dy0r/kcp-upgrade-ynjhh8 logs
Nov  4 20:16:52.492: INFO: INFO: Collecting logs for node kcp-upgrade-ynjhh8-control-plane-jm9zr in cluster kcp-upgrade-ynjhh8 in namespace kcp-upgrade-d8dy0r

Nov  4 20:19:02.842: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-ynjhh8-control-plane-jm9zr

Failed to get logs for machine kcp-upgrade-ynjhh8-control-plane-tk5pr, cluster kcp-upgrade-d8dy0r/kcp-upgrade-ynjhh8: dialing public load balancer at kcp-upgrade-ynjhh8-62131ba5.westeurope.cloudapp.azure.com: dial tcp 20.103.193.167:22: connect: connection timed out
Nov  4 20:19:04.219: INFO: INFO: Collecting logs for node kcp-upgrade-ynjhh8-md-0-mdlvn in cluster kcp-upgrade-ynjhh8 in namespace kcp-upgrade-d8dy0r

Nov  4 20:21:13.910: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-ynjhh8-md-0-mdlvn

Failed to get logs for machine kcp-upgrade-ynjhh8-md-0-56f5b9c4c8-56xzn, cluster kcp-upgrade-d8dy0r/kcp-upgrade-ynjhh8: dialing public load balancer at kcp-upgrade-ynjhh8-62131ba5.westeurope.cloudapp.azure.com: dial tcp 20.103.193.167:22: connect: connection timed out
Nov  4 20:21:15.232: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster kcp-upgrade-ynjhh8 in namespace kcp-upgrade-d8dy0r

Nov  4 20:27:47.126: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-ynjhh8-md-win-tvz4f

Failed to get logs for machine kcp-upgrade-ynjhh8-md-win-6cb45b596f-652jf, cluster kcp-upgrade-d8dy0r/kcp-upgrade-ynjhh8: dialing public load balancer at kcp-upgrade-ynjhh8-62131ba5.westeurope.cloudapp.azure.com: dial tcp 20.103.193.167:22: connect: connection timed out
Nov  4 20:28:21.913: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster kcp-upgrade-ynjhh8 in namespace kcp-upgrade-d8dy0r

Nov  4 20:34:55.158: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-ynjhh8-md-win-tc4tl

Failed to get logs for machine kcp-upgrade-ynjhh8-md-win-6cb45b596f-jwhmt, cluster kcp-upgrade-d8dy0r/kcp-upgrade-ynjhh8: dialing public load balancer at kcp-upgrade-ynjhh8-62131ba5.westeurope.cloudapp.azure.com: dial tcp 20.103.193.167:22: connect: connection timed out
STEP: Dumping workload cluster kcp-upgrade-d8dy0r/kcp-upgrade-ynjhh8 kube-system pod logs
STEP: Fetching kube-system pod logs took 1.027460896s
STEP: Dumping workload cluster kcp-upgrade-d8dy0r/kcp-upgrade-ynjhh8 Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-v5nxg, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/calico-node-windows-grc8n, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-proxy-r27v2, container kube-proxy
... skipping 8 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-mv44j, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-c7nvd, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-ynjhh8-control-plane-jm9zr, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-node-windows-lhz89, container calico-node-felix
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-ynjhh8-control-plane-jm9zr, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-ynjhh8-control-plane-jm9zr, container kube-scheduler
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 221.201765ms
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-d8dy0r" namespace
STEP: Deleting cluster kcp-upgrade-d8dy0r/kcp-upgrade-ynjhh8
STEP: Deleting cluster kcp-upgrade-ynjhh8
INFO: Waiting for the Cluster kcp-upgrade-d8dy0r/kcp-upgrade-ynjhh8 to be deleted
STEP: Waiting for cluster kcp-upgrade-ynjhh8 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-5xgzs, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-mv44j, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-rxnzt, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-c7nvd, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-grc8n, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-grc8n, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-ynjhh8-control-plane-jm9zr, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-lhz89, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-rp729, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-ynjhh8-control-plane-jm9zr, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-ynjhh8-control-plane-jm9zr, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-lhz89, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-ynjhh8-control-plane-jm9zr, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-d8dy0r
STEP: Redacting sensitive information from logs


• [SLOW TEST:2714.826 seconds]
... skipping 69 lines ...
STEP: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-a3gdmg-control-plane-grzsm, container kube-scheduler
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-6vqdh, container coredns
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-qp7c8, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-jn5jh, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-9znjx, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-mhc-remediation-a3gdmg-control-plane-grzsm, container kube-apiserver
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 328.607968ms
STEP: Dumping all the Cluster API resources in the "mhc-remediation-83zq9a" namespace
STEP: Deleting cluster mhc-remediation-83zq9a/mhc-remediation-a3gdmg
STEP: Deleting cluster mhc-remediation-a3gdmg
INFO: Waiting for the Cluster mhc-remediation-83zq9a/mhc-remediation-a3gdmg to be deleted
STEP: Waiting for cluster mhc-remediation-a3gdmg to be deleted
... skipping 59 lines ...
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-adoption-7rm8yv-control-plane-0, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-node-8bjrm, container calico-node
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-njbf6, container calico-kube-controllers
STEP: Dumping workload cluster kcp-adoption-6gooae/kcp-adoption-7rm8yv Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-jgdq5, container coredns
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-adoption-7rm8yv-control-plane-0, container kube-scheduler
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 343.794718ms
STEP: Dumping all the Cluster API resources in the "kcp-adoption-6gooae" namespace
STEP: Deleting cluster kcp-adoption-6gooae/kcp-adoption-7rm8yv
STEP: Deleting cluster kcp-adoption-7rm8yv
INFO: Waiting for the Cluster kcp-adoption-6gooae/kcp-adoption-7rm8yv to be deleted
STEP: Waiting for cluster kcp-adoption-7rm8yv to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-adoption-7rm8yv-control-plane-0, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-adoption-7rm8yv-control-plane-0, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-8bjrm, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-jgdq5, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-m7f5k, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-9w6p8, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-njbf6, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-adoption-7rm8yv-control-plane-0, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-adoption-7rm8yv-control-plane-0, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-adoption" test spec
INFO: Deleting namespace kcp-adoption-6gooae
STEP: Redacting sensitive information from logs


• [SLOW TEST:625.672 seconds]
... skipping 91 lines ...
STEP: Creating log watcher for controller kube-system/kube-apiserver-mhc-remediation-ygm4rk-control-plane-jw2sq, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-node-j6gjk, container calico-node
STEP: Creating log watcher for controller kube-system/kube-controller-manager-mhc-remediation-ygm4rk-control-plane-jw2sq, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-controller-manager-mhc-remediation-ygm4rk-control-plane-s8t2z, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-v2nt2, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-2hzsg, container calico-node
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 230.492494ms
STEP: Dumping all the Cluster API resources in the "mhc-remediation-zxy8xy" namespace
STEP: Deleting cluster mhc-remediation-zxy8xy/mhc-remediation-ygm4rk
STEP: Deleting cluster mhc-remediation-ygm4rk
INFO: Waiting for the Cluster mhc-remediation-zxy8xy/mhc-remediation-ygm4rk to be deleted
STEP: Waiting for cluster mhc-remediation-ygm4rk to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-ygm4rk-control-plane-xgtcd, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-865lq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-ygm4rk-control-plane-jw2sq, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-rxgpm, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-bk5x7, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-ygm4rk-control-plane-s8t2z, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-ygm4rk-control-plane-xgtcd, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-ygm4rk-control-plane-xgtcd, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-ygm4rk-control-plane-s8t2z, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-hp6s8, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-ygm4rk-control-plane-xgtcd, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-ygm4rk-control-plane-s8t2z, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-ygm4rk-control-plane-jw2sq, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-dlvnt, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-ygm4rk-control-plane-jw2sq, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-ygm4rk-control-plane-jw2sq, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-ygm4rk-control-plane-s8t2z, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-2hzsg, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-j6gjk, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
INFO: Deleting namespace mhc-remediation-zxy8xy
STEP: Redacting sensitive information from logs


• [SLOW TEST:1298.809 seconds]
... skipping 62 lines ...
Nov  4 20:57:01.829: INFO: INFO: Collecting boot logs for AzureMachine md-scale-2uqy6w-md-0-gwwwk

Nov  4 20:57:02.437: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster md-scale-2uqy6w in namespace md-scale-znkjvu

Nov  4 20:57:30.511: INFO: INFO: Collecting boot logs for AzureMachine md-scale-2uqy6w-md-win-n4fdr

Failed to get logs for machine md-scale-2uqy6w-md-win-957844ddc-lg84b, cluster md-scale-znkjvu/md-scale-2uqy6w: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov  4 20:57:31.092: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster md-scale-2uqy6w in namespace md-scale-znkjvu

Nov  4 20:58:38.287: INFO: INFO: Collecting boot logs for AzureMachine md-scale-2uqy6w-md-win-gj2zn

Failed to get logs for machine md-scale-2uqy6w-md-win-957844ddc-xkbf9, cluster md-scale-znkjvu/md-scale-2uqy6w: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster md-scale-znkjvu/md-scale-2uqy6w kube-system pod logs
STEP: Fetching kube-system pod logs took 1.055386436s
STEP: Dumping workload cluster md-scale-znkjvu/md-scale-2uqy6w Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-9sg5t, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-windows-rtndl, container calico-node-startup
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-bl2p7, container coredns
... skipping 8 lines ...
STEP: Creating log watcher for controller kube-system/kube-scheduler-md-scale-2uqy6w-control-plane-l9wzc, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-windows-t75sr, container calico-node-felix
STEP: Creating log watcher for controller kube-system/kube-apiserver-md-scale-2uqy6w-control-plane-l9wzc, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-controller-manager-md-scale-2uqy6w-control-plane-l9wzc, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-node-l67mt, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-zb5kt, container kube-proxy
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 252.072308ms
STEP: Dumping all the Cluster API resources in the "md-scale-znkjvu" namespace
STEP: Deleting cluster md-scale-znkjvu/md-scale-2uqy6w
STEP: Deleting cluster md-scale-2uqy6w
INFO: Waiting for the Cluster md-scale-znkjvu/md-scale-2uqy6w to be deleted
STEP: Waiting for cluster md-scale-2uqy6w to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-rtndl, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-9sg5t, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-l87cj, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-2vpll, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-rtndl, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-zb5kt, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-pf8vh, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-t75sr, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-md-scale-2uqy6w-control-plane-l9wzc, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-md-scale-2uqy6w-control-plane-l9wzc, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-t75sr, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-bl2p7, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-md-scale-2uqy6w-control-plane-l9wzc, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-9wlww, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-md-scale-2uqy6w-control-plane-l9wzc, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "md-scale" test spec
INFO: Deleting namespace md-scale-znkjvu
STEP: Redacting sensitive information from logs


• [SLOW TEST:1494.441 seconds]
... skipping 61 lines ...
Nov  4 20:59:12.069: INFO: INFO: Collecting boot logs for AzureMachine machine-pool-5hntlp-control-plane-kn7xq

Nov  4 20:59:13.191: INFO: INFO: Collecting logs for node win-p-win000002 in cluster machine-pool-5hntlp in namespace machine-pool-tb7hu8

Nov  4 20:59:39.534: INFO: INFO: Collecting boot logs for VMSS instance 2 of scale set machine-pool-5hntlp-mp-0

Failed to get logs for machine pool machine-pool-5hntlp-mp-0, cluster machine-pool-tb7hu8/machine-pool-5hntlp: [running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1]
Nov  4 20:59:40.039: INFO: INFO: Collecting logs for node win-p-win000002 in cluster machine-pool-5hntlp in namespace machine-pool-tb7hu8

Nov  4 21:00:32.087: INFO: INFO: Collecting boot logs for VMSS instance 2 of scale set win-p-win

Failed to get logs for machine pool machine-pool-5hntlp-mp-win, cluster machine-pool-tb7hu8/machine-pool-5hntlp: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster machine-pool-tb7hu8/machine-pool-5hntlp kube-system pod logs
STEP: Fetching kube-system pod logs took 1.023239975s
STEP: Dumping workload cluster machine-pool-tb7hu8/machine-pool-5hntlp Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-ncs55, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/etcd-machine-pool-5hntlp-control-plane-kn7xq, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-rwv26, container kube-proxy
... skipping 5 lines ...
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-9wnrc, container coredns
STEP: Creating log watcher for controller kube-system/kube-controller-manager-machine-pool-5hntlp-control-plane-kn7xq, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-apiserver-machine-pool-5hntlp-control-plane-kn7xq, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-scheduler-machine-pool-5hntlp-control-plane-kn7xq, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-r4pl9, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-h86qv, container coredns
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 250.443896ms
STEP: Dumping all the Cluster API resources in the "machine-pool-tb7hu8" namespace
STEP: Deleting cluster machine-pool-tb7hu8/machine-pool-5hntlp
STEP: Deleting cluster machine-pool-5hntlp
INFO: Waiting for the Cluster machine-pool-tb7hu8/machine-pool-5hntlp to be deleted
STEP: Waiting for cluster machine-pool-5hntlp to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-5wsn7, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-machine-pool-5hntlp-control-plane-kn7xq, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-5wsn7, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-machine-pool-5hntlp-control-plane-kn7xq, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-jqqcp, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-h86qv, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-ncs55, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-machine-pool-5hntlp-control-plane-kn7xq, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-r4pl9, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-9wnrc, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-zkwtq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-zfmhx, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-rwv26, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-machine-pool-5hntlp-control-plane-kn7xq, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "machine-pool" test spec
INFO: Deleting namespace machine-pool-tb7hu8
STEP: Redacting sensitive information from logs


• [SLOW TEST:1724.839 seconds]
... skipping 57 lines ...
STEP: Dumping logs from the "node-drain-8bmlgz" workload cluster
STEP: Dumping workload cluster node-drain-6406k4/node-drain-8bmlgz logs
Nov  4 21:10:54.917: INFO: INFO: Collecting logs for node node-drain-8bmlgz-control-plane-lcmgd in cluster node-drain-8bmlgz in namespace node-drain-6406k4

Nov  4 21:13:04.826: INFO: INFO: Collecting boot logs for AzureMachine node-drain-8bmlgz-control-plane-lcmgd

Failed to get logs for machine node-drain-8bmlgz-control-plane-l9s25, cluster node-drain-6406k4/node-drain-8bmlgz: dialing public load balancer at node-drain-8bmlgz-c23c70c5.westeurope.cloudapp.azure.com: dial tcp 20.101.17.63:22: connect: connection timed out
STEP: Dumping workload cluster node-drain-6406k4/node-drain-8bmlgz kube-system pod logs
STEP: Fetching kube-system pod logs took 973.182099ms
STEP: Dumping workload cluster node-drain-6406k4/node-drain-8bmlgz Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-89ld4, container coredns
STEP: Creating log watcher for controller kube-system/kube-apiserver-node-drain-8bmlgz-control-plane-lcmgd, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-l9vxn, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-pjndv, container calico-node
STEP: Creating log watcher for controller kube-system/kube-scheduler-node-drain-8bmlgz-control-plane-lcmgd, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-controller-manager-node-drain-8bmlgz-control-plane-lcmgd, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-fxqk8, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-4stbs, container coredns
STEP: Creating log watcher for controller kube-system/etcd-node-drain-8bmlgz-control-plane-lcmgd, container etcd
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 204.662263ms
STEP: Dumping all the Cluster API resources in the "node-drain-6406k4" namespace
STEP: Deleting cluster node-drain-6406k4/node-drain-8bmlgz
STEP: Deleting cluster node-drain-8bmlgz
INFO: Waiting for the Cluster node-drain-6406k4/node-drain-8bmlgz to be deleted
STEP: Waiting for cluster node-drain-8bmlgz to be deleted
... skipping 145 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-fh8vq, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-clusterctl-upgrade-vrs4s8-control-plane-6l4gv, container kube-scheduler
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-kr44f, container coredns
STEP: Creating log watcher for controller kube-system/etcd-clusterctl-upgrade-vrs4s8-control-plane-6l4gv, container etcd
STEP: Creating log watcher for controller kube-system/calico-node-zc4n9, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-82jtr, container coredns
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 253.473145ms
STEP: Dumping all the Cluster API resources in the "clusterctl-upgrade-4m96gs" namespace
STEP: Deleting cluster clusterctl-upgrade-4m96gs/clusterctl-upgrade-vrs4s8
STEP: Deleting cluster clusterctl-upgrade-vrs4s8
INFO: Waiting for the Cluster clusterctl-upgrade-4m96gs/clusterctl-upgrade-vrs4s8 to be deleted
STEP: Waiting for cluster clusterctl-upgrade-vrs4s8 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-fh8vq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-jsnh9, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-wd5ng, container calico-kube-controllers: http2: client connection lost
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-7589dc74b9-9c9wj, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-6bdc78c4d4-5fm2g, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-clusterctl-upgrade-vrs4s8-control-plane-6l4gv, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-kr44f, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-clusterctl-upgrade-vrs4s8-control-plane-6l4gv, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-zc4n9, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-82jtr, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-clusterctl-upgrade-vrs4s8-control-plane-6l4gv, container etcd: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-865c969d7-nzrfb, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-xcz8j, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-clusterctl-upgrade-vrs4s8-control-plane-6l4gv, container kube-controller-manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-86b5f554dd-hbnhh, container manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "clusterctl-upgrade" test spec
INFO: Deleting namespace clusterctl-upgrade-4m96gs
STEP: Redacting sensitive information from logs


• [SLOW TEST:2106.445 seconds]
Running the Cluster API E2E tests
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:40
  upgrade from v1alpha3 to v1beta1, and scale workload clusters created in v1alpha3
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:234
    Should create a management cluster and then upgrade all the providers
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/e2e/clusterctl_upgrade.go:145
------------------------------
{"component":"entrypoint","file":"prow/entrypoint/run.go:165","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 4h0m0s timeout","severity":"error","time":"2021-11-04T23:47:18Z"}
++ early_exit_handler
++ '[' -n 161 ']'
++ kill -TERM 161
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 12 lines ...
Cleaning up after docker
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Stopping Docker: dockerProgram process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.
All sensitive variables are redacted
{"component":"entrypoint","file":"prow/entrypoint/run.go:255","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15m0s grace period","severity":"error","time":"2021-11-05T00:02:18Z"}
{"component":"entrypoint","error":"os: process already finished","file":"prow/entrypoint/run.go:257","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2021-11-05T00:02:18Z"}