PR shysank: [WIP] Increase parallelism for e2e tests
Result FAILURE
Tests 0 failed / 12 succeeded
Started 2021-11-09 02:32
Elapsed 4h15m
Revision 8769bd602bedcbe499b62a00fa1eec69b582fe2f
Refs 1816

No Test Failures!


12 Passed Tests

10 Skipped Tests

Error lines from build-log.txt

... skipping 483 lines ...
Nov  9 02:51:01.394: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-yv4qk0-md-0-feal4w-dg2xs

Nov  9 02:51:01.919: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster md-rollout-yv4qk0 in namespace md-rollout-mztrwh

Nov  9 02:53:41.284: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-yv4qk0-md-win-jlqv8

Failed to get logs for machine md-rollout-yv4qk0-md-win-5788d9689-sf75k, cluster md-rollout-mztrwh/md-rollout-yv4qk0: [[dialing from control plane to target node at 10.1.0.4: ssh: rejected: connect failed (Connection timed out), getting a new SSH client connection: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain], failed to get boot diagnostics data: compute.VirtualMachinesClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ResourceNotFound" Message="The Resource 'Microsoft.Compute/virtualMachines/md-rollou-jlqv8' under resource group 'capz-e2e-fq5uhv' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"]
Nov  9 02:53:41.811: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster md-rollout-yv4qk0 in namespace md-rollout-mztrwh

Nov  9 02:54:19.033: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-yv4qk0-md-win-dl2sh

Failed to get logs for machine md-rollout-yv4qk0-md-win-5788d9689-tqdjc, cluster md-rollout-mztrwh/md-rollout-yv4qk0: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov  9 02:54:19.958: INFO: INFO: Collecting logs for node 10.1.0.8 in cluster md-rollout-yv4qk0 in namespace md-rollout-mztrwh

Nov  9 02:54:47.643: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-yv4qk0-md-win-8rl3yt-nvgjg

Failed to get logs for machine md-rollout-yv4qk0-md-win-6499ff78-kpbjf, cluster md-rollout-mztrwh/md-rollout-yv4qk0: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster md-rollout-mztrwh/md-rollout-yv4qk0 kube-system pod logs
STEP: Fetching kube-system pod logs took 1.236182857s
STEP: Dumping workload cluster md-rollout-mztrwh/md-rollout-yv4qk0 Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-s6ntn, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-j86gl, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-htb6r, container kube-proxy
... skipping 11 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-windows-bk8r4, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-node-rsn55, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-windows-8ql5g, container calico-node-felix
STEP: Creating log watcher for controller kube-system/kube-scheduler-md-rollout-yv4qk0-control-plane-wb8jh, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-windows-bk8r4, container calico-node-felix
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-5mzmg, container coredns
STEP: Error starting logs stream for pod kube-system/kube-proxy-windows-l8kxx, container kube-proxy: container "kube-proxy" in pod "kube-proxy-windows-l8kxx" is waiting to start: ContainerCreating
STEP: Error starting logs stream for pod kube-system/calico-node-windows-bk8r4, container calico-node-startup: container "calico-node-startup" in pod "calico-node-windows-bk8r4" is waiting to start: PodInitializing
STEP: Error starting logs stream for pod kube-system/calico-node-windows-bk8r4, container calico-node-felix: container "calico-node-felix" in pod "calico-node-windows-bk8r4" is waiting to start: PodInitializing
STEP: Fetching activity logs took 1.048335006s
STEP: Dumping all the Cluster API resources in the "md-rollout-mztrwh" namespace
STEP: Deleting cluster md-rollout-mztrwh/md-rollout-yv4qk0
STEP: Deleting cluster md-rollout-yv4qk0
INFO: Waiting for the Cluster md-rollout-mztrwh/md-rollout-yv4qk0 to be deleted
STEP: Waiting for cluster md-rollout-yv4qk0 to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-8ql5g, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-cbxwk, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-rsn55, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-htb6r, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-s6ntn, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-md-rollout-yv4qk0-control-plane-wb8jh, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-4c456, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-5mzmg, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-j86gl, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-md-rollout-yv4qk0-control-plane-wb8jh, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-4c456, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-8ql5g, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-8f6qr, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-md-rollout-yv4qk0-control-plane-wb8jh, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-f859f, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-md-rollout-yv4qk0-control-plane-wb8jh, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-9sncm, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "md-rollout" test spec
INFO: Deleting namespace md-rollout-mztrwh
STEP: Redacting sensitive information from logs


• [SLOW TEST:1372.905 seconds]
... skipping 91 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-nt8rs, container calico-node
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-5hff9, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-nkk75, container coredns
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-0mfs6e-control-plane-78q68, container kube-scheduler
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-0mfs6e-control-plane-78q68, container etcd
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-kfcl6, container coredns
STEP: Got error while iterating over activity logs for resource group capz-e2e-c90tk0: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001034256s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-nwyf97" namespace
STEP: Deleting cluster kcp-upgrade-nwyf97/kcp-upgrade-0mfs6e
STEP: Deleting cluster kcp-upgrade-0mfs6e
INFO: Waiting for the Cluster kcp-upgrade-nwyf97/kcp-upgrade-0mfs6e to be deleted
STEP: Waiting for cluster kcp-upgrade-0mfs6e to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-0mfs6e-control-plane-78q68, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-7lmf7, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-kfcl6, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-0mfs6e-control-plane-78q68, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-htzgm, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-0mfs6e-control-plane-78q68, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-0mfs6e-control-plane-78q68, container etcd: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-nwyf97
STEP: Redacting sensitive information from logs


• [SLOW TEST:2130.833 seconds]
... skipping 56 lines ...
STEP: Dumping logs from the "kcp-upgrade-2lrp8f" workload cluster
STEP: Dumping workload cluster kcp-upgrade-6cdl26/kcp-upgrade-2lrp8f logs
Nov  9 02:52:45.668: INFO: INFO: Collecting logs for node kcp-upgrade-2lrp8f-control-plane-2pbm4 in cluster kcp-upgrade-2lrp8f in namespace kcp-upgrade-6cdl26

Nov  9 02:54:56.116: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-2lrp8f-control-plane-2pbm4

Failed to get logs for machine kcp-upgrade-2lrp8f-control-plane-9crcq, cluster kcp-upgrade-6cdl26/kcp-upgrade-2lrp8f: dialing public load balancer at kcp-upgrade-2lrp8f-847dd752.westeurope.cloudapp.azure.com: dial tcp 20.93.168.239:22: connect: connection timed out
Nov  9 02:54:57.578: INFO: INFO: Collecting logs for node kcp-upgrade-2lrp8f-md-0-6nljm in cluster kcp-upgrade-2lrp8f in namespace kcp-upgrade-6cdl26

Nov  9 02:57:07.188: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-2lrp8f-md-0-6nljm

Failed to get logs for machine kcp-upgrade-2lrp8f-md-0-86f5d674fc-c4r6r, cluster kcp-upgrade-6cdl26/kcp-upgrade-2lrp8f: dialing public load balancer at kcp-upgrade-2lrp8f-847dd752.westeurope.cloudapp.azure.com: dial tcp 20.93.168.239:22: connect: connection timed out
Nov  9 02:57:08.743: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster kcp-upgrade-2lrp8f in namespace kcp-upgrade-6cdl26

Nov  9 03:03:40.408: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-2lrp8f-md-win-fknkc

Failed to get logs for machine kcp-upgrade-2lrp8f-md-win-594fbd9c74-nqxkc, cluster kcp-upgrade-6cdl26/kcp-upgrade-2lrp8f: dialing public load balancer at kcp-upgrade-2lrp8f-847dd752.westeurope.cloudapp.azure.com: dial tcp 20.93.168.239:22: connect: connection timed out
Nov  9 03:03:41.440: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster kcp-upgrade-2lrp8f in namespace kcp-upgrade-6cdl26

Nov  9 03:10:13.620: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-2lrp8f-md-win-n6bts

Failed to get logs for machine kcp-upgrade-2lrp8f-md-win-594fbd9c74-wb4cm, cluster kcp-upgrade-6cdl26/kcp-upgrade-2lrp8f: dialing public load balancer at kcp-upgrade-2lrp8f-847dd752.westeurope.cloudapp.azure.com: dial tcp 20.93.168.239:22: connect: connection timed out
STEP: Dumping workload cluster kcp-upgrade-6cdl26/kcp-upgrade-2lrp8f kube-system pod logs
STEP: Fetching kube-system pod logs took 1.013368715s
STEP: Dumping workload cluster kcp-upgrade-6cdl26/kcp-upgrade-2lrp8f Azure activity log
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-2lrp8f-control-plane-2pbm4, container etcd
STEP: Creating log watcher for controller kube-system/calico-node-59n9w, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-nzws7, container kube-proxy
... skipping 8 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-q4h57, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-windows-n7zdz, container calico-node-felix
STEP: Creating log watcher for controller kube-system/kube-proxy-6nmzh, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-xzmsc, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-9ghlm, container coredns
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-7qdj7, container coredns
STEP: Got error while iterating over activity logs for resource group capz-e2e-ygti1s: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000917498s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-6cdl26" namespace
STEP: Deleting cluster kcp-upgrade-6cdl26/kcp-upgrade-2lrp8f
STEP: Deleting cluster kcp-upgrade-2lrp8f
INFO: Waiting for the Cluster kcp-upgrade-6cdl26/kcp-upgrade-2lrp8f to be deleted
STEP: Waiting for cluster kcp-upgrade-2lrp8f to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-2lrp8f-control-plane-2pbm4, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-59n9w, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-n7zdz, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-n7zdz, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-6ctq8, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-nzws7, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-2lrp8f-control-plane-2pbm4, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-2lrp8f-control-plane-2pbm4, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-4vphz, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-2lrp8f-control-plane-2pbm4, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-9ghlm, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-4vphz, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-q4h57, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-6cdl26
STEP: Redacting sensitive information from logs


• [SLOW TEST:2333.610 seconds]
... skipping 7 lines ...
Running the Cluster API E2E tests Running the self-hosted spec 
  Should pivot the bootstrap cluster to a self-hosted cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:107

STEP: Creating namespace "self-hosted" for hosting the cluster
Nov  9 03:01:52.900: INFO: starting to create namespace for hosting the "self-hosted" test spec
2021/11/09 03:01:52 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-2d69n6" using the "management" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-2d69n6 --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 1 --worker-machine-count 1 --flavor management
... skipping 73 lines ...
STEP: Fetching activity logs took 554.430564ms
STEP: Dumping all the Cluster API resources in the "self-hosted" namespace
STEP: Deleting all clusters in the self-hosted namespace
STEP: Deleting cluster self-hosted-2d69n6
INFO: Waiting for the Cluster self-hosted/self-hosted-2d69n6 to be deleted
STEP: Waiting for cluster self-hosted-2d69n6 to be deleted
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-7584cb676-fbmxm, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-self-hosted-2d69n6-control-plane-nn9kd, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-v787r, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-self-hosted-2d69n6-control-plane-nn9kd, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-rt7zs, container coredns: http2: client connection lost
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-6bdc78c4d4-pz25w, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-86b5f554dd-d9ql8, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-self-hosted-2d69n6-control-plane-nn9kd, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-5qpfm, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-n8n2h, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-4vw6j, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-zrpbt, container calico-kube-controllers: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-865c969d7-zq844, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-self-hosted-2d69n6-control-plane-nn9kd, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-8cl4v, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "self-hosted" test spec
INFO: Deleting namespace self-hosted
STEP: Checking if any resources are left over in Azure for spec "self-hosted"
STEP: Redacting sensitive information from logs
STEP: Redacting sensitive information from logs

... skipping 60 lines ...
STEP: Fetching activity logs took 486.893304ms
STEP: Dumping all the Cluster API resources in the "kcp-adoption-ay43xx" namespace
STEP: Deleting cluster kcp-adoption-ay43xx/kcp-adoption-73x8i8
STEP: Deleting cluster kcp-adoption-73x8i8
INFO: Waiting for the Cluster kcp-adoption-ay43xx/kcp-adoption-73x8i8 to be deleted
STEP: Waiting for cluster kcp-adoption-73x8i8 to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-jcg8l, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-adoption-73x8i8-control-plane-0, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-adoption-73x8i8-control-plane-0, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-adoption-73x8i8-control-plane-0, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vv7pk, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-adoption-73x8i8-control-plane-0, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-mwzfs, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-kfdtv, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-dglfd, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-adoption" test spec
INFO: Deleting namespace kcp-adoption-ay43xx
STEP: Redacting sensitive information from logs


• [SLOW TEST:654.217 seconds]
... skipping 68 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-z6rgg, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-26zjt, container calico-node
STEP: Creating log watcher for controller kube-system/kube-apiserver-mhc-remediation-6ksp80-control-plane-t55nn, container kube-apiserver
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-kjrc7, container coredns
STEP: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-6ksp80-control-plane-t55nn, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-qdkcg, container calico-node
STEP: Error starting logs stream for pod kube-system/calico-node-qdkcg, container calico-node: container "calico-node" in pod "calico-node-qdkcg" is waiting to start: PodInitializing
STEP: Fetching activity logs took 508.745276ms
STEP: Dumping all the Cluster API resources in the "mhc-remediation-eal1kq" namespace
STEP: Deleting cluster mhc-remediation-eal1kq/mhc-remediation-6ksp80
STEP: Deleting cluster mhc-remediation-6ksp80
INFO: Waiting for the Cluster mhc-remediation-eal1kq/mhc-remediation-6ksp80 to be deleted
STEP: Waiting for cluster mhc-remediation-6ksp80 to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-8tfzs, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-6ksp80-control-plane-t55nn, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-6ksp80-control-plane-t55nn, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-6ksp80-control-plane-t55nn, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-6ksp80-control-plane-t55nn, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-v5cnm, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-z6rgg, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-26zjt, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-kjrc7, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
INFO: Deleting namespace mhc-remediation-eal1kq
STEP: Redacting sensitive information from logs


• [SLOW TEST:1021.364 seconds]
... skipping 58 lines ...
STEP: Dumping logs from the "kcp-upgrade-kzn2sx" workload cluster
STEP: Dumping workload cluster kcp-upgrade-mzq8c9/kcp-upgrade-kzn2sx logs
Nov  9 03:04:46.966: INFO: INFO: Collecting logs for node kcp-upgrade-kzn2sx-control-plane-qgj5v in cluster kcp-upgrade-kzn2sx in namespace kcp-upgrade-mzq8c9

Nov  9 03:06:57.012: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-kzn2sx-control-plane-qgj5v

Failed to get logs for machine kcp-upgrade-kzn2sx-control-plane-4gdt5, cluster kcp-upgrade-mzq8c9/kcp-upgrade-kzn2sx: dialing public load balancer at kcp-upgrade-kzn2sx-1ef28227.westeurope.cloudapp.azure.com: dial tcp 20.76.247.241:22: connect: connection timed out
Nov  9 03:06:58.320: INFO: INFO: Collecting logs for node kcp-upgrade-kzn2sx-control-plane-p5gls in cluster kcp-upgrade-kzn2sx in namespace kcp-upgrade-mzq8c9

Nov  9 03:09:08.084: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-kzn2sx-control-plane-p5gls

Failed to get logs for machine kcp-upgrade-kzn2sx-control-plane-7qm4w, cluster kcp-upgrade-mzq8c9/kcp-upgrade-kzn2sx: dialing public load balancer at kcp-upgrade-kzn2sx-1ef28227.westeurope.cloudapp.azure.com: dial tcp 20.76.247.241:22: connect: connection timed out
Nov  9 03:09:09.741: INFO: INFO: Collecting logs for node kcp-upgrade-kzn2sx-control-plane-rnxrx in cluster kcp-upgrade-kzn2sx in namespace kcp-upgrade-mzq8c9

Nov  9 03:11:19.156: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-kzn2sx-control-plane-rnxrx

Failed to get logs for machine kcp-upgrade-kzn2sx-control-plane-ht2c8, cluster kcp-upgrade-mzq8c9/kcp-upgrade-kzn2sx: dialing public load balancer at kcp-upgrade-kzn2sx-1ef28227.westeurope.cloudapp.azure.com: dial tcp 20.76.247.241:22: connect: connection timed out
Nov  9 03:11:20.527: INFO: INFO: Collecting logs for node kcp-upgrade-kzn2sx-md-0-xw9pz in cluster kcp-upgrade-kzn2sx in namespace kcp-upgrade-mzq8c9

Nov  9 03:13:30.232: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-kzn2sx-md-0-xw9pz

Failed to get logs for machine kcp-upgrade-kzn2sx-md-0-78bdb9d8bf-vc2vd, cluster kcp-upgrade-mzq8c9/kcp-upgrade-kzn2sx: dialing public load balancer at kcp-upgrade-kzn2sx-1ef28227.westeurope.cloudapp.azure.com: dial tcp 20.76.247.241:22: connect: connection timed out
Nov  9 03:13:31.855: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster kcp-upgrade-kzn2sx in namespace kcp-upgrade-mzq8c9

Nov  9 03:20:03.444: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-kzn2sx-md-win-vk2mh

Failed to get logs for machine kcp-upgrade-kzn2sx-md-win-6cdd9d8c98-8j58l, cluster kcp-upgrade-mzq8c9/kcp-upgrade-kzn2sx: dialing public load balancer at kcp-upgrade-kzn2sx-1ef28227.westeurope.cloudapp.azure.com: dial tcp 20.76.247.241:22: connect: connection timed out
Nov  9 03:20:04.619: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster kcp-upgrade-kzn2sx in namespace kcp-upgrade-mzq8c9

Nov  9 03:26:36.660: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-kzn2sx-md-win-wwgd7

Failed to get logs for machine kcp-upgrade-kzn2sx-md-win-6cdd9d8c98-dhswk, cluster kcp-upgrade-mzq8c9/kcp-upgrade-kzn2sx: dialing public load balancer at kcp-upgrade-kzn2sx-1ef28227.westeurope.cloudapp.azure.com: dial tcp 20.76.247.241:22: connect: connection timed out
STEP: Dumping workload cluster kcp-upgrade-mzq8c9/kcp-upgrade-kzn2sx kube-system pod logs
STEP: Fetching kube-system pod logs took 878.311743ms
STEP: Creating log watcher for controller kube-system/calico-node-jp4xn, container calico-node
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-kzn2sx-control-plane-p5gls, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-w97wc, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-windows-pltwh, container calico-node-felix
... skipping 20 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-sdpfc, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-s2mqv, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-windows-pltwh, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-kzn2sx-control-plane-qgj5v, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-kzn2sx-control-plane-p5gls, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-kzn2sx-control-plane-rnxrx, container kube-scheduler
STEP: Got error while iterating over activity logs for resource group capz-e2e-itn7r0: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000340006s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-mzq8c9" namespace
STEP: Deleting cluster kcp-upgrade-mzq8c9/kcp-upgrade-kzn2sx
STEP: Deleting cluster kcp-upgrade-kzn2sx
INFO: Waiting for the Cluster kcp-upgrade-mzq8c9/kcp-upgrade-kzn2sx to be deleted
STEP: Waiting for cluster kcp-upgrade-kzn2sx to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-kzn2sx-control-plane-p5gls, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-kfk72, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-mbwkz, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-kzn2sx-control-plane-p5gls, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-jp4xn, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-szkct, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-kzn2sx-control-plane-rnxrx, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-wgffp, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-8flzc, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-kzn2sx-control-plane-rnxrx, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-kzn2sx-control-plane-rnxrx, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-mnz6x, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-w97wc, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-zrqzq, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-kzn2sx-control-plane-rnxrx, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-mbwkz, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5v2nz, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-wh4p6, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-pltwh, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-kzn2sx-control-plane-p5gls, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-kcv4j, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-kzn2sx-control-plane-p5gls, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-pltwh, container calico-node-startup: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-mzq8c9
STEP: Redacting sensitive information from logs


• [SLOW TEST:3360.251 seconds]
... skipping 96 lines ...
STEP: Fetching activity logs took 603.400536ms
STEP: Dumping all the Cluster API resources in the "mhc-remediation-ijwmxi" namespace
STEP: Deleting cluster mhc-remediation-ijwmxi/mhc-remediation-hqmjoh
STEP: Deleting cluster mhc-remediation-hqmjoh
INFO: Waiting for the Cluster mhc-remediation-ijwmxi/mhc-remediation-hqmjoh to be deleted
STEP: Waiting for cluster mhc-remediation-hqmjoh to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-hqmjoh-control-plane-qwqtd, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-hqmjoh-control-plane-qwqtd, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-7c5dq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-gz6vv, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-hqmjoh-control-plane-fvf66, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-hqmjoh-control-plane-fvf66, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-hqmjoh-control-plane-qwqtd, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-sh6zh, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-hqmjoh-control-plane-qwqtd, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-g2hw4, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-hqmjoh-control-plane-fvf66, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-hqmjoh-control-plane-fvf66, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
INFO: Deleting namespace mhc-remediation-ijwmxi
STEP: Redacting sensitive information from logs


• [SLOW TEST:1133.793 seconds]
... skipping 61 lines ...
Nov  9 03:42:49.600: INFO: INFO: Collecting boot logs for AzureMachine md-scale-hhzqdr-md-0-l2m4l

Nov  9 03:42:50.060: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster md-scale-hhzqdr in namespace md-scale-9kg5pq

Nov  9 03:43:55.734: INFO: INFO: Collecting boot logs for AzureMachine md-scale-hhzqdr-md-win-2w9tf

Failed to get logs for machine md-scale-hhzqdr-md-win-66bbb844d-9jsdv, cluster md-scale-9kg5pq/md-scale-hhzqdr: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov  9 03:43:56.171: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster md-scale-hhzqdr in namespace md-scale-9kg5pq

Nov  9 03:44:29.549: INFO: INFO: Collecting boot logs for AzureMachine md-scale-hhzqdr-md-win-7htzn

Failed to get logs for machine md-scale-hhzqdr-md-win-66bbb844d-tnmc6, cluster md-scale-9kg5pq/md-scale-hhzqdr: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster md-scale-9kg5pq/md-scale-hhzqdr kube-system pod logs
STEP: Fetching kube-system pod logs took 1.135744731s
STEP: Dumping workload cluster md-scale-9kg5pq/md-scale-hhzqdr Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-windows-55jl2, container calico-node-felix
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-85mcs, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-6qmjd, container kube-proxy
... skipping 14 lines ...
STEP: Fetching activity logs took 663.776986ms
STEP: Dumping all the Cluster API resources in the "md-scale-9kg5pq" namespace
STEP: Deleting cluster md-scale-9kg5pq/md-scale-hhzqdr
STEP: Deleting cluster md-scale-hhzqdr
INFO: Waiting for the Cluster md-scale-9kg5pq/md-scale-hhzqdr to be deleted
STEP: Waiting for cluster md-scale-hhzqdr to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-mlls7, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-87w9x, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-55jl2, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-md-scale-hhzqdr-control-plane-gts8l, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-87w9x, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-kzn2m, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-md-scale-hhzqdr-control-plane-gts8l, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-5lkmd, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-bnc4x, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-85mcs, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-7rmnt, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-f24qk, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-frbcc, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-55jl2, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-md-scale-hhzqdr-control-plane-gts8l, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-md-scale-hhzqdr-control-plane-gts8l, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-6qmjd, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "md-scale" test spec
INFO: Deleting namespace md-scale-9kg5pq
STEP: Redacting sensitive information from logs


• [SLOW TEST:1290.502 seconds]
... skipping 62 lines ...
Nov  9 03:47:22.555: INFO: INFO: Collecting boot logs for AzureMachine machine-pool-17ahzi-control-plane-84zcl

Nov  9 03:47:23.824: INFO: INFO: Collecting logs for node win-p-win000002 in cluster machine-pool-17ahzi in namespace machine-pool-8xrcww

Nov  9 03:47:33.384: INFO: INFO: Collecting boot logs for VMSS instance 2 of scale set machine-pool-17ahzi-mp-0

Failed to get logs for machine pool machine-pool-17ahzi-mp-0, cluster machine-pool-8xrcww/machine-pool-17ahzi: [running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1]
Nov  9 03:47:34.229: INFO: INFO: Collecting logs for node win-p-win000002 in cluster machine-pool-17ahzi in namespace machine-pool-8xrcww

Nov  9 03:48:10.136: INFO: INFO: Collecting boot logs for VMSS instance 2 of scale set win-p-win

Failed to get logs for machine pool machine-pool-17ahzi-mp-win, cluster machine-pool-8xrcww/machine-pool-17ahzi: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster machine-pool-8xrcww/machine-pool-17ahzi kube-system pod logs
STEP: Fetching kube-system pod logs took 998.452731ms
STEP: Dumping workload cluster machine-pool-8xrcww/machine-pool-17ahzi Azure activity log
STEP: Creating log watcher for controller kube-system/etcd-machine-pool-17ahzi-control-plane-84zcl, container etcd
STEP: Creating log watcher for controller kube-system/calico-node-windows-ml85w, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-controller-manager-machine-pool-17ahzi-control-plane-84zcl, container kube-controller-manager
... skipping 11 lines ...
STEP: Fetching activity logs took 711.836845ms
STEP: Dumping all the Cluster API resources in the "machine-pool-8xrcww" namespace
STEP: Deleting cluster machine-pool-8xrcww/machine-pool-17ahzi
STEP: Deleting cluster machine-pool-17ahzi
INFO: Waiting for the Cluster machine-pool-8xrcww/machine-pool-17ahzi to be deleted
STEP: Waiting for cluster machine-pool-17ahzi to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-x2x7k, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-machine-pool-17ahzi-control-plane-84zcl, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-5hxlr, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-ml85w, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-2nt2m, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-ds5jg, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-2njc2, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-machine-pool-17ahzi-control-plane-84zcl, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-ml85w, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-mv2nf, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-machine-pool-17ahzi-control-plane-84zcl, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-27zr2, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-zqx8c, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-machine-pool-17ahzi-control-plane-84zcl, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "machine-pool" test spec
INFO: Deleting namespace machine-pool-8xrcww
STEP: Redacting sensitive information from logs


• [SLOW TEST:1529.967 seconds]
... skipping 56 lines ...
STEP: Dumping logs from the "node-drain-xmgy5j" workload cluster
STEP: Dumping workload cluster node-drain-jn1wsl/node-drain-xmgy5j logs
Nov  9 03:54:56.902: INFO: INFO: Collecting logs for node node-drain-xmgy5j-control-plane-qwg88 in cluster node-drain-xmgy5j in namespace node-drain-jn1wsl

Nov  9 03:57:07.576: INFO: INFO: Collecting boot logs for AzureMachine node-drain-xmgy5j-control-plane-qwg88

Failed to get logs for machine node-drain-xmgy5j-control-plane-2pkp6, cluster node-drain-jn1wsl/node-drain-xmgy5j: dialing public load balancer at node-drain-xmgy5j-e30e8c69.westeurope.cloudapp.azure.com: dial tcp 20.73.110.41:22: connect: connection timed out
STEP: Dumping workload cluster node-drain-jn1wsl/node-drain-xmgy5j kube-system pod logs
STEP: Fetching kube-system pod logs took 927.535317ms
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-4mq2h, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/etcd-node-drain-xmgy5j-control-plane-qwg88, container etcd
STEP: Creating log watcher for controller kube-system/kube-scheduler-node-drain-xmgy5j-control-plane-qwg88, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-controller-manager-node-drain-xmgy5j-control-plane-qwg88, container kube-controller-manager
... skipping 156 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-l2snq, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-ktc9m, container coredns
STEP: Creating log watcher for controller kube-system/kube-scheduler-clusterctl-upgrade-hoe36o-control-plane-qjxqv, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-bbwc6, container calico-node
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-pjb5p, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-proxy-hgxbn, container kube-proxy
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 240.352753ms
STEP: Dumping all the Cluster API resources in the "clusterctl-upgrade-ofrp9m" namespace
STEP: Deleting cluster clusterctl-upgrade-ofrp9m/clusterctl-upgrade-hoe36o
STEP: Deleting cluster clusterctl-upgrade-hoe36o
INFO: Waiting for the Cluster clusterctl-upgrade-ofrp9m/clusterctl-upgrade-hoe36o to be deleted
STEP: Waiting for cluster clusterctl-upgrade-hoe36o to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-9kln2, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-clusterctl-upgrade-hoe36o-control-plane-qjxqv, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-hgxbn, container kube-proxy: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-865c969d7-9l5fj, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-l2snq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-bbwc6, container calico-node: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-86b5f554dd-tvz8h, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-clusterctl-upgrade-hoe36o-control-plane-qjxqv, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-ktc9m, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-clusterctl-upgrade-hoe36o-control-plane-qjxqv, container kube-scheduler: http2: client connection lost
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-6bdc78c4d4-4srjj, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-clusterctl-upgrade-hoe36o-control-plane-qjxqv, container kube-apiserver: http2: client connection lost
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-7584cb676-j9vx6, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-ldhqc, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-pjb5p, container calico-kube-controllers: http2: client connection lost
STEP: Deleting namespace used for hosting the "clusterctl-upgrade" test spec
INFO: Deleting namespace clusterctl-upgrade-ofrp9m
STEP: Redacting sensitive information from logs


• [SLOW TEST:1802.184 seconds]
Running the Cluster API E2E tests
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:40
  upgrade from v1alpha3 to v1beta1, and scale workload clusters created in v1alpha3
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:233
    Should create a management cluster and then upgrade all the providers
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/e2e/clusterctl_upgrade.go:145
------------------------------
{"component":"entrypoint","file":"prow/entrypoint/run.go:165","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 4h0m0s timeout","severity":"error","time":"2021-11-09T06:32:16Z"}
++ early_exit_handler
++ '[' -n 161 ']'
++ kill -TERM 161
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 12 lines ...
Cleaning up after docker
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Stopping Docker: dockerProgram process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.
All sensitive variables are redacted
{"component":"entrypoint","file":"prow/entrypoint/run.go:255","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15m0s grace period","severity":"error","time":"2021-11-09T06:47:16Z"}
{"component":"entrypoint","error":"os: process already finished","file":"prow/entrypoint/run.go:257","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2021-11-09T06:47:16Z"}