PR shysank: [WIP] Increase parallelism for e2e tests
Result FAILURE
Tests 1 failed / 12 succeeded
Started 2021-11-05 08:16
Elapsed 1h29m
Revision c87d613eec1cc133b87eed82f5c7006d8b1500c0
Refs 1816

Test Failures


capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation 22m19s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sRunning\sthe\sCluster\sAPI\sE2E\stests\sShould\ssuccessfully\sremediate\sunhealthy\smachines\swith\sMachineHealthCheck\sShould\ssuccessfully\strigger\sKCP\sremediation$'
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/e2e/mhc_remediations.go:115
Failed to get controller-runtime client
Unexpected error:
    <*url.Error | 0xc00072ba70>: {
        Op: "Get",
        URL: "https://mhc-remediation-kef1cn-9c6e0ab1.northeurope.cloudapp.azure.com:6443/api?timeout=32s",
        Err: <*http.httpError | 0xc000a487c8>{
            err: "net/http: request canceled (Client.Timeout exceeded while awaiting headers)",
            timeout: true,
        },
    }
    Get "https://mhc-remediation-kef1cn-9c6e0ab1.northeurope.cloudapp.azure.com:6443/api?timeout=32s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
occurred
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/framework/cluster_proxy.go:171
stdout/stderr from junit.e2e_suite.2.xml



12 Passed Tests

10 Skipped Tests

Error lines from build-log.txt

... skipping 486 lines ...
Nov  5 08:34:33.751: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-fvha35-md-0-lzx34y-n5ns7

Nov  5 08:34:34.191: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster md-rollout-fvha35 in namespace md-rollout-1j6vhe

Nov  5 08:36:18.145: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-fvha35-md-win-fjrfc

Failed to get logs for machine md-rollout-fvha35-md-win-6dcb78cb9c-cpff5, cluster md-rollout-1j6vhe/md-rollout-fvha35: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Failed to get logs for machine md-rollout-fvha35-md-win-6dcb78cb9c-mjm99, cluster md-rollout-1j6vhe/md-rollout-fvha35: azuremachines.infrastructure.cluster.x-k8s.io "md-rollout-fvha35-md-win-l6cmw" not found
Nov  5 08:36:19.327: INFO: INFO: Collecting logs for node 10.1.0.8 in cluster md-rollout-fvha35 in namespace md-rollout-1j6vhe

Nov  5 08:36:58.319: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-fvha35-md-win-v9hzqu-j8sng

Failed to get logs for machine md-rollout-fvha35-md-win-f4c7db78c-2kt49, cluster md-rollout-1j6vhe/md-rollout-fvha35: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster md-rollout-1j6vhe/md-rollout-fvha35 kube-system pod logs
STEP: Fetching kube-system pod logs took 997.848023ms
STEP: Dumping workload cluster md-rollout-1j6vhe/md-rollout-fvha35 Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-np2fp, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-windows-2z4wn, container calico-node-felix
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-gf7t7, container coredns
... skipping 8 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-hxbvr, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-windows-2z4wn, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-wcm2v, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-zs6q2, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-md-rollout-fvha35-control-plane-b87ns, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-windows-qpqlt, container calico-node-felix
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
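
The 400 here is the service rejecting an empty `resourceGroupName` query parameter. A hedged sketch of a pre-flight guard that would skip the round trip entirely (the `validateResourceGroup` helper is hypothetical, not part of the suite):

```go
package main

import (
	"fmt"
	"strings"
)

// validateResourceGroup is a hypothetical guard: the activity-log List
// call returns Status=400 "Query parameter cannot be null empty or
// whitespace: resourceGroupName" for a blank name, so a caller could
// skip the request up front instead of round-tripping to the service.
func validateResourceGroup(name string) error {
	if strings.TrimSpace(name) == "" {
		return fmt.Errorf("resource group name is empty or whitespace; skipping activity log fetch")
	}
	return nil
}

func main() {
	fmt.Println(validateResourceGroup("") != nil)            // true
	fmt.Println(validateResourceGroup("capz-e2e-rg") == nil) // true
}
```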
STEP: Fetching activity logs took 253.349581ms
STEP: Dumping all the Cluster API resources in the "md-rollout-1j6vhe" namespace
STEP: Deleting cluster md-rollout-1j6vhe/md-rollout-fvha35
STEP: Deleting cluster md-rollout-fvha35
INFO: Waiting for the Cluster md-rollout-1j6vhe/md-rollout-fvha35 to be deleted
STEP: Waiting for cluster md-rollout-fvha35 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-md-rollout-fvha35-control-plane-b87ns, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-md-rollout-fvha35-control-plane-b87ns, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-2z4wn, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-l9hk7, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-wcm2v, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-2z4wn, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-6rx4h, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-md-rollout-fvha35-control-plane-b87ns, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-qpqlt, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-gf7t7, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-hxbvr, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-c5tnk, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-md-rollout-fvha35-control-plane-b87ns, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-zs6q2, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-qpqlt, container calico-node-startup: http2: client connection lost
STEP: Deleting namespace used for hosting the "md-rollout" test spec
INFO: Deleting namespace md-rollout-1j6vhe
STEP: Redacting sensitive information from logs


• [SLOW TEST:1319.673 seconds]
... skipping 56 lines ...
Nov  5 08:30:00.564: INFO: INFO: Collecting boot logs for AzureMachine quick-start-4gjp81-md-0-9mnhr

Nov  5 08:30:00.984: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster quick-start-4gjp81 in namespace quick-start-94ogsw

Nov  5 08:30:38.853: INFO: INFO: Collecting boot logs for AzureMachine quick-start-4gjp81-md-win-z2nz4

Failed to get logs for machine quick-start-4gjp81-md-win-dd5c7fc7f-4d6rp, cluster quick-start-94ogsw/quick-start-4gjp81: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov  5 08:30:39.295: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster quick-start-4gjp81 in namespace quick-start-94ogsw

Nov  5 08:47:23.445: INFO: INFO: Collecting boot logs for AzureMachine quick-start-4gjp81-md-win-s4pmq

Failed to get logs for machine quick-start-4gjp81-md-win-dd5c7fc7f-kn926, cluster quick-start-94ogsw/quick-start-4gjp81: [[running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1], running command "Get-NetIPAddress -IncludeAllCompartments": wait: remote command exited without exit status or exit signal]
STEP: Dumping workload cluster quick-start-94ogsw/quick-start-4gjp81 kube-system pod logs
STEP: Fetching kube-system pod logs took 1.00308195s
STEP: Dumping workload cluster quick-start-94ogsw/quick-start-4gjp81 Azure activity log
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-drw4f, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-quick-start-4gjp81-control-plane-88sgm, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-node-windows-gdtfl, container calico-node-felix
... skipping 8 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-windows-ffhpq, container calico-node-felix
STEP: Creating log watcher for controller kube-system/calico-node-windows-gdtfl, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-node-f5hwm, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-8zq5w, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-sgvdf, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-quick-start-4gjp81-control-plane-88sgm, container kube-controller-manager
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 227.296713ms
STEP: Dumping all the Cluster API resources in the "quick-start-94ogsw" namespace
STEP: Deleting cluster quick-start-94ogsw/quick-start-4gjp81
STEP: Deleting cluster quick-start-4gjp81
INFO: Waiting for the Cluster quick-start-94ogsw/quick-start-4gjp81 to be deleted
STEP: Waiting for cluster quick-start-4gjp81 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-quick-start-4gjp81-control-plane-88sgm, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-ffhpq, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-lttkc, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-sq29j, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-drw4f, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-gdtfl, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-gdtfl, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-l45ns, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-ffhpq, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-quick-start-4gjp81-control-plane-88sgm, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-497cz, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-quick-start-4gjp81-control-plane-88sgm, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-l4m5z, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-sgvdf, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-8zq5w, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-quick-start-4gjp81-control-plane-88sgm, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-f5hwm, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "quick-start" test spec
INFO: Deleting namespace quick-start-94ogsw
STEP: Redacting sensitive information from logs


• [SLOW TEST:1865.171 seconds]
... skipping 92 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-fxtss, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-g8vjp, container calico-node
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-sh2klo-control-plane-5nnrk, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-wkzfg, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-sh2klo-control-plane-5nnrk, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-wjt89, container calico-kube-controllers
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 209.419627ms
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-3g8fgf" namespace
STEP: Deleting cluster kcp-upgrade-3g8fgf/kcp-upgrade-sh2klo
STEP: Deleting cluster kcp-upgrade-sh2klo
INFO: Waiting for the Cluster kcp-upgrade-3g8fgf/kcp-upgrade-sh2klo to be deleted
STEP: Waiting for cluster kcp-upgrade-sh2klo to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-sh2klo-control-plane-d4ngl, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-sh2klo-control-plane-c9kqm, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-sh2klo-control-plane-5nnrk, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-sh2klo-control-plane-d4ngl, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-gp5sg, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-wx58n, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-sh2klo-control-plane-5nnrk, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-xcdqv, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-slkhc, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-sh2klo-control-plane-d4ngl, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-g8vjp, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-sh2klo-control-plane-5nnrk, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-sh2klo-control-plane-c9kqm, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vcwhd, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-sh2klo-control-plane-c9kqm, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-wkzfg, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-fxtss, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-sh2klo-control-plane-d4ngl, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-sh2klo-control-plane-c9kqm, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-sh2klo-control-plane-5nnrk, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-wjt89, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-52wz8, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-bchlm, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-3g8fgf
STEP: Redacting sensitive information from logs


• [SLOW TEST:2110.270 seconds]
... skipping 75 lines ...
Nov  5 08:51:56.705: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-056ym7-md-0-2tw5h

Nov  5 08:51:57.196: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster kcp-upgrade-056ym7 in namespace kcp-upgrade-070ewi

Nov  5 08:52:28.064: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-056ym7-md-win-vkz4s

Failed to get logs for machine kcp-upgrade-056ym7-md-win-6c8bd797bb-mbvz5, cluster kcp-upgrade-070ewi/kcp-upgrade-056ym7: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov  5 08:52:28.965: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster kcp-upgrade-056ym7 in namespace kcp-upgrade-070ewi

Nov  5 08:53:06.444: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-056ym7-md-win-zd785

Failed to get logs for machine kcp-upgrade-056ym7-md-win-6c8bd797bb-ngdvj, cluster kcp-upgrade-070ewi/kcp-upgrade-056ym7: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster kcp-upgrade-070ewi/kcp-upgrade-056ym7 kube-system pod logs
STEP: Fetching kube-system pod logs took 855.731261ms
STEP: Dumping workload cluster kcp-upgrade-070ewi/kcp-upgrade-056ym7 Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-ddtzb, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-056ym7-control-plane-lt2zs, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-xhl2q, container kube-proxy
... skipping 20 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-windows-gfdwv, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-proxy-ktw8l, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-windows-mtkll, container calico-node-felix
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-056ym7-control-plane-hnr8m, container kube-apiserver
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-flh4t, container coredns
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-056ym7-control-plane-hnr8m, container etcd
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 237.640981ms
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-070ewi" namespace
STEP: Deleting cluster kcp-upgrade-070ewi/kcp-upgrade-056ym7
STEP: Deleting cluster kcp-upgrade-056ym7
INFO: Waiting for the Cluster kcp-upgrade-070ewi/kcp-upgrade-056ym7 to be deleted
STEP: Waiting for cluster kcp-upgrade-056ym7 to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-xglsb, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-056ym7-control-plane-fm9pl, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-ktw8l, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-gfdwv, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-056ym7-control-plane-hnr8m, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-056ym7-control-plane-fm9pl, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-gfdwv, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-qv4qb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-vf58z, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-wpttd, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-ddtzb, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-v7vfn, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-cq9n8, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-flh4t, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-xhl2q, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-056ym7-control-plane-lt2zs, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-mtkll, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-056ym7-control-plane-lt2zs, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-mtkll, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-056ym7-control-plane-hnr8m, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-056ym7-control-plane-lt2zs, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-056ym7-control-plane-fm9pl, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-kspf9, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-056ym7-control-plane-fm9pl, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-056ym7-control-plane-hnr8m, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-txzdc, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-056ym7-control-plane-lt2zs, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-056ym7-control-plane-hnr8m, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-s5tr9, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-070ewi
STEP: Redacting sensitive information from logs


• [SLOW TEST:2192.076 seconds]
... skipping 57 lines ...
STEP: Dumping logs from the "kcp-upgrade-gct6da" workload cluster
STEP: Dumping workload cluster kcp-upgrade-sckqzs/kcp-upgrade-gct6da logs
Nov  5 08:36:47.555: INFO: INFO: Collecting logs for node kcp-upgrade-gct6da-control-plane-php2q in cluster kcp-upgrade-gct6da in namespace kcp-upgrade-sckqzs

Nov  5 08:38:58.317: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-gct6da-control-plane-php2q

Failed to get logs for machine kcp-upgrade-gct6da-control-plane-xbvnm, cluster kcp-upgrade-sckqzs/kcp-upgrade-gct6da: dialing public load balancer at kcp-upgrade-gct6da-edf093d5.northeurope.cloudapp.azure.com: dial tcp 20.67.172.79:22: connect: connection timed out
Nov  5 08:39:00.100: INFO: INFO: Collecting logs for node kcp-upgrade-gct6da-md-0-qtkl8 in cluster kcp-upgrade-gct6da in namespace kcp-upgrade-sckqzs

Nov  5 08:41:09.389: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-gct6da-md-0-qtkl8

Failed to get logs for machine kcp-upgrade-gct6da-md-0-b948d8d96-tdlb8, cluster kcp-upgrade-sckqzs/kcp-upgrade-gct6da: dialing public load balancer at kcp-upgrade-gct6da-edf093d5.northeurope.cloudapp.azure.com: dial tcp 20.67.172.79:22: connect: connection timed out
Nov  5 08:41:10.830: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster kcp-upgrade-gct6da in namespace kcp-upgrade-sckqzs

Nov  5 08:47:42.609: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-gct6da-md-win-hsbhv

Failed to get logs for machine kcp-upgrade-gct6da-md-win-747d9bc979-bpqwr, cluster kcp-upgrade-sckqzs/kcp-upgrade-gct6da: dialing public load balancer at kcp-upgrade-gct6da-edf093d5.northeurope.cloudapp.azure.com: dial tcp 20.67.172.79:22: connect: connection timed out
Nov  5 08:47:43.844: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster kcp-upgrade-gct6da in namespace kcp-upgrade-sckqzs

Nov  5 08:54:15.821: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-gct6da-md-win-7j8f9

Failed to get logs for machine kcp-upgrade-gct6da-md-win-747d9bc979-hmltm, cluster kcp-upgrade-sckqzs/kcp-upgrade-gct6da: dialing public load balancer at kcp-upgrade-gct6da-edf093d5.northeurope.cloudapp.azure.com: dial tcp 20.67.172.79:22: connect: connection timed out
STEP: Dumping workload cluster kcp-upgrade-sckqzs/kcp-upgrade-gct6da kube-system pod logs
STEP: Fetching kube-system pod logs took 1.039557359s
STEP: Dumping workload cluster kcp-upgrade-sckqzs/kcp-upgrade-gct6da Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-s2sss, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-z8xv6, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-windows-m9g7r, container calico-node-startup
... skipping 8 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-7gcjk, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-gct6da-control-plane-php2q, container kube-apiserver
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-gct6da-control-plane-php2q, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-6kg2q, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-gct6da-control-plane-php2q, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-gct6da-control-plane-php2q, container kube-scheduler
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 212.978833ms
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-sckqzs" namespace
STEP: Deleting cluster kcp-upgrade-sckqzs/kcp-upgrade-gct6da
STEP: Deleting cluster kcp-upgrade-gct6da
INFO: Waiting for the Cluster kcp-upgrade-sckqzs/kcp-upgrade-gct6da to be deleted
STEP: Waiting for cluster kcp-upgrade-gct6da to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-pd7kz, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-7gcjk, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-gct6da-control-plane-php2q, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-kcwx7, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-gct6da-control-plane-php2q, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-xs7sx, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-gct6da-control-plane-php2q, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-bvjt9, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-bvjt9, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-jf86c, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-6kg2q, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-z8xv6, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-mshw9, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-m9g7r, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-s2sss, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-gct6da-control-plane-php2q, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-m9g7r, container calico-node-felix: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-sckqzs
STEP: Redacting sensitive information from logs


• [SLOW TEST:2262.318 seconds]
... skipping 8 lines ...
  Should pivot the bootstrap cluster to a self-hosted cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:110

Node Id (1 Indexed): 3
STEP: Creating namespace "self-hosted" for hosting the cluster
Nov  5 08:45:32.172: INFO: starting to create namespace for hosting the "self-hosted" test spec
2021/11/05 08:45:32 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-uynevq" using the "management" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-uynevq --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 1 --worker-machine-count 1 --flavor management
... skipping 73 lines ...
STEP: Fetching activity logs took 574.522072ms
STEP: Dumping all the Cluster API resources in the "self-hosted" namespace
STEP: Deleting all clusters in the self-hosted namespace
STEP: Deleting cluster self-hosted-uynevq
INFO: Waiting for the Cluster self-hosted/self-hosted-uynevq to be deleted
STEP: Waiting for cluster self-hosted-uynevq to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-74gf8, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-self-hosted-uynevq-control-plane-hghvr, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-self-hosted-uynevq-control-plane-hghvr, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-td5lt, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-9zzsv, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-self-hosted-uynevq-control-plane-hghvr, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-n56g2, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-msnrt, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-self-hosted-uynevq-control-plane-hghvr, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "self-hosted" test spec
INFO: Deleting namespace self-hosted
STEP: Checking if any resources are left over in Azure for spec "self-hosted"
STEP: Redacting sensitive information from logs
STEP: Redacting sensitive information from logs

... skipping 55 lines ...
STEP: Dumping workload cluster kcp-adoption-88h7pr/kcp-adoption-n4zihp Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-gndxk, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-adoption-n4zihp-control-plane-0, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-node-xdzns, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-pmx5j, container coredns
STEP: Creating log watcher for controller kube-system/etcd-kcp-adoption-n4zihp-control-plane-0, container etcd
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 240.557361ms
STEP: Dumping all the Cluster API resources in the "kcp-adoption-88h7pr" namespace
STEP: Deleting cluster kcp-adoption-88h7pr/kcp-adoption-n4zihp
STEP: Deleting cluster kcp-adoption-n4zihp
INFO: Waiting for the Cluster kcp-adoption-88h7pr/kcp-adoption-n4zihp to be deleted
STEP: Waiting for cluster kcp-adoption-n4zihp to be deleted
... skipping 75 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-nhrtm, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-9rgtw, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-kns6q, container coredns
STEP: Creating log watcher for controller kube-system/kube-controller-manager-mhc-remediation-hmoxyr-control-plane-m2c4v, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-node-6l94d, container calico-node
STEP: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-hmoxyr-control-plane-m2c4v, container kube-scheduler
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 268.709408ms
STEP: Dumping all the Cluster API resources in the "mhc-remediation-fmohtq" namespace
STEP: Deleting cluster mhc-remediation-fmohtq/mhc-remediation-hmoxyr
STEP: Deleting cluster mhc-remediation-hmoxyr
INFO: Waiting for the Cluster mhc-remediation-fmohtq/mhc-remediation-hmoxyr to be deleted
STEP: Waiting for cluster mhc-remediation-hmoxyr to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-9rgtw, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-hp5p2, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
INFO: Deleting namespace mhc-remediation-fmohtq
STEP: Redacting sensitive information from logs


• [SLOW TEST:1094.142 seconds]
... skipping 52 lines ...
STEP: Dumping logs from the "mhc-remediation-kef1cn" workload cluster
STEP: Dumping workload cluster mhc-remediation-sw1a5i/mhc-remediation-kef1cn logs
Nov  5 09:10:17.314: INFO: INFO: Collecting logs for node mhc-remediation-kef1cn-control-plane-s9pxg in cluster mhc-remediation-kef1cn in namespace mhc-remediation-sw1a5i

Nov  5 09:12:27.409: INFO: INFO: Collecting boot logs for AzureMachine mhc-remediation-kef1cn-control-plane-s9pxg

Failed to get logs for machine mhc-remediation-kef1cn-control-plane-8phkx, cluster mhc-remediation-sw1a5i/mhc-remediation-kef1cn: [dialing public load balancer at mhc-remediation-kef1cn-9c6e0ab1.northeurope.cloudapp.azure.com: dial tcp 20.93.45.130:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachinesClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ResourceNotFound" Message="The Resource 'Microsoft.Compute/virtualMachines/mhc-remediation-kef1cn-control-plane-s9pxg' under resource group 'mhc-remediation-kef1cn' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"]
Nov  5 09:12:27.983: INFO: INFO: Collecting logs for node mhc-remediation-kef1cn-control-plane-6zcqg in cluster mhc-remediation-kef1cn in namespace mhc-remediation-sw1a5i

Nov  5 09:12:43.639: INFO: INFO: Collecting boot logs for AzureMachine mhc-remediation-kef1cn-control-plane-6zcqg

Failed to get logs for machine mhc-remediation-kef1cn-control-plane-cxznj, cluster mhc-remediation-sw1a5i/mhc-remediation-kef1cn: dialing public load balancer at mhc-remediation-kef1cn-9c6e0ab1.northeurope.cloudapp.azure.com: dial tcp 20.93.45.130:22: connect: connection refused
Nov  5 09:12:44.720: INFO: INFO: Collecting logs for node mhc-remediation-kef1cn-control-plane-xt9kr in cluster mhc-remediation-kef1cn in namespace mhc-remediation-sw1a5i

Nov  5 09:12:44.872: INFO: INFO: Collecting boot logs for AzureMachine mhc-remediation-kef1cn-control-plane-xt9kr

Failed to get logs for machine mhc-remediation-kef1cn-control-plane-sfk4p, cluster mhc-remediation-sw1a5i/mhc-remediation-kef1cn: dialing public load balancer at mhc-remediation-kef1cn-9c6e0ab1.northeurope.cloudapp.azure.com: dial tcp 20.93.45.130:22: connect: connection refused
Nov  5 09:12:45.837: INFO: INFO: Collecting logs for node mhc-remediation-kef1cn-md-0-4p7dt in cluster mhc-remediation-kef1cn in namespace mhc-remediation-sw1a5i

Nov  5 09:12:45.989: INFO: INFO: Collecting boot logs for AzureMachine mhc-remediation-kef1cn-md-0-4p7dt

Failed to get logs for machine mhc-remediation-kef1cn-md-0-55cc985dbf-zbrdx, cluster mhc-remediation-sw1a5i/mhc-remediation-kef1cn: dialing public load balancer at mhc-remediation-kef1cn-9c6e0ab1.northeurope.cloudapp.azure.com: dial tcp 20.93.45.130:22: connect: connection refused
STEP: Dumping workload cluster mhc-remediation-sw1a5i/mhc-remediation-kef1cn kube-system pod logs
STEP: Fetching kube-system pod logs took 1.045600215s
STEP: Dumping workload cluster mhc-remediation-sw1a5i/mhc-remediation-kef1cn Azure activity log
STEP: Creating log watcher for controller kube-system/etcd-mhc-remediation-kef1cn-control-plane-xt9kr, container etcd
STEP: Creating log watcher for controller kube-system/calico-node-nlsnv, container calico-node
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-bmmdw, container calico-kube-controllers
... skipping 8 lines ...
STEP: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-kef1cn-control-plane-xt9kr, container kube-scheduler
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-gr7dt, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-g799x, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-vprvk, container coredns
STEP: Creating log watcher for controller kube-system/etcd-mhc-remediation-kef1cn-control-plane-6zcqg, container etcd
STEP: Creating log watcher for controller kube-system/calico-node-vkgqz, container calico-node
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 245.163418ms
STEP: Dumping all the Cluster API resources in the "mhc-remediation-sw1a5i" namespace
STEP: Deleting cluster mhc-remediation-sw1a5i/mhc-remediation-kef1cn
STEP: Deleting cluster mhc-remediation-kef1cn
INFO: Waiting for the Cluster mhc-remediation-sw1a5i/mhc-remediation-kef1cn to be deleted
STEP: Waiting for cluster mhc-remediation-kef1cn to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-mvk9f, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-bmmdw, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-vprvk, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-4tsh2, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
INFO: Deleting namespace mhc-remediation-sw1a5i
STEP: Redacting sensitive information from logs


• Failure [1339.728 seconds]
Running the Cluster API E2E tests
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:40
  Should successfully remediate unhealthy machines with MachineHealthCheck
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:173
    Should successfully trigger KCP remediation [It]
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/e2e/mhc_remediations.go:115

    Failed to get controller-runtime client
    Unexpected error:
        <*url.Error | 0xc00072ba70>: {
            Op: "Get",
            URL: "https://mhc-remediation-kef1cn-9c6e0ab1.northeurope.cloudapp.azure.com:6443/api?timeout=32s",
            Err: <*http.httpError | 0xc000a487c8>{
                err: "net/http: request canceled (Client.Timeout exceeded while awaiting headers)",
                timeout: true,
            },
... skipping 114 lines ...
Nov  5 09:18:27.467: INFO: INFO: Collecting boot logs for AzureMachine machine-pool-y41zdg-control-plane-nrvxq

Nov  5 09:18:29.112: INFO: INFO: Collecting logs for node win-p-win000002 in cluster machine-pool-y41zdg in namespace machine-pool-805rrf

Nov  5 09:19:05.606: INFO: INFO: Collecting boot logs for VMSS instance 2 of scale set machine-pool-y41zdg-mp-0

Failed to get logs for machine pool machine-pool-y41zdg-mp-0, cluster machine-pool-805rrf/machine-pool-y41zdg: [running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1]
Nov  5 09:19:06.286: INFO: INFO: Collecting logs for node win-p-win000002 in cluster machine-pool-y41zdg in namespace machine-pool-805rrf

Nov  5 09:20:13.444: INFO: INFO: Collecting boot logs for VMSS instance 2 of scale set win-p-win

Failed to get logs for machine pool machine-pool-y41zdg-mp-win, cluster machine-pool-805rrf/machine-pool-y41zdg: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster machine-pool-805rrf/machine-pool-y41zdg kube-system pod logs
STEP: Fetching kube-system pod logs took 1.011717514s
STEP: Dumping workload cluster machine-pool-805rrf/machine-pool-y41zdg Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-76g9f, container coredns
STEP: Creating log watcher for controller kube-system/kube-apiserver-machine-pool-y41zdg-control-plane-nrvxq, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-s4bql, container kube-proxy
... skipping 9 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-5gb5p, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-zx7vh, container calico-node
STEP: Creating log watcher for controller kube-system/etcd-machine-pool-y41zdg-control-plane-nrvxq, container etcd
STEP: Creating log watcher for controller kube-system/calico-node-9xdj2, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-pgptj, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-57lmh, container coredns
STEP: Error starting logs stream for pod kube-system/kube-proxy-pgptj, container kube-proxy: pods "machine-pool-y41zdg-mp-0000000" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-6hf2f, container kube-proxy: pods "machine-pool-y41zdg-mp-0000001" not found
STEP: Error starting logs stream for pod kube-system/calico-node-p8g4m, container calico-node: pods "machine-pool-y41zdg-mp-0000000" not found
STEP: Error starting logs stream for pod kube-system/calico-node-zx7vh, container calico-node: pods "machine-pool-y41zdg-mp-0000001" not found
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 364.755974ms
STEP: Dumping all the Cluster API resources in the "machine-pool-805rrf" namespace
STEP: Deleting cluster machine-pool-805rrf/machine-pool-y41zdg
STEP: Deleting cluster machine-pool-y41zdg
INFO: Waiting for the Cluster machine-pool-805rrf/machine-pool-y41zdg to be deleted
STEP: Waiting for cluster machine-pool-y41zdg to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-fx2w9, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-57lmh, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-machine-pool-y41zdg-control-plane-nrvxq, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-machine-pool-y41zdg-control-plane-nrvxq, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-s4bql, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-5gb5p, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-nzjwm, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-machine-pool-y41zdg-control-plane-nrvxq, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-k2r54, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-76g9f, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-fx2w9, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-machine-pool-y41zdg-control-plane-nrvxq, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "machine-pool" test spec
INFO: Deleting namespace machine-pool-805rrf
STEP: Redacting sensitive information from logs


• [SLOW TEST:1657.411 seconds]
... skipping 62 lines ...
Nov  5 09:15:33.161: INFO: INFO: Collecting boot logs for AzureMachine md-scale-db7536-md-0-pqzjq

Nov  5 09:15:33.809: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster md-scale-db7536 in namespace md-scale-040aax

Nov  5 09:16:09.407: INFO: INFO: Collecting boot logs for AzureMachine md-scale-db7536-md-win-kpbkn

Failed to get logs for machine md-scale-db7536-md-win-99959558d-c5kpp, cluster md-scale-040aax/md-scale-db7536: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov  5 09:16:09.881: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster md-scale-db7536 in namespace md-scale-040aax

Nov  5 09:17:23.635: INFO: INFO: Collecting boot logs for AzureMachine md-scale-db7536-md-win-f65c4

Failed to get logs for machine md-scale-db7536-md-win-99959558d-spgln, cluster md-scale-040aax/md-scale-db7536: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster md-scale-040aax/md-scale-db7536 kube-system pod logs
STEP: Fetching kube-system pod logs took 999.09065ms
STEP: Dumping workload cluster md-scale-040aax/md-scale-db7536 Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-kswkv, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-5pz7b, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-windows-vq55w, container calico-node-startup
... skipping 8 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-jgbt7, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-md-scale-db7536-control-plane-kkz5g, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-node-windows-46mpn, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-controller-manager-md-scale-db7536-control-plane-kkz5g, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-qfv6n, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-l9mmp, container kube-proxy
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 247.636278ms
STEP: Dumping all the Cluster API resources in the "md-scale-040aax" namespace
STEP: Deleting cluster md-scale-040aax/md-scale-db7536
STEP: Deleting cluster md-scale-db7536
INFO: Waiting for the Cluster md-scale-040aax/md-scale-db7536 to be deleted
STEP: Waiting for cluster md-scale-db7536 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-8287z, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-46mpn, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-md-scale-db7536-control-plane-kkz5g, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-46mpn, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-vq55w, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-l9mmp, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-md-scale-db7536-control-plane-kkz5g, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-kswkv, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5szv4, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-qfv6n, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-xgz7r, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-jgbt7, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-md-scale-db7536-control-plane-kkz5g, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-5pz7b, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-mtvpd, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-md-scale-db7536-control-plane-kkz5g, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-vq55w, container calico-node-startup: http2: client connection lost
STEP: Deleting namespace used for hosting the "md-scale" test spec
INFO: Deleting namespace md-scale-040aax
STEP: Redacting sensitive information from logs


• [SLOW TEST:1625.885 seconds]
... skipping 57 lines ...
STEP: Dumping logs from the "node-drain-q2cp7m" workload cluster
STEP: Dumping workload cluster node-drain-3bz12f/node-drain-q2cp7m logs
Nov  5 09:32:27.359: INFO: INFO: Collecting logs for node node-drain-q2cp7m-control-plane-mth7g in cluster node-drain-q2cp7m in namespace node-drain-3bz12f

Nov  5 09:34:38.605: INFO: INFO: Collecting boot logs for AzureMachine node-drain-q2cp7m-control-plane-mth7g

Failed to get logs for machine node-drain-q2cp7m-control-plane-f5rdk, cluster node-drain-3bz12f/node-drain-q2cp7m: dialing public load balancer at node-drain-q2cp7m-37efb5c1.northeurope.cloudapp.azure.com: dial tcp 20.93.42.142:22: connect: connection timed out
STEP: Dumping workload cluster node-drain-3bz12f/node-drain-q2cp7m kube-system pod logs
STEP: Fetching kube-system pod logs took 924.761333ms
STEP: Dumping workload cluster node-drain-3bz12f/node-drain-q2cp7m Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-tlmk9, container coredns
STEP: Creating log watcher for controller kube-system/kube-controller-manager-node-drain-q2cp7m-control-plane-mth7g, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-m7p56, container kube-proxy
STEP: Creating log watcher for controller kube-system/etcd-node-drain-q2cp7m-control-plane-mth7g, container etcd
STEP: Creating log watcher for controller kube-system/kube-apiserver-node-drain-q2cp7m-control-plane-mth7g, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-node-ccq45, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-ds7mh, container coredns
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-x7ksz, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-scheduler-node-drain-q2cp7m-control-plane-mth7g, container kube-scheduler
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 200.840666ms
STEP: Dumping all the Cluster API resources in the "node-drain-3bz12f" namespace
STEP: Deleting cluster node-drain-3bz12f/node-drain-q2cp7m
STEP: Deleting cluster node-drain-q2cp7m
INFO: Waiting for the Cluster node-drain-3bz12f/node-drain-q2cp7m to be deleted
STEP: Waiting for cluster node-drain-q2cp7m to be deleted
... skipping 145 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-6fvnm, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-94ttm, container coredns
STEP: Creating log watcher for controller kube-system/kube-scheduler-clusterctl-upgrade-1nlvpp-control-plane-8lcw9, container kube-scheduler
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-xmzhh, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-ggds6, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-pzjt8, container kube-proxy
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 271.392848ms
STEP: Dumping all the Cluster API resources in the "clusterctl-upgrade-0sb5gh" namespace
STEP: Deleting cluster clusterctl-upgrade-0sb5gh/clusterctl-upgrade-1nlvpp
STEP: Deleting cluster clusterctl-upgrade-1nlvpp
INFO: Waiting for the Cluster clusterctl-upgrade-0sb5gh/clusterctl-upgrade-1nlvpp to be deleted
STEP: Waiting for cluster clusterctl-upgrade-1nlvpp to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-pzjt8, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-94ttm, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-clusterctl-upgrade-1nlvpp-control-plane-8lcw9, container kube-scheduler: http2: client connection lost
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-7687cd95b7-fx8p5, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-clusterctl-upgrade-1nlvpp-control-plane-8lcw9, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-xmzhh, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-6fvnm, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-ggds6, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-clusterctl-upgrade-1nlvpp-control-plane-8lcw9, container kube-apiserver: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-865c969d7-g7xhr, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-86b5f554dd-mbrtg, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-2b6cf, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-clusterctl-upgrade-1nlvpp-control-plane-8lcw9, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-shwkj, container kube-proxy: http2: client connection lost
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-6bdc78c4d4-cp44j, container manager: http2: client connection lost
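The repeated `http2: client connection lost` failures above are transient transport errors hit while streaming pod logs during teardown. A minimal retry wrapper (a sketch only — the commented `kubectl logs` invocation is illustrative, and the workload cluster behind these pods may already be deleted) could make that log collection more robust:

```shell
# Sketch: retry a flaky command (e.g. log streaming that dies with
# "http2: client connection lost") a few times before giving up.
retry() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    "$@" && return 0            # success: stop retrying
    echo "attempt $i/$attempts failed: $*" >&2
    i=$((i + 1))
    sleep 1
  done
  return 1                      # all attempts failed
}

# Illustrative only -- this pod/cluster may already be torn down:
# retry 3 kubectl logs -n kube-system kube-proxy-pzjt8 -c kube-proxy
```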
STEP: Deleting namespace used for hosting the "clusterctl-upgrade" test spec
INFO: Deleting namespace clusterctl-upgrade-0sb5gh
STEP: Redacting sensitive information from logs


• [SLOW TEST:1900.644 seconds]
... skipping 7 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck [It] Should successfully trigger KCP remediation 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/framework/cluster_proxy.go:171

Ran 13 of 23 Specs in 4977.952 seconds
FAIL! -- 12 Passed | 1 Failed | 0 Pending | 10 Skipped
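To reproduce only the failed spec, the spec name from the summary above can be turned into the escaped `--ginkgo.focus` regex that the job header shows (a sketch: hyphens become `\-`, spaces become `\s`, and the name is anchored with `$`):

```shell
# Rebuild the escaped ginkgo.focus regex for the one failed spec.
name='capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation'
focus="$(printf '%s' "$name" | sed -e 's/-/\\-/g' -e 's/ /\\s/g')\$"
echo "go run hack/e2e.go -v --test --test_args='--ginkgo.focus=${focus}'"
```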


Ginkgo ran 1 suite in 1h24m18.88411856s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc
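The notice above lists two equivalent ways to silence itself; as one-liners, straight from the message:

```shell
# Option 1: acknowledge the Ginkgo 2.0 RC notice for this shell/session.
export ACK_GINKGO_RC=true
# Option 2: drop a persistent marker file in $HOME instead.
touch "$HOME/.ack-ginkgo-rc"
```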
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...