PR shysank: [WIP] Increase parallelism for e2e tests
Result FAILURE
Tests 1 failed / 12 succeeded
Started 2021-11-06 06:18
Elapsed 1h27m
Revision 71773565512673c7857e1d7ac9d7cce30eabde82
Refs 1816

Test Failures


capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation (18m28s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sRunning\sthe\sCluster\sAPI\sE2E\stests\sShould\ssuccessfully\sremediate\sunhealthy\smachines\swith\sMachineHealthCheck\sShould\ssuccessfully\strigger\sKCP\sremediation$'
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/e2e/mhc_remediations.go:115
Failed to get controller-runtime client
Unexpected error:
    <*url.Error | 0xc000e3d6e0>: {
        Op: "Get",
        URL: "https://mhc-remediation-4n0r8x-9e03adf1.eastus.cloudapp.azure.com:6443/api?timeout=32s",
        Err: <*http.httpError | 0xc0004ca8a0>{
            err: "net/http: request canceled (Client.Timeout exceeded while awaiting headers)",
            timeout: true,
        },
    }
    Get "https://mhc-remediation-4n0r8x-9e03adf1.eastus.cloudapp.azure.com:6443/api?timeout=32s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
occurred
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/framework/cluster_proxy.go:171
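
The failure means the test framework could not reach the workload cluster's API server: the controller-runtime client's first request to the discovery endpoint (the "/api?timeout=32s" URL above) timed out client-side. Below is a minimal, hypothetical Go sketch of that kind of client construction; it illustrates typical client-go/controller-runtime usage, not the actual code at framework/cluster_proxy.go:171, and the helper name newWorkloadClient is invented for this example.

// Hypothetical sketch: building a controller-runtime client for a workload
// cluster from its kubeconfig. Creating the client triggers API discovery
// against "<apiserver>/api"; when the control plane is unreachable, this
// surfaces as a *url.Error wrapping a client-side timeout, as seen above.
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/tools/clientcmd"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func newWorkloadClient(kubeconfigPath string) (client.Client, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		return nil, fmt.Errorf("loading kubeconfig: %w", err)
	}
	// Bound each request; 32s matches the timeout in the failing URL.
	cfg.Timeout = 32 * time.Second
	c, err := client.New(cfg, client.Options{})
	if err != nil {
		return nil, fmt.Errorf("failed to get controller-runtime client: %w", err)
	}
	return c, nil
}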
				



12 Passed Tests

10 Skipped Tests

Error lines from build-log.txt

... skipping 475 lines ...
Nov  6 06:32:51.100: INFO: INFO: Collecting boot logs for AzureMachine quick-start-9emfzn-md-0-g7glf

Nov  6 06:32:51.436: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster quick-start-9emfzn in namespace quick-start-e5uhla

Nov  6 06:33:17.322: INFO: INFO: Collecting boot logs for AzureMachine quick-start-9emfzn-md-win-cs7fw

Failed to get logs for machine quick-start-9emfzn-md-win-7fdb56c97d-55vzl, cluster quick-start-e5uhla/quick-start-9emfzn: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov  6 06:33:17.619: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster quick-start-9emfzn in namespace quick-start-e5uhla

Nov  6 06:33:42.551: INFO: INFO: Collecting boot logs for AzureMachine quick-start-9emfzn-md-win-wcxvx

Failed to get logs for machine quick-start-9emfzn-md-win-7fdb56c97d-f9ptg, cluster quick-start-e5uhla/quick-start-9emfzn: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster quick-start-e5uhla/quick-start-9emfzn kube-system pod logs
STEP: Creating log watcher for controller kube-system/calico-node-v7g56, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-windows-b56dm, container calico-node-felix
STEP: Creating log watcher for controller kube-system/calico-node-windows-zt8zn, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-k78r4, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/etcd-quick-start-9emfzn-control-plane-c5p8b, container etcd
... skipping 8 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-qjrjq, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-zpqgl, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-windows-b56dm, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-proxy-kvfxn, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-quick-start-9emfzn-control-plane-c5p8b, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-apiserver-quick-start-9emfzn-control-plane-c5p8b, container kube-apiserver
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 238.908123ms
STEP: Dumping all the Cluster API resources in the "quick-start-e5uhla" namespace
STEP: Deleting cluster quick-start-e5uhla/quick-start-9emfzn
STEP: Deleting cluster quick-start-9emfzn
INFO: Waiting for the Cluster quick-start-e5uhla/quick-start-9emfzn to be deleted
STEP: Waiting for cluster quick-start-9emfzn to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-nctwj, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-zt8zn, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-zpqgl, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-quick-start-9emfzn-control-plane-c5p8b, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-v7g56, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-quick-start-9emfzn-control-plane-c5p8b, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-b9g4j, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-k78r4, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-qjrjq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-w66vr, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-zt8zn, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-b56dm, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-quick-start-9emfzn-control-plane-c5p8b, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-kvfxn, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-b56dm, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-v8x7x, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-quick-start-9emfzn-control-plane-c5p8b, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "quick-start" test spec
INFO: Deleting namespace quick-start-e5uhla
STEP: Redacting sensitive information from logs


• [SLOW TEST:905.466 seconds]
... skipping 8 lines ...
  Should pivot the bootstrap cluster to a self-hosted cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:110

Node Id (1 Indexed): 4
STEP: Creating namespace "self-hosted" for hosting the cluster
Nov  6 06:40:19.540: INFO: starting to create namespace for hosting the "self-hosted" test spec
2021/11/06 06:40:19 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-rlcpc4" using the "management" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-rlcpc4 --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 1 --worker-machine-count 1 --flavor management
... skipping 73 lines ...
STEP: Fetching activity logs took 551.333569ms
STEP: Dumping all the Cluster API resources in the "self-hosted" namespace
STEP: Deleting all clusters in the self-hosted namespace
STEP: Deleting cluster self-hosted-rlcpc4
INFO: Waiting for the Cluster self-hosted/self-hosted-rlcpc4 to be deleted
STEP: Waiting for cluster self-hosted-rlcpc4 to be deleted
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-86b5f554dd-mzq9g, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-sjng2, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-g887x, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-xvtld, container calico-node: http2: client connection lost
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-6bdc78c4d4-njck4, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-865c969d7-x7q86, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-self-hosted-rlcpc4-control-plane-lpmsr, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-self-hosted-rlcpc4-control-plane-lpmsr, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-mxmcn, container kube-proxy: http2: client connection lost
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-5d8b7cb6d-rm4zh, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-x8q4w, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-cdn6k, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-self-hosted-rlcpc4-control-plane-lpmsr, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-vv9c9, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-self-hosted-rlcpc4-control-plane-lpmsr, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "self-hosted" test spec
INFO: Deleting namespace self-hosted
STEP: Checking if any resources are left over in Azure for spec "self-hosted"
STEP: Redacting sensitive information from logs
STEP: Redacting sensitive information from logs

... skipping 69 lines ...
Nov  6 06:37:10.697: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-rtef0p-md-0-qiiua5-l65n8

Nov  6 06:37:10.966: INFO: INFO: Collecting logs for node 10.1.0.8 in cluster md-rollout-rtef0p in namespace md-rollout-as42nj

Nov  6 06:37:40.583: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-rtef0p-md-win-uddiiu-5pjxn

Failed to get logs for machine md-rollout-rtef0p-md-win-5598d8949d-lkxxd, cluster md-rollout-as42nj/md-rollout-rtef0p: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov  6 06:37:40.871: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster md-rollout-rtef0p in namespace md-rollout-as42nj

Nov  6 06:38:50.050: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-rtef0p-md-win-k27t2

Failed to get logs for machine md-rollout-rtef0p-md-win-85f44d9459-6hnxd, cluster md-rollout-as42nj/md-rollout-rtef0p: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov  6 06:38:50.365: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster md-rollout-rtef0p in namespace md-rollout-as42nj

Nov  6 06:39:30.182: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-rtef0p-md-win-hlz44

Failed to get logs for machine md-rollout-rtef0p-md-win-85f44d9459-rqngr, cluster md-rollout-as42nj/md-rollout-rtef0p: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster md-rollout-as42nj/md-rollout-rtef0p kube-system pod logs
STEP: Fetching kube-system pod logs took 393.606325ms
STEP: Dumping workload cluster md-rollout-as42nj/md-rollout-rtef0p Azure activity log
STEP: Creating log watcher for controller kube-system/kube-apiserver-md-rollout-rtef0p-control-plane-82xdx, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-mnvx4, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-windows-tz9k5, container calico-node-felix
... skipping 11 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-2blpl, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-vm2zj, container coredns
STEP: Creating log watcher for controller kube-system/kube-controller-manager-md-rollout-rtef0p-control-plane-82xdx, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-452f5, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-t4z8v, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-windows-mbsrj, container calico-node-felix
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 230.421665ms
STEP: Dumping all the Cluster API resources in the "md-rollout-as42nj" namespace
STEP: Deleting cluster md-rollout-as42nj/md-rollout-rtef0p
STEP: Deleting cluster md-rollout-rtef0p
INFO: Waiting for the Cluster md-rollout-as42nj/md-rollout-rtef0p to be deleted
STEP: Waiting for cluster md-rollout-rtef0p to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-mbsrj, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-57r5f, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-md-rollout-rtef0p-control-plane-82xdx, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-md-rollout-rtef0p-control-plane-82xdx, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-452f5, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-vm2zj, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-vrbvf, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-mbsrj, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-whkcm, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-md-rollout-rtef0p-control-plane-82xdx, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-t4z8v, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-tz9k5, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-57r5f, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-mnvx4, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-lrksc, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-md-rollout-rtef0p-control-plane-82xdx, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-tz9k5, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-4b27w, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-2blpl, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-m7xl4, container calico-kube-controllers: http2: client connection lost
STEP: Deleting namespace used for hosting the "md-rollout" test spec
INFO: Deleting namespace md-rollout-as42nj
STEP: Redacting sensitive information from logs


• [SLOW TEST:1765.362 seconds]
... skipping 75 lines ...
Nov  6 06:51:08.295: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-3teyyn-md-0-pvvmx

Nov  6 06:51:08.621: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster kcp-upgrade-3teyyn in namespace kcp-upgrade-r3xiew

Nov  6 06:51:30.739: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-3teyyn-md-win-gsjxf

Failed to get logs for machine kcp-upgrade-3teyyn-md-win-7c4d5568fd-gddhx, cluster kcp-upgrade-r3xiew/kcp-upgrade-3teyyn: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov  6 06:51:31.014: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster kcp-upgrade-3teyyn in namespace kcp-upgrade-r3xiew

Nov  6 06:51:55.984: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-3teyyn-md-win-fbtcg

Failed to get logs for machine kcp-upgrade-3teyyn-md-win-7c4d5568fd-pf5w4, cluster kcp-upgrade-r3xiew/kcp-upgrade-3teyyn: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster kcp-upgrade-r3xiew/kcp-upgrade-3teyyn kube-system pod logs
STEP: Creating log watcher for controller kube-system/calico-node-thf5l, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-windows-cdwxv, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-3teyyn-control-plane-kcf5m, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-3teyyn-control-plane-kcf5m, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-cszz9, container kube-proxy
... skipping 20 lines ...
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-3teyyn-control-plane-kcf5m, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-8j2b9, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-3teyyn-control-plane-h9jw2, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-4l8rd, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-windows-gbccs, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-3teyyn-control-plane-bpm5k, container kube-apiserver
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 223.674589ms
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-r3xiew" namespace
STEP: Deleting cluster kcp-upgrade-r3xiew/kcp-upgrade-3teyyn
STEP: Deleting cluster kcp-upgrade-3teyyn
INFO: Waiting for the Cluster kcp-upgrade-r3xiew/kcp-upgrade-3teyyn to be deleted
STEP: Waiting for cluster kcp-upgrade-3teyyn to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-2px9j, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-cdwxv, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-6lfg5, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-4l8rd, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-3teyyn-control-plane-kcf5m, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-3teyyn-control-plane-h9jw2, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-wv9ww, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-gbccs, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-cszz9, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-3teyyn-control-plane-kcf5m, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-4x29z, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-gbccs, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-s9qqd, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-3teyyn-control-plane-kcf5m, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-3teyyn-control-plane-h9jw2, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-3teyyn-control-plane-bpm5k, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-3teyyn-control-plane-h9jw2, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-hhjz7, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-shgvh, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-8j2b9, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-ft244, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-3teyyn-control-plane-bpm5k, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-3teyyn-control-plane-kcf5m, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-2bf9z, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-3teyyn-control-plane-h9jw2, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-thf5l, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-cdwxv, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-3teyyn-control-plane-bpm5k, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-3teyyn-control-plane-bpm5k, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-r3xiew
STEP: Redacting sensitive information from logs


• [SLOW TEST:2080.897 seconds]
... skipping 57 lines ...
STEP: Dumping logs from the "kcp-upgrade-9wno5v" workload cluster
STEP: Dumping workload cluster kcp-upgrade-2k4o4b/kcp-upgrade-9wno5v logs
Nov  6 06:39:48.983: INFO: INFO: Collecting logs for node kcp-upgrade-9wno5v-control-plane-rmqz5 in cluster kcp-upgrade-9wno5v in namespace kcp-upgrade-2k4o4b

Nov  6 06:41:59.092: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-9wno5v-control-plane-rmqz5

Failed to get logs for machine kcp-upgrade-9wno5v-control-plane-cvh2g, cluster kcp-upgrade-2k4o4b/kcp-upgrade-9wno5v: dialing public load balancer at kcp-upgrade-9wno5v-4a21ff1f.eastus.cloudapp.azure.com: dial tcp 40.71.238.54:22: connect: connection timed out
Nov  6 06:42:00.158: INFO: INFO: Collecting logs for node kcp-upgrade-9wno5v-md-0-56fl6 in cluster kcp-upgrade-9wno5v in namespace kcp-upgrade-2k4o4b

Nov  6 06:44:10.168: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-9wno5v-md-0-56fl6

Failed to get logs for machine kcp-upgrade-9wno5v-md-0-7d6cf9f95f-lbrpw, cluster kcp-upgrade-2k4o4b/kcp-upgrade-9wno5v: dialing public load balancer at kcp-upgrade-9wno5v-4a21ff1f.eastus.cloudapp.azure.com: dial tcp 40.71.238.54:22: connect: connection timed out
Nov  6 06:44:11.000: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster kcp-upgrade-9wno5v in namespace kcp-upgrade-2k4o4b

Nov  6 06:50:43.380: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-9wno5v-md-win-c5r2t

Failed to get logs for machine kcp-upgrade-9wno5v-md-win-5c5fc5f9cb-jrmkv, cluster kcp-upgrade-2k4o4b/kcp-upgrade-9wno5v: dialing public load balancer at kcp-upgrade-9wno5v-4a21ff1f.eastus.cloudapp.azure.com: dial tcp 40.71.238.54:22: connect: connection timed out
Nov  6 06:50:44.187: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster kcp-upgrade-9wno5v in namespace kcp-upgrade-2k4o4b

Nov  6 06:57:16.596: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-9wno5v-md-win-rhp2p

Failed to get logs for machine kcp-upgrade-9wno5v-md-win-5c5fc5f9cb-s595p, cluster kcp-upgrade-2k4o4b/kcp-upgrade-9wno5v: dialing public load balancer at kcp-upgrade-9wno5v-4a21ff1f.eastus.cloudapp.azure.com: dial tcp 40.71.238.54:22: connect: connection timed out
STEP: Dumping workload cluster kcp-upgrade-2k4o4b/kcp-upgrade-9wno5v kube-system pod logs
STEP: Fetching kube-system pod logs took 329.339173ms
STEP: Dumping workload cluster kcp-upgrade-2k4o4b/kcp-upgrade-9wno5v Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-mc2sj, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-mr8n2, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-jzrgg, container kube-proxy
... skipping 8 lines ...
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-sgq6f, container coredns
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-9wno5v-control-plane-rmqz5, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-7prs2, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-xgqbf, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-windows-njl7h, container calico-node-felix
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-9wno5v-control-plane-rmqz5, container kube-apiserver
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 292.362219ms
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-2k4o4b" namespace
STEP: Deleting cluster kcp-upgrade-2k4o4b/kcp-upgrade-9wno5v
STEP: Deleting cluster kcp-upgrade-9wno5v
INFO: Waiting for the Cluster kcp-upgrade-2k4o4b/kcp-upgrade-9wno5v to be deleted
STEP: Waiting for cluster kcp-upgrade-9wno5v to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-xgqbf, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-w6qtr, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-mr8n2, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-9wno5v-control-plane-rmqz5, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-mz9vk, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-jzrgg, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-9wno5v-control-plane-rmqz5, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-njl7h, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-47v89, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-9wno5v-control-plane-rmqz5, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-njl7h, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-9wno5v-control-plane-rmqz5, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-47v89, container calico-node-startup: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-2k4o4b
STEP: Redacting sensitive information from logs


• [SLOW TEST:2407.757 seconds]
... skipping 92 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-wh27b, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-xwgxv, container calico-node
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-eufzas-control-plane-74zn9, container etcd
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-eufzas-control-plane-rrw5p, container etcd
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-eufzas-control-plane-2bsqc, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-b7jlq, container kube-proxy
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 202.26039ms
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-0kh2yr" namespace
STEP: Deleting cluster kcp-upgrade-0kh2yr/kcp-upgrade-eufzas
STEP: Deleting cluster kcp-upgrade-eufzas
INFO: Waiting for the Cluster kcp-upgrade-0kh2yr/kcp-upgrade-eufzas to be deleted
STEP: Waiting for cluster kcp-upgrade-eufzas to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-eufzas-control-plane-rrw5p, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-eufzas-control-plane-74zn9, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-eufzas-control-plane-2bsqc, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-p7jd2, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-eufzas-control-plane-74zn9, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-eufzas-control-plane-2bsqc, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-q2ccl, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-xwgxv, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-rcvtm, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-eufzas-control-plane-74zn9, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-c5p9s, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-fmltf, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-b7jlq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-eufzas-control-plane-2bsqc, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-eufzas-control-plane-74zn9, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-eufzas-control-plane-rrw5p, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-sh4j8, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-gslxq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-eufzas-control-plane-rrw5p, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-n9zpk, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-eufzas-control-plane-rrw5p, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-eufzas-control-plane-2bsqc, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-0kh2yr
STEP: Redacting sensitive information from logs


• [SLOW TEST:2608.179 seconds]
... skipping 52 lines ...
STEP: Dumping logs from the "mhc-remediation-4n0r8x" workload cluster
STEP: Dumping workload cluster mhc-remediation-twd3pa/mhc-remediation-4n0r8x logs
Nov  6 07:02:03.885: INFO: INFO: Collecting logs for node mhc-remediation-4n0r8x-control-plane-zt6mp in cluster mhc-remediation-4n0r8x in namespace mhc-remediation-twd3pa

Nov  6 07:04:20.581: INFO: INFO: Collecting boot logs for AzureMachine mhc-remediation-4n0r8x-control-plane-zt6mp

Failed to get logs for machine mhc-remediation-4n0r8x-control-plane-d558t, cluster mhc-remediation-twd3pa/mhc-remediation-4n0r8x: [dialing from control plane to target node at mhc-remediation-4n0r8x-control-plane-zt6mp: ssh: rejected: connect failed (Connection timed out), failed to get boot diagnostics data: compute.VirtualMachinesClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ResourceNotFound" Message="The Resource 'Microsoft.Compute/virtualMachines/mhc-remediation-4n0r8x-control-plane-zt6mp' under resource group 'mhc-remediation-4n0r8x' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"]
Nov  6 07:04:21.070: INFO: INFO: Collecting logs for node mhc-remediation-4n0r8x-control-plane-mr9tk in cluster mhc-remediation-4n0r8x in namespace mhc-remediation-twd3pa

Nov  6 07:04:31.890: INFO: INFO: Collecting boot logs for AzureMachine mhc-remediation-4n0r8x-control-plane-mr9tk

Nov  6 07:04:32.398: INFO: INFO: Collecting logs for node mhc-remediation-4n0r8x-control-plane-mrc6j in cluster mhc-remediation-4n0r8x in namespace mhc-remediation-twd3pa

... skipping 20 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-jsxrv, container kube-proxy
STEP: Creating log watcher for controller kube-system/etcd-mhc-remediation-4n0r8x-control-plane-mr9tk, container etcd
STEP: Creating log watcher for controller kube-system/kube-controller-manager-mhc-remediation-4n0r8x-control-plane-mr9tk, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-qlmtt, container coredns
STEP: Creating log watcher for controller kube-system/etcd-mhc-remediation-4n0r8x-control-plane-mrc6j, container etcd
STEP: Creating log watcher for controller kube-system/kube-controller-manager-mhc-remediation-4n0r8x-control-plane-mrc6j, container kube-controller-manager
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 203.240535ms
STEP: Dumping all the Cluster API resources in the "mhc-remediation-twd3pa" namespace
STEP: Deleting cluster mhc-remediation-twd3pa/mhc-remediation-4n0r8x
STEP: Deleting cluster mhc-remediation-4n0r8x
INFO: Waiting for the Cluster mhc-remediation-twd3pa/mhc-remediation-4n0r8x to be deleted
STEP: Waiting for cluster mhc-remediation-4n0r8x to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-4n0r8x-control-plane-mr9tk, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-4n0r8x-control-plane-mr9tk, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-4n0r8x-control-plane-mr9tk, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-f9l9s, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-jsxrv, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-c9vp4, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-4n0r8x-control-plane-mr9tk, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-grnk5, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
INFO: Deleting namespace mhc-remediation-twd3pa
STEP: Redacting sensitive information from logs


• Failure [1108.648 seconds]
Running the Cluster API E2E tests
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:40
  Should successfully remediate unhealthy machines with MachineHealthCheck
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:173
    Should successfully trigger KCP remediation [It]
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/e2e/mhc_remediations.go:115

    Failed to get controller-runtime client
    Unexpected error:
        <*url.Error | 0xc000e3d6e0>: {
            Op: "Get",
            URL: "https://mhc-remediation-4n0r8x-9e03adf1.eastus.cloudapp.azure.com:6443/api?timeout=32s",
            Err: <*http.httpError | 0xc0004ca8a0>{
                err: "net/http: request canceled (Client.Timeout exceeded while awaiting headers)",
                timeout: true,
            },
... skipping 104 lines ...
STEP: Creating log watcher for controller kube-system/etcd-kcp-adoption-8rcbmg-control-plane-0, container etcd
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-adoption-8rcbmg-control-plane-0, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-5t9vk, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-adoption-8rcbmg-control-plane-0, container kube-apiserver
STEP: Dumping workload cluster kcp-adoption-bwuqb6/kcp-adoption-8rcbmg Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-c7c5n, container coredns
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 219.798815ms
STEP: Dumping all the Cluster API resources in the "kcp-adoption-bwuqb6" namespace
STEP: Deleting cluster kcp-adoption-bwuqb6/kcp-adoption-8rcbmg
STEP: Deleting cluster kcp-adoption-8rcbmg
INFO: Waiting for the Cluster kcp-adoption-bwuqb6/kcp-adoption-8rcbmg to be deleted
STEP: Waiting for cluster kcp-adoption-8rcbmg to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-adoption-8rcbmg-control-plane-0, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-fjf77, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-adoption-8rcbmg-control-plane-0, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-c7c5n, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-5t9vk, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-jzvm4, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-adoption-8rcbmg-control-plane-0, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-6872b, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-adoption-8rcbmg-control-plane-0, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-adoption" test spec
INFO: Deleting namespace kcp-adoption-bwuqb6
STEP: Redacting sensitive information from logs


• [SLOW TEST:831.623 seconds]
... skipping 69 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-bbrwj, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-ndpfl, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-mhc-remediation-i1xscq-control-plane-6lj7c, container kube-apiserver
STEP: Creating log watcher for controller kube-system/etcd-mhc-remediation-i1xscq-control-plane-6lj7c, container etcd
STEP: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-i1xscq-control-plane-6lj7c, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-controller-manager-mhc-remediation-i1xscq-control-plane-6lj7c, container kube-controller-manager
STEP: Error starting logs stream for pod kube-system/calico-node-djbhh, container calico-node: container "calico-node" in pod "calico-node-djbhh" is waiting to start: PodInitializing
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 209.394766ms
STEP: Dumping all the Cluster API resources in the "mhc-remediation-hq9x9m" namespace
STEP: Deleting cluster mhc-remediation-hq9x9m/mhc-remediation-i1xscq
STEP: Deleting cluster mhc-remediation-i1xscq
INFO: Waiting for the Cluster mhc-remediation-hq9x9m/mhc-remediation-i1xscq to be deleted
STEP: Waiting for cluster mhc-remediation-i1xscq to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-n88j5, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-bbrwj, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-i1xscq-control-plane-6lj7c, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-cls2n, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-i1xscq-control-plane-6lj7c, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-mffp8, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-i1xscq-control-plane-6lj7c, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-ndpfl, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-d67zx, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-i1xscq-control-plane-6lj7c, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
INFO: Deleting namespace mhc-remediation-hq9x9m
STEP: Redacting sensitive information from logs


• [SLOW TEST:1306.835 seconds]
... skipping 61 lines ...
Nov  6 07:20:44.714: INFO: INFO: Collecting boot logs for AzureMachine machine-pool-kr6plu-control-plane-n2ptq

Nov  6 07:20:45.551: INFO: INFO: Collecting logs for node win-p-win000002 in cluster machine-pool-kr6plu in namespace machine-pool-hrhk0d

Nov  6 07:21:12.303: INFO: INFO: Collecting boot logs for VMSS instance 2 of scale set machine-pool-kr6plu-mp-0

Failed to get logs for machine pool machine-pool-kr6plu-mp-0, cluster machine-pool-hrhk0d/machine-pool-kr6plu: [running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1]
Nov  6 07:21:12.573: INFO: INFO: Collecting logs for node win-p-win000002 in cluster machine-pool-kr6plu in namespace machine-pool-hrhk0d

Nov  6 07:22:13.049: INFO: INFO: Collecting boot logs for VMSS instance 2 of scale set win-p-win

Failed to get logs for machine pool machine-pool-kr6plu-mp-win, cluster machine-pool-hrhk0d/machine-pool-kr6plu: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster machine-pool-hrhk0d/machine-pool-kr6plu kube-system pod logs
STEP: Fetching kube-system pod logs took 327.247037ms
STEP: Creating log watcher for controller kube-system/calico-node-45j8k, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-windows-kjspb, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-node-bdrsb, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-windows-kjspb, container calico-node-felix
... skipping 9 lines ...
STEP: Creating log watcher for controller kube-system/kube-scheduler-machine-pool-kr6plu-control-plane-n2ptq, container kube-scheduler
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-n9xv6, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-knpgm, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-rf6sb, container calico-node
STEP: Creating log watcher for controller kube-system/etcd-machine-pool-kr6plu-control-plane-n2ptq, container etcd
STEP: Creating log watcher for controller kube-system/kube-apiserver-machine-pool-kr6plu-control-plane-n2ptq, container kube-apiserver
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 251.883365ms
STEP: Dumping all the Cluster API resources in the "machine-pool-hrhk0d" namespace
STEP: Deleting cluster machine-pool-hrhk0d/machine-pool-kr6plu
STEP: Deleting cluster machine-pool-kr6plu
INFO: Waiting for the Cluster machine-pool-hrhk0d/machine-pool-kr6plu to be deleted
STEP: Waiting for cluster machine-pool-kr6plu to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-bwf5b, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-485jx, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-bdrsb, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-kjspb, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-kjspb, container calico-node-startup: http2: client connection lost
STEP: Error starting logs stream for pod kube-system/calico-node-45j8k, container calico-node: Get "https://machine-pool-kr6plu-22b02c61.eastus.cloudapp.azure.com:6443/api/v1/namespaces/kube-system/pods/calico-node-45j8k/log?container=calico-node&follow=true": http2: client connection lost
STEP: Error starting logs stream for pod kube-system/kube-proxy-49xcg, container kube-proxy: Get "https://machine-pool-kr6plu-22b02c61.eastus.cloudapp.azure.com:6443/api/v1/namespaces/kube-system/pods/kube-proxy-49xcg/log?container=kube-proxy&follow=true": http2: client connection lost
STEP: Error starting logs stream for pod kube-system/calico-node-knpgm, container calico-node: Get "https://machine-pool-kr6plu-22b02c61.eastus.cloudapp.azure.com:6443/api/v1/namespaces/kube-system/pods/calico-node-knpgm/log?container=calico-node&follow=true": http2: client connection lost
STEP: Error starting logs stream for pod kube-system/kube-proxy-bvm98, container kube-proxy: Get "https://machine-pool-kr6plu-22b02c61.eastus.cloudapp.azure.com:6443/api/v1/namespaces/kube-system/pods/kube-proxy-bvm98/log?container=kube-proxy&follow=true": http2: client connection lost
STEP: Deleting namespace used for hosting the "machine-pool" test spec
INFO: Deleting namespace machine-pool-hrhk0d
STEP: Redacting sensitive information from logs


• [SLOW TEST:1567.892 seconds]
... skipping 62 lines ...
Nov  6 07:22:19.958: INFO: INFO: Collecting boot logs for AzureMachine md-scale-286cdl-md-0-c5z77

Nov  6 07:22:20.314: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster md-scale-286cdl in namespace md-scale-1yblgg

Nov  6 07:23:47.090: INFO: INFO: Collecting boot logs for AzureMachine md-scale-286cdl-md-win-6pq9t

Failed to get logs for machine md-scale-286cdl-md-win-55b77cf88b-8zbpg, cluster md-scale-1yblgg/md-scale-286cdl: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov  6 07:23:47.433: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster md-scale-286cdl in namespace md-scale-1yblgg

Nov  6 07:24:18.303: INFO: INFO: Collecting boot logs for AzureMachine md-scale-286cdl-md-win-pcgvp

Failed to get logs for machine md-scale-286cdl-md-win-55b77cf88b-wkftj, cluster md-scale-1yblgg/md-scale-286cdl: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster md-scale-1yblgg/md-scale-286cdl kube-system pod logs
STEP: Fetching kube-system pod logs took 333.637069ms
STEP: Dumping workload cluster md-scale-1yblgg/md-scale-286cdl Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-mjs7k, container coredns
STEP: Creating log watcher for controller kube-system/kube-controller-manager-md-scale-286cdl-control-plane-fvll9, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-node-windows-cqblp, container calico-node-startup
... skipping 8 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-4f2pq, container calico-node
STEP: Creating log watcher for controller kube-system/kube-scheduler-md-scale-286cdl-control-plane-fvll9, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-windows-cqblp, container calico-node-felix
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-rr2dr, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-windows-ghzh6, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-qd6kr, container kube-proxy
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 399.923931ms
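Note: the 400 above means the activity-log dump was invoked with an empty resourceGroupName, so the Azure Monitor API rejected the request before returning any data. Pulling the same activity log by hand requires an explicit resource group; a minimal sketch with the Azure CLI (the resource group name below is an assumption, not taken from this run) would be:

    # List recent activity-log entries for the cluster's resource group.
    az monitor activity-log list \
      --resource-group capz-e2e-md-scale-286cdl \
      --offset 1h \
      --output table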
STEP: Dumping all the Cluster API resources in the "md-scale-1yblgg" namespace
STEP: Deleting cluster md-scale-1yblgg/md-scale-286cdl
STEP: Deleting cluster md-scale-286cdl
INFO: Waiting for the Cluster md-scale-1yblgg/md-scale-286cdl to be deleted
STEP: Waiting for cluster md-scale-286cdl to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-ghzh6, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-ghzh6, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-cqblp, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-tvqr9, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-qd6kr, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-cqblp, container calico-node-startup: http2: client connection lost
STEP: Deleting namespace used for hosting the "md-scale" test spec
INFO: Deleting namespace md-scale-1yblgg
STEP: Redacting sensitive information from logs


• [SLOW TEST:1492.849 seconds]
... skipping 142 lines ...
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-g2668, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-rtbsr, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-fwxhr, container kube-proxy
STEP: Creating log watcher for controller kube-system/etcd-clusterctl-upgrade-uznqel-control-plane-wkrvd, container etcd
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-nw9zp, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-apiserver-clusterctl-upgrade-uznqel-control-plane-wkrvd, container kube-apiserver
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 231.781673ms
STEP: Dumping all the Cluster API resources in the "clusterctl-upgrade-ks63sg" namespace
STEP: Deleting cluster clusterctl-upgrade-ks63sg/clusterctl-upgrade-uznqel
STEP: Deleting cluster clusterctl-upgrade-uznqel
INFO: Waiting for the Cluster clusterctl-upgrade-ks63sg/clusterctl-upgrade-uznqel to be deleted
STEP: Waiting for cluster clusterctl-upgrade-uznqel to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-nw9zp, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-clusterctl-upgrade-uznqel-control-plane-wkrvd, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-rtbsr, container calico-node: http2: client connection lost
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-5d8b7cb6d-jq8rm, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-gd5s7, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-clusterctl-upgrade-uznqel-control-plane-wkrvd, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-26j9t, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-clusterctl-upgrade-uznqel-control-plane-wkrvd, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-fwxhr, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-g2668, container coredns: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-865c969d7-2wp8q, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-clusterctl-upgrade-uznqel-control-plane-wkrvd, container kube-apiserver: http2: client connection lost
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-6bdc78c4d4-h7gx4, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-6j5b2, container coredns: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-86b5f554dd-zshkz, container manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "clusterctl-upgrade" test spec
INFO: Deleting namespace clusterctl-upgrade-ks63sg
STEP: Redacting sensitive information from logs


• [SLOW TEST:1733.450 seconds]
... skipping 57 lines ...
STEP: Dumping logs from the "node-drain-xk2ex4" workload cluster
STEP: Dumping workload cluster node-drain-t63itn/node-drain-xk2ex4 logs
Nov  6 07:33:05.104: INFO: INFO: Collecting logs for node node-drain-xk2ex4-control-plane-bdc7h in cluster node-drain-xk2ex4 in namespace node-drain-t63itn

Nov  6 07:35:16.024: INFO: INFO: Collecting boot logs for AzureMachine node-drain-xk2ex4-control-plane-bdc7h

Failed to get logs for machine node-drain-xk2ex4-control-plane-g26wx, cluster node-drain-t63itn/node-drain-xk2ex4: dialing public load balancer at node-drain-xk2ex4-1370a27d.eastus.cloudapp.azure.com: dial tcp 20.85.156.159:22: connect: connection timed out
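Note: boot-log collection reaches the node over SSH through the cluster's public load balancer, and here the TCP dial to port 22 timed out. A quick reachability check against the same endpoint reported in the error would be:

    # Verify whether port 22 on the public load balancer answers at all (10s timeout).
    nc -vz -w 10 node-drain-xk2ex4-1370a27d.eastus.cloudapp.azure.com 22
    # The same check against the raw IP from the error message:
    nc -vz -w 10 20.85.156.159 22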
STEP: Dumping workload cluster node-drain-t63itn/node-drain-xk2ex4 kube-system pod logs
STEP: Fetching kube-system pod logs took 331.012676ms
STEP: Dumping workload cluster node-drain-t63itn/node-drain-xk2ex4 Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-grg4g, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-9rl8q, container coredns
STEP: Creating log watcher for controller kube-system/kube-controller-manager-node-drain-xk2ex4-control-plane-bdc7h, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-apiserver-node-drain-xk2ex4-control-plane-bdc7h, container kube-apiserver
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-t2hp4, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-s598n, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-node-drain-xk2ex4-control-plane-bdc7h, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-skk27, container calico-node
STEP: Creating log watcher for controller kube-system/etcd-node-drain-xk2ex4-control-plane-bdc7h, container etcd
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 292.171614ms
STEP: Dumping all the Cluster API resources in the "node-drain-t63itn" namespace
STEP: Deleting cluster node-drain-t63itn/node-drain-xk2ex4
STEP: Deleting cluster node-drain-xk2ex4
INFO: Waiting for the Cluster node-drain-t63itn/node-drain-xk2ex4 to be deleted
STEP: Waiting for cluster node-drain-xk2ex4 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-node-drain-xk2ex4-control-plane-bdc7h, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-node-drain-xk2ex4-control-plane-bdc7h, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-node-drain-xk2ex4-control-plane-bdc7h, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-node-drain-xk2ex4-control-plane-bdc7h, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-s598n, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-grg4g, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-t2hp4, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-9rl8q, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-skk27, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "node-drain" test spec
INFO: Deleting namespace node-drain-t63itn
STEP: Redacting sensitive information from logs


• [SLOW TEST:1896.207 seconds]
... skipping 7 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck [It] Should successfully trigger KCP remediation 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/framework/cluster_proxy.go:171

Ran 13 of 23 Specs in 4889.465 seconds
FAIL! -- 12 Passed | 1 Failed | 0 Pending | 10 Skipped


Ginkgo ran 1 suite in 1h22m55.61444875s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc
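Note: either option above silences the notice; in CI the environment variable is usually the simpler choice. For example:

    # Silence the Ginkgo 2.0 release-candidate notice (the two options are equivalent):
    export ACK_GINKGO_RC=true
    # or
    touch "$HOME/.ack-ginkgo-rc"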
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...