PR: shysank: [WIP] Increase parallelism for e2e tests
Result: FAILURE
Tests: 1 failed / 12 succeeded
Started: 2021-11-10 05:47
Elapsed: 1h38m
Revision: 2ff77ac4d8d0f424138aa8eabf1ac6b2e81b85e0
Refs: 1816

Test Failures


capz-e2e Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count 27m58s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sRunning\sthe\sCluster\sAPI\sE2E\stests\sShould\ssuccessfully\sscale\sout\sand\sscale\sin\sa\sMachineDeployment\sShould\ssuccessfully\sscale\sa\sMachineDeployment\sup\sand\sdown\supon\schanges\sto\sthe\sMachineDeployment\sreplica\scount$'
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/e2e/md_scale.go:70
Timed out after 1200.002s.
Error: Unexpected non-nil/non-zero extra argument at index 1:
	<*errors.fundamental>: Machine count does not match existing nodes count
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/framework/machinedeployment_helpers.go:348
				
Full stdout/stderr captured in junit.e2e_suite.4.xml
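The wording of the failure above is the shape Gomega gives when an Eventually-polled function returns a (value, error) pair and the error is still non-nil when the timeout expires: instead of comparing the value, it reports "Unexpected non-nil/non-zero extra argument at index 1" with the error text, here "Machine count does not match existing nodes count" from the scale-and-wait helper at machinedeployment_helpers.go:348. A minimal Go sketch of that failure shape follows; the names (readyNodeCount, TestScaleWaitFailureShape) are hypothetical stand-ins, not the framework's actual helpers.

// sketch_scale_wait_test.go — minimal, hypothetical reproduction of the
// failure shape seen above, not the cluster-api framework's code.
package e2esketch

import (
	"errors"
	"testing"
	"time"

	"github.com/onsi/gomega"
)

func TestScaleWaitFailureShape(t *testing.T) {
	g := gomega.NewWithT(t)

	// Hypothetical stand-in for counting ready workload nodes after the
	// MachineDeployment is scaled; it keeps returning a non-nil error while
	// the new machines have not yet joined the cluster as nodes.
	readyNodeCount := func() (int, error) {
		return 1, errors.New("Machine count does not match existing nodes count")
	}

	// When the polled function's trailing error stays non-nil past the
	// timeout, Gomega fails with "Timed out after ...s. Error: Unexpected
	// non-nil/non-zero extra argument at index 1" and prints that error —
	// the same message shape as the 1200s timeout in the real run.
	g.Eventually(func() (int, error) {
		return readyNodeCount()
	}, 3*time.Second, 500*time.Millisecond).Should(gomega.Equal(2))
}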



12 Passed Tests

10 Skipped Tests

Error lines from build-log.txt

... skipping 679 lines ...
[1] Nov 10 06:05:01.484: INFO: INFO: Collecting boot logs for AzureMachine quick-start-zexcjm-md-0-fqgsp
[1] 
[1] Nov 10 06:05:01.937: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster quick-start-zexcjm in namespace quick-start-i1e0q7
[1] 
[1] Nov 10 06:05:38.223: INFO: INFO: Collecting boot logs for AzureMachine quick-start-zexcjm-md-win-pzfmt
[1] 
[1] Failed to get logs for machine quick-start-zexcjm-md-win-fcfc4f9cd-7zzgr, cluster quick-start-i1e0q7/quick-start-zexcjm: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
[1] Nov 10 06:05:38.683: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster quick-start-zexcjm in namespace quick-start-i1e0q7
[1] 
[1] Nov 10 06:06:10.270: INFO: INFO: Collecting boot logs for AzureMachine quick-start-zexcjm-md-win-4czm6
[1] 
[1] Failed to get logs for machine quick-start-zexcjm-md-win-fcfc4f9cd-pcllc, cluster quick-start-i1e0q7/quick-start-zexcjm: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
[1] STEP: Dumping workload cluster quick-start-i1e0q7/quick-start-zexcjm kube-system pod logs
[1] STEP: Fetching kube-system pod logs took 1.038690596s
[1] STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-22g8h, container coredns
[1] STEP: Creating log watcher for controller kube-system/kube-scheduler-quick-start-zexcjm-control-plane-2vmpc, container kube-scheduler
[1] STEP: Creating log watcher for controller kube-system/calico-node-windows-9n88l, container calico-node-startup
[1] STEP: Dumping workload cluster quick-start-i1e0q7/quick-start-zexcjm Azure activity log
... skipping 8 lines ...
[1] STEP: Creating log watcher for controller kube-system/kube-apiserver-quick-start-zexcjm-control-plane-2vmpc, container kube-apiserver
[1] STEP: Creating log watcher for controller kube-system/kube-proxy-vdnl8, container kube-proxy
[1] STEP: Creating log watcher for controller kube-system/kube-proxy-windows-llr8s, container kube-proxy
[1] STEP: Creating log watcher for controller kube-system/kube-proxy-windows-hsb4k, container kube-proxy
[1] STEP: Creating log watcher for controller kube-system/kube-proxy-sqhvp, container kube-proxy
[1] STEP: Creating log watcher for controller kube-system/kube-controller-manager-quick-start-zexcjm-control-plane-2vmpc, container kube-controller-manager
[1] STEP: Error starting logs stream for pod kube-system/calico-node-windows-mw67q, container calico-node-felix: container "calico-node-felix" in pod "calico-node-windows-mw67q" is waiting to start: PodInitializing
[1] STEP: Error starting logs stream for pod kube-system/calico-node-windows-mw67q, container calico-node-startup: container "calico-node-startup" in pod "calico-node-windows-mw67q" is waiting to start: PodInitializing
[1] STEP: Error starting logs stream for pod kube-system/calico-node-windows-9n88l, container calico-node-felix: container "calico-node-felix" in pod "calico-node-windows-9n88l" is waiting to start: PodInitializing
[1] STEP: Error starting logs stream for pod kube-system/calico-node-windows-9n88l, container calico-node-startup: container "calico-node-startup" in pod "calico-node-windows-9n88l" is waiting to start: PodInitializing
[1] STEP: Fetching activity logs took 549.143043ms
[1] STEP: Dumping all the Cluster API resources in the "quick-start-i1e0q7" namespace
[1] STEP: Deleting cluster quick-start-i1e0q7/quick-start-zexcjm
[1] STEP: Deleting cluster quick-start-zexcjm
[1] INFO: Waiting for the Cluster quick-start-i1e0q7/quick-start-zexcjm to be deleted
[1] STEP: Waiting for cluster quick-start-zexcjm to be deleted
[1] STEP: Got error while streaming logs for pod kube-system/kube-proxy-sqhvp, container kube-proxy: http2: client connection lost
[1] STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-quick-start-zexcjm-control-plane-2vmpc, container kube-controller-manager: http2: client connection lost
[1] STEP: Got error while streaming logs for pod kube-system/etcd-quick-start-zexcjm-control-plane-2vmpc, container etcd: http2: client connection lost
[1] STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-c5d4r, container calico-kube-controllers: http2: client connection lost
[1] STEP: Got error while streaming logs for pod kube-system/calico-node-9nwdw, container calico-node: http2: client connection lost
[1] STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-22g8h, container coredns: http2: client connection lost
[1] STEP: Got error while streaming logs for pod kube-system/calico-node-n48j8, container calico-node: http2: client connection lost
[1] STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-llr8s, container kube-proxy: http2: client connection lost
[1] STEP: Got error while streaming logs for pod kube-system/kube-proxy-vdnl8, container kube-proxy: http2: client connection lost
[1] STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-hsb4k, container kube-proxy: http2: client connection lost
[1] STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-jpqqs, container coredns: http2: client connection lost
[1] STEP: Got error while streaming logs for pod kube-system/kube-scheduler-quick-start-zexcjm-control-plane-2vmpc, container kube-scheduler: http2: client connection lost
[1] STEP: Got error while streaming logs for pod kube-system/kube-apiserver-quick-start-zexcjm-control-plane-2vmpc, container kube-apiserver: http2: client connection lost
[2] STEP: PASSED!
[2] STEP: Dumping logs from the "md-rollout-cmzu3v" workload cluster
[2] STEP: Dumping workload cluster md-rollout-5rau2k/md-rollout-cmzu3v logs
[2] Nov 10 06:08:30.058: INFO: INFO: Collecting logs for node md-rollout-cmzu3v-control-plane-2rm58 in cluster md-rollout-cmzu3v in namespace md-rollout-5rau2k
[2] 
[2] Nov 10 06:08:42.756: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-cmzu3v-control-plane-2rm58
... skipping 13 lines ...
[4] STEP: Dumping logs from the "kcp-upgrade-5uu232" workload cluster
[4] STEP: Dumping workload cluster kcp-upgrade-rdd1cq/kcp-upgrade-5uu232 logs
[4] Nov 10 06:10:01.727: INFO: INFO: Collecting logs for node kcp-upgrade-5uu232-control-plane-lp479 in cluster kcp-upgrade-5uu232 in namespace kcp-upgrade-rdd1cq
[4] 
[2] Nov 10 06:10:15.459: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-cmzu3v-md-win-w5q4j
[2] 
[2] Failed to get logs for machine md-rollout-cmzu3v-md-win-596ddf8c98-7n57h, cluster md-rollout-5rau2k/md-rollout-cmzu3v: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
[2] Nov 10 06:10:15.861: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster md-rollout-cmzu3v in namespace md-rollout-5rau2k
[2] 
[2] Nov 10 06:11:47.615: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-cmzu3v-md-win-bmr62
[2] 
[2] Failed to get logs for machine md-rollout-cmzu3v-md-win-596ddf8c98-qpgp7, cluster md-rollout-5rau2k/md-rollout-cmzu3v: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
[2] Nov 10 06:11:48.742: INFO: INFO: Collecting logs for node 10.1.0.8 in cluster md-rollout-cmzu3v in namespace md-rollout-5rau2k
[2] 
[4] Nov 10 06:12:12.941: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-5uu232-control-plane-lp479
[4] 
[4] Failed to get logs for machine kcp-upgrade-5uu232-control-plane-lprc9, cluster kcp-upgrade-rdd1cq/kcp-upgrade-5uu232: dialing public load balancer at kcp-upgrade-5uu232-3e1171cc.uksouth.cloudapp.azure.com: dial tcp 20.90.125.205:22: connect: connection timed out
[4] Nov 10 06:12:14.301: INFO: INFO: Collecting logs for node kcp-upgrade-5uu232-md-0-mz2k4 in cluster kcp-upgrade-5uu232 in namespace kcp-upgrade-rdd1cq
[4] 
[2] Nov 10 06:12:24.707: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-cmzu3v-md-win-2eg1se-mq4rq
[2] 
[2] Failed to get logs for machine md-rollout-cmzu3v-md-win-74d5fbbdc4-rdpdr, cluster md-rollout-5rau2k/md-rollout-cmzu3v: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
[2] STEP: Dumping workload cluster md-rollout-5rau2k/md-rollout-cmzu3v kube-system pod logs
[2] STEP: Fetching kube-system pod logs took 984.452664ms
[2] STEP: Dumping workload cluster md-rollout-5rau2k/md-rollout-cmzu3v Azure activity log
[2] STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-lzqtq, container coredns
[2] STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-k9bx8, container calico-kube-controllers
[2] STEP: Creating log watcher for controller kube-system/calico-node-windows-77psr, container calico-node-startup
... skipping 34 lines ...
[1] ------------------------------
[1] Running the Cluster API E2E tests Running the self-hosted spec 
[1]   Should pivot the bootstrap cluster to a self-hosted cluster
[1]   /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:107
[1] STEP: Creating namespace "self-hosted" for hosting the cluster
[1] Nov 10 06:13:17.475: INFO: starting to create namespace for hosting the "self-hosted" test spec
[1] 2021/11/10 06:13:17 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
[1] INFO: Creating namespace self-hosted
[1] INFO: Creating event watcher for namespace "self-hosted"
[1] STEP: Creating a workload cluster
[1] INFO: Creating the workload cluster with name "self-hosted-s9balk" using the "management" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
[1] INFO: Getting the cluster template yaml
[1] INFO: clusterctl config cluster self-hosted-s9balk --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 1 --worker-machine-count 1 --flavor management
... skipping 10 lines ...
[1] azuremachinetemplate.infrastructure.cluster.x-k8s.io/self-hosted-s9balk-md-0 created
[1] 
[1] INFO: Waiting for the cluster infrastructure to be provisioned
[1] STEP: Waiting for cluster to enter the provisioned phase
[4] Nov 10 06:14:24.013: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-5uu232-md-0-mz2k4
[4] 
[4] Failed to get logs for machine kcp-upgrade-5uu232-md-0-7768c64f9f-g842q, cluster kcp-upgrade-rdd1cq/kcp-upgrade-5uu232: dialing public load balancer at kcp-upgrade-5uu232-3e1171cc.uksouth.cloudapp.azure.com: dial tcp 20.90.125.205:22: connect: connection timed out
[4] Nov 10 06:14:25.379: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster kcp-upgrade-5uu232 in namespace kcp-upgrade-rdd1cq
[4] 
[1] INFO: Waiting for control plane to be initialized
[1] INFO: Waiting for the first control plane machine managed by self-hosted/self-hosted-s9balk-control-plane to be provisioned
[1] STEP: Waiting for one control plane node to exist
[2] STEP: Got error while streaming logs for pod kube-system/calico-node-windows-65hcz, container calico-node-felix: http2: client connection lost
[2] STEP: Got error while streaming logs for pod kube-system/etcd-md-rollout-cmzu3v-control-plane-2rm58, container etcd: http2: client connection lost
[2] STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-6ngqc, container coredns: http2: client connection lost
[2] STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-k9bx8, container calico-kube-controllers: http2: client connection lost
[2] STEP: Got error while streaming logs for pod kube-system/kube-apiserver-md-rollout-cmzu3v-control-plane-2rm58, container kube-apiserver: http2: client connection lost
[2] STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-nhxms, container kube-proxy: http2: client connection lost
[2] STEP: Got error while streaming logs for pod kube-system/calico-node-windows-65hcz, container calico-node-startup: http2: client connection lost
[2] STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-n9pqc, container kube-proxy: http2: client connection lost
[2] STEP: Got error while streaming logs for pod kube-system/calico-node-windows-77psr, container calico-node-startup: http2: client connection lost
[2] STEP: Got error while streaming logs for pod kube-system/kube-proxy-ljh2f, container kube-proxy: http2: client connection lost
[2] STEP: Got error while streaming logs for pod kube-system/calico-node-windows-77psr, container calico-node-felix: http2: client connection lost
[2] STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-lzqtq, container coredns: http2: client connection lost
[2] STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-md-rollout-cmzu3v-control-plane-2rm58, container kube-controller-manager: http2: client connection lost
[2] STEP: Got error while streaming logs for pod kube-system/kube-scheduler-md-rollout-cmzu3v-control-plane-2rm58, container kube-scheduler: http2: client connection lost
[2] STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-rpdbp, container kube-proxy: http2: client connection lost
[2] STEP: Got error while streaming logs for pod kube-system/calico-node-windows-w8xbx, container calico-node-startup: http2: client connection lost
[2] STEP: Got error while streaming logs for pod kube-system/calico-node-windows-w8xbx, container calico-node-felix: http2: client connection lost
[2] STEP: Got error while streaming logs for pod kube-system/calico-node-652mg, container calico-node: http2: client connection lost
[1] INFO: Waiting for control plane to be ready
[1] INFO: Waiting for control plane self-hosted/self-hosted-s9balk-control-plane to be ready (implies underlying nodes to be ready as well)
[1] STEP: Waiting for the control plane to be ready
[1] INFO: Waiting for the machine deployments to be provisioned
[1] STEP: Waiting for the workload nodes to exist
[1] INFO: Waiting for the machine pools to be provisioned
... skipping 18 lines ...
[1] Nov 10 06:20:36.108: INFO: Waiting for the cluster to be reconciled after moving to self hosted
[1] STEP: Waiting for cluster to enter the provisioned phase
[1] STEP: PASSED!
[1] STEP: Ensure API servers are stable before doing move
[4] Nov 10 06:20:57.233: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-5uu232-md-win-6ntcb
[4] 
[4] Failed to get logs for machine kcp-upgrade-5uu232-md-win-587b784c64-5ql46, cluster kcp-upgrade-rdd1cq/kcp-upgrade-5uu232: dialing public load balancer at kcp-upgrade-5uu232-3e1171cc.uksouth.cloudapp.azure.com: dial tcp 20.90.125.205:22: connect: connection timed out
[4] Nov 10 06:20:58.357: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster kcp-upgrade-5uu232 in namespace kcp-upgrade-rdd1cq
[4] 
[1] STEP: Moving the cluster back to bootstrap
[1] STEP: Moving workload clusters
[1] Nov 10 06:21:48.802: INFO: Waiting for the cluster to be reconciled after moving back to booststrap
[1] STEP: Waiting for cluster to enter the provisioned phase
... skipping 100 lines ...
[3] STEP: Creating log watcher for controller kube-system/kube-proxy-ljqrj, container kube-proxy
[3] STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-3w3shm-control-plane-bkn9x, container kube-scheduler
[5] Nov 10 06:23:37.105: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-tc0a64-md-0-ghjdh
[5] 
[5] Nov 10 06:23:37.513: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster kcp-upgrade-tc0a64 in namespace kcp-upgrade-l25sq3
[5] 
[3] STEP: Got error while iterating over activity logs for resource group capz-e2e-km3iga: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
[3] STEP: Fetching activity logs took 30.000295949s
[3] STEP: Dumping all the Cluster API resources in the "kcp-upgrade-lh53ng" namespace
[3] STEP: Deleting cluster kcp-upgrade-lh53ng/kcp-upgrade-3w3shm
[3] STEP: Deleting cluster kcp-upgrade-3w3shm
[3] INFO: Waiting for the Cluster kcp-upgrade-lh53ng/kcp-upgrade-3w3shm to be deleted
[3] STEP: Waiting for cluster kcp-upgrade-3w3shm to be deleted
[5] Nov 10 06:24:10.297: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-tc0a64-md-win-2f5js
[5] 
[5] Failed to get logs for machine kcp-upgrade-tc0a64-md-win-68bc79d864-nbw2w, cluster kcp-upgrade-l25sq3/kcp-upgrade-tc0a64: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
[5] Nov 10 06:24:10.874: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster kcp-upgrade-tc0a64 in namespace kcp-upgrade-l25sq3
[5] 
[5] Nov 10 06:24:44.608: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-tc0a64-md-win-4jl62
[5] 
[5] Failed to get logs for machine kcp-upgrade-tc0a64-md-win-68bc79d864-pgjjt, cluster kcp-upgrade-l25sq3/kcp-upgrade-tc0a64: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
[5] STEP: Dumping workload cluster kcp-upgrade-l25sq3/kcp-upgrade-tc0a64 kube-system pod logs
[5] STEP: Fetching kube-system pod logs took 935.410822ms
[5] STEP: Dumping workload cluster kcp-upgrade-l25sq3/kcp-upgrade-tc0a64 Azure activity log
[5] STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-wfmqw, container calico-kube-controllers
[5] STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-tc0a64-control-plane-2tpf7, container kube-apiserver
[5] STEP: Creating log watcher for controller kube-system/kube-proxy-8vg9n, container kube-proxy
... skipping 20 lines ...
[5] STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-tc0a64-control-plane-2tpf7, container etcd
[5] STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-tc0a64-control-plane-nws9d, container kube-scheduler
[5] STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-tc0a64-control-plane-z7d4v, container etcd
[5] STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-tc0a64-control-plane-z7d4v, container kube-scheduler
[5] STEP: Creating log watcher for controller kube-system/kube-proxy-windows-w6srd, container kube-proxy
[5] STEP: Creating log watcher for controller kube-system/kube-proxy-windows-zwzlp, container kube-proxy
[5] STEP: Got error while iterating over activity logs for resource group capz-e2e-lynkbf: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
[5] STEP: Fetching activity logs took 30.000660413s
[5] STEP: Dumping all the Cluster API resources in the "kcp-upgrade-l25sq3" namespace
[5] STEP: Deleting cluster kcp-upgrade-l25sq3/kcp-upgrade-tc0a64
[5] STEP: Deleting cluster kcp-upgrade-tc0a64
[5] INFO: Waiting for the Cluster kcp-upgrade-l25sq3/kcp-upgrade-tc0a64 to be deleted
[5] STEP: Waiting for cluster kcp-upgrade-tc0a64 to be deleted
[3] STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-3w3shm-control-plane-s66kb, container etcd: http2: client connection lost
[3] STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-l476t, container calico-kube-controllers: http2: client connection lost
[3] STEP: Got error while streaming logs for pod kube-system/calico-node-nqrwl, container calico-node: http2: client connection lost
[3] STEP: Got error while streaming logs for pod kube-system/calico-node-zv79j, container calico-node: http2: client connection lost
[3] STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-3w3shm-control-plane-s66kb, container kube-apiserver: http2: client connection lost
[3] STEP: Got error while streaming logs for pod kube-system/kube-proxy-plh7r, container kube-proxy: http2: client connection lost
[3] STEP: Got error while streaming logs for pod kube-system/calico-node-jcd5n, container calico-node: http2: client connection lost
[3] STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-3w3shm-control-plane-bkn9x, container kube-scheduler: http2: client connection lost
[3] STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-mjjfj, container coredns: http2: client connection lost
[3] STEP: Got error while streaming logs for pod kube-system/kube-proxy-ljqrj, container kube-proxy: http2: client connection lost
[3] STEP: Got error while streaming logs for pod kube-system/calico-node-gc8h8, container calico-node: http2: client connection lost
[3] STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-3w3shm-control-plane-52mpb, container kube-controller-manager: http2: client connection lost
[3] STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-3w3shm-control-plane-bkn9x, container kube-apiserver: http2: client connection lost
[3] STEP: Got error while streaming logs for pod kube-system/kube-proxy-d5wbf, container kube-proxy: http2: client connection lost
[3] STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-3w3shm-control-plane-bkn9x, container kube-controller-manager: http2: client connection lost
[3] STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-3w3shm-control-plane-s66kb, container kube-scheduler: http2: client connection lost
[3] STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-3w3shm-control-plane-bkn9x, container etcd: http2: client connection lost
[3] STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-tlc8d, container coredns: http2: client connection lost
[3] STEP: Got error while streaming logs for pod kube-system/kube-proxy-p5msd, container kube-proxy: http2: client connection lost
[3] STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-3w3shm-control-plane-52mpb, container kube-scheduler: http2: client connection lost
[3] STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-3w3shm-control-plane-52mpb, container kube-apiserver: http2: client connection lost
[3] STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-3w3shm-control-plane-s66kb, container kube-controller-manager: http2: client connection lost
[3] STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-3w3shm-control-plane-52mpb, container etcd: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-59mrk, container coredns: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-tc0a64-control-plane-2tpf7, container etcd: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/calico-node-5hcxc, container calico-node: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-tc0a64-control-plane-z7d4v, container kube-controller-manager: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/calico-node-s6fxd, container calico-node: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/calico-node-cpl8k, container calico-node: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-tc0a64-control-plane-2tpf7, container kube-controller-manager: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-tc0a64-control-plane-nws9d, container kube-scheduler: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-nzh85, container coredns: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/kube-proxy-2czmr, container kube-proxy: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-tc0a64-control-plane-z7d4v, container etcd: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-tc0a64-control-plane-nws9d, container kube-apiserver: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-w6srd, container kube-proxy: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-wfmqw, container calico-kube-controllers: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-tc0a64-control-plane-nws9d, container etcd: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/calico-node-windows-jvsk8, container calico-node-felix: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-tc0a64-control-plane-2tpf7, container kube-apiserver: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/calico-node-windows-fhg9f, container calico-node-startup: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-tc0a64-control-plane-z7d4v, container kube-apiserver: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/calico-node-windows-fhg9f, container calico-node-felix: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-zwzlp, container kube-proxy: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/kube-proxy-8vg9n, container kube-proxy: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-tc0a64-control-plane-nws9d, container kube-controller-manager: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-tc0a64-control-plane-2tpf7, container kube-scheduler: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/kube-proxy-dmjwd, container kube-proxy: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/calico-node-windows-jvsk8, container calico-node-startup: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/kube-proxy-8xbj5, container kube-proxy: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/calico-node-vx7hv, container calico-node: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-tc0a64-control-plane-z7d4v, container kube-scheduler: http2: client connection lost
[4] Nov 10 06:27:30.445: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-5uu232-md-win-zhhjh
[4] 
[4] Failed to get logs for machine kcp-upgrade-5uu232-md-win-587b784c64-fdjsm, cluster kcp-upgrade-rdd1cq/kcp-upgrade-5uu232: dialing public load balancer at kcp-upgrade-5uu232-3e1171cc.uksouth.cloudapp.azure.com: dial tcp 20.90.125.205:22: connect: connection timed out
[4] STEP: Dumping workload cluster kcp-upgrade-rdd1cq/kcp-upgrade-5uu232 kube-system pod logs
[4] STEP: Fetching kube-system pod logs took 1.007557308s
[4] STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-drq9m, container coredns
[4] STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-ms7jr, container coredns
[4] STEP: Creating log watcher for controller kube-system/calico-node-windows-b54fb, container calico-node-startup
[4] STEP: Creating log watcher for controller kube-system/calico-node-windows-94wwm, container calico-node-felix
... skipping 8 lines ...
[4] STEP: Creating log watcher for controller kube-system/kube-proxy-windows-h7g9d, container kube-proxy
[4] STEP: Creating log watcher for controller kube-system/calico-node-nq82q, container calico-node
[4] STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-5uu232-control-plane-lp479, container kube-scheduler
[4] STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-5uu232-control-plane-lp479, container kube-controller-manager
[4] STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-ncl45, container calico-kube-controllers
[4] STEP: Creating log watcher for controller kube-system/calico-node-windows-94wwm, container calico-node-startup
[4] STEP: Got error while iterating over activity logs for resource group capz-e2e-eg962n: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
[4] STEP: Fetching activity logs took 30.000578654s
[4] STEP: Dumping all the Cluster API resources in the "kcp-upgrade-rdd1cq" namespace
[4] STEP: Deleting cluster kcp-upgrade-rdd1cq/kcp-upgrade-5uu232
[4] STEP: Deleting cluster kcp-upgrade-5uu232
[4] INFO: Waiting for the Cluster kcp-upgrade-rdd1cq/kcp-upgrade-5uu232 to be deleted
[4] STEP: Waiting for cluster kcp-upgrade-5uu232 to be deleted
[1] STEP: Deleting namespace used for hosting the "self-hosted" test spec
[1] INFO: Deleting namespace self-hosted
[1] STEP: Checking if any resources are left over in Azure for spec "self-hosted"
[1] STEP: Redacting sensitive information from logs
[4] STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-ms7jr, container coredns: http2: client connection lost
[4] STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-5uu232-control-plane-lp479, container kube-scheduler: http2: client connection lost
[4] STEP: Got error while streaming logs for pod kube-system/calico-node-windows-94wwm, container calico-node-startup: http2: client connection lost
[4] STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-5uu232-control-plane-lp479, container kube-apiserver: http2: client connection lost
[4] STEP: Got error while streaming logs for pod kube-system/calico-node-hzvfb, container calico-node: http2: client connection lost
[4] STEP: Got error while streaming logs for pod kube-system/kube-proxy-72l2r, container kube-proxy: http2: client connection lost
[4] STEP: Got error while streaming logs for pod kube-system/calico-node-windows-b54fb, container calico-node-startup: http2: client connection lost
[4] STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-5uu232-control-plane-lp479, container etcd: http2: client connection lost
[4] STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-5uu232-control-plane-lp479, container kube-controller-manager: http2: client connection lost
[4] STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-drq9m, container coredns: http2: client connection lost
[4] STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-h7g9d, container kube-proxy: http2: client connection lost
[4] STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-ncl45, container calico-kube-controllers: http2: client connection lost
[4] STEP: Got error while streaming logs for pod kube-system/kube-proxy-58mng, container kube-proxy: http2: client connection lost
[4] STEP: Got error while streaming logs for pod kube-system/calico-node-nq82q, container calico-node: http2: client connection lost
[4] STEP: Got error while streaming logs for pod kube-system/calico-node-windows-b54fb, container calico-node-felix: http2: client connection lost
[4] STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-f68fr, container kube-proxy: http2: client connection lost
[4] STEP: Got error while streaming logs for pod kube-system/calico-node-windows-94wwm, container calico-node-felix: http2: client connection lost
[1] STEP: Redacting sensitive information from logs
[1] 
[1] • [SLOW TEST:1071.674 seconds]
[1] Running the Cluster API E2E tests
[1] /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:40
[1]   Running the self-hosted spec
... skipping 266 lines ...
[1] STEP: Creating log watcher for controller kube-system/kube-proxy-vsb4v, container kube-proxy
[1] STEP: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-o2ku5q-control-plane-8fxj9, container kube-scheduler
[1] STEP: Creating log watcher for controller kube-system/calico-node-tx546, container calico-node
[1] STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-czskn, container coredns
[1] STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-wfsm2, container coredns
[1] STEP: Creating log watcher for controller kube-system/kube-proxy-llr49, container kube-proxy
[1] STEP: Error starting logs stream for pod kube-system/calico-node-xbx8n, container calico-node: container "calico-node" in pod "calico-node-xbx8n" is waiting to start: PodInitializing
[1] STEP: Fetching activity logs took 866.644779ms
[1] STEP: Dumping all the Cluster API resources in the "mhc-remediation-kithe6" namespace
[1] STEP: Deleting cluster mhc-remediation-kithe6/mhc-remediation-o2ku5q
[1] STEP: Deleting cluster mhc-remediation-o2ku5q
[1] INFO: Waiting for the Cluster mhc-remediation-kithe6/mhc-remediation-o2ku5q to be deleted
[1] STEP: Waiting for cluster mhc-remediation-o2ku5q to be deleted
[2] STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-adoption-dsmrf8-control-plane-0, container kube-apiserver: http2: client connection lost
[2] STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-nt7f7, container calico-kube-controllers: http2: client connection lost
[2] STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-adoption-dsmrf8-control-plane-0, container kube-scheduler: http2: client connection lost
[2] STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-8zfr6, container coredns: http2: client connection lost
[2] STEP: Got error while streaming logs for pod kube-system/calico-node-kr4rs, container calico-node: http2: client connection lost
[2] STEP: Got error while streaming logs for pod kube-system/etcd-kcp-adoption-dsmrf8-control-plane-0, container etcd: http2: client connection lost
[2] STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-lr8f6, container coredns: http2: client connection lost
[2] STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-adoption-dsmrf8-control-plane-0, container kube-controller-manager: http2: client connection lost
[2] STEP: Got error while streaming logs for pod kube-system/kube-proxy-cx69v, container kube-proxy: http2: client connection lost
[4] INFO: Waiting for control plane to be ready
[4] INFO: Waiting for control plane md-scale-vyznoz/md-scale-a3thwy-control-plane to be ready (implies underlying nodes to be ready as well)
[4] STEP: Waiting for the control plane to be ready
[4] INFO: Waiting for the machine deployments to be provisioned
[4] STEP: Waiting for the workload nodes to exist
[1] STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-o2ku5q-control-plane-8fxj9, container kube-apiserver: http2: client connection lost
[1] STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-wfsm2, container coredns: http2: client connection lost
[1] STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-x792c, container calico-kube-controllers: http2: client connection lost
[1] STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-o2ku5q-control-plane-8fxj9, container kube-scheduler: http2: client connection lost
[3] INFO: Waiting for control plane mhc-remediation-lvxfxy/mhc-remediation-7qklai-control-plane to be ready (implies underlying nodes to be ready as well)
[3] STEP: Waiting for the control plane to be ready
[3] INFO: Waiting for the machine deployments to be provisioned
[3] STEP: Waiting for the workload nodes to exist
[3] INFO: Waiting for the machine pools to be provisioned
[3] STEP: Setting a machine unhealthy and wait for KubeadmControlPlane remediation
... skipping 160 lines ...
[5] Nov 10 06:52:30.093: INFO: INFO: Collecting boot logs for AzureMachine machine-pool-z8bd5e-control-plane-dx4nl
[5] 
[5] Nov 10 06:52:31.390: INFO: INFO: Collecting logs for node win-p-win000002 in cluster machine-pool-z8bd5e in namespace machine-pool-dlswhi
[5] 
[5] Nov 10 06:52:50.877: INFO: INFO: Collecting boot logs for VMSS instance 2 of scale set machine-pool-z8bd5e-mp-0
[5] 
[5] Failed to get logs for machine pool machine-pool-z8bd5e-mp-0, cluster machine-pool-dlswhi/machine-pool-z8bd5e: [running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1]
[5] Nov 10 06:52:51.746: INFO: INFO: Collecting logs for node win-p-win000002 in cluster machine-pool-z8bd5e in namespace machine-pool-dlswhi
[5] 
[1] INFO: Waiting for control plane to be initialized
[1] INFO: Waiting for the first control plane machine managed by clusterctl-upgrade-dow8lu/clusterctl-upgrade-k23uvu-control-plane to be provisioned
[1] STEP: Waiting for one control plane node to exist
[5] Nov 10 06:54:02.681: INFO: INFO: Collecting boot logs for VMSS instance 2 of scale set win-p-win
[5] 
[5] Failed to get logs for machine pool machine-pool-z8bd5e-mp-win, cluster machine-pool-dlswhi/machine-pool-z8bd5e: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
[5] STEP: Dumping workload cluster machine-pool-dlswhi/machine-pool-z8bd5e kube-system pod logs
[5] STEP: Fetching kube-system pod logs took 992.344179ms
[5] STEP: Creating log watcher for controller kube-system/calico-node-wv9w4, container calico-node
[5] STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-z27mw, container coredns
[5] STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-g7h87, container coredns
[5] STEP: Creating log watcher for controller kube-system/calico-node-windows-f6lwc, container calico-node-startup
... skipping 15 lines ...
[5] STEP: Fetching activity logs took 724.100294ms
[5] STEP: Dumping all the Cluster API resources in the "machine-pool-dlswhi" namespace
[5] STEP: Deleting cluster machine-pool-dlswhi/machine-pool-z8bd5e
[5] STEP: Deleting cluster machine-pool-z8bd5e
[5] INFO: Waiting for the Cluster machine-pool-dlswhi/machine-pool-z8bd5e to be deleted
[5] STEP: Waiting for cluster machine-pool-z8bd5e to be deleted
[5] STEP: Error starting logs stream for pod kube-system/kube-proxy-kmqzm, container kube-proxy: Get "https://10.1.0.7:10250/containerLogs/kube-system/kube-proxy-kmqzm/kube-proxy?follow=true": dial tcp 10.1.0.7:10250: i/o timeout
[5] STEP: Error starting logs stream for pod kube-system/calico-node-52pc5, container calico-node: Get "https://10.1.0.6:10250/containerLogs/kube-system/calico-node-52pc5/calico-node?follow=true": dial tcp 10.1.0.6:10250: i/o timeout
[5] STEP: Error starting logs stream for pod kube-system/kube-proxy-xzfjw, container kube-proxy: Get "https://10.1.0.6:10250/containerLogs/kube-system/kube-proxy-xzfjw/kube-proxy?follow=true": dial tcp 10.1.0.6:10250: i/o timeout
[5] STEP: Error starting logs stream for pod kube-system/calico-node-svztp, container calico-node: Get "https://10.1.0.7:10250/containerLogs/kube-system/calico-node-svztp/calico-node?follow=true": dial tcp 10.1.0.7:10250: i/o timeout
[1] INFO: Waiting for control plane to be ready
[1] INFO: Waiting for control plane clusterctl-upgrade-dow8lu/clusterctl-upgrade-k23uvu-control-plane to be ready (implies underlying nodes to be ready as well)
[1] STEP: Waiting for the control plane to be ready
[1] INFO: Waiting for the machine deployments to be provisioned
[1] STEP: Waiting for the workload nodes to exist
[5] STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-z27mw, container coredns: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-flnmb, container kube-proxy: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-z4bxf, container calico-kube-controllers: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/calico-node-wv9w4, container calico-node: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/calico-node-windows-f6lwc, container calico-node-startup: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/calico-node-windows-f6lwc, container calico-node-felix: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/kube-apiserver-machine-pool-z8bd5e-control-plane-dx4nl, container kube-apiserver: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/etcd-machine-pool-z8bd5e-control-plane-dx4nl, container etcd: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/kube-proxy-xpsg5, container kube-proxy: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/calico-node-82dn6, container calico-node: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-g7h87, container coredns: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/kube-proxy-77qjb, container kube-proxy: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-machine-pool-z8bd5e-control-plane-dx4nl, container kube-controller-manager: http2: client connection lost
[5] STEP: Got error while streaming logs for pod kube-system/kube-scheduler-machine-pool-z8bd5e-control-plane-dx4nl, container kube-scheduler: http2: client connection lost
[2] INFO: Waiting for control plane node-drain-ehcnar/node-drain-6mii2a-control-plane to be ready (implies underlying nodes to be ready as well)
[2] STEP: Waiting for the control plane to be ready
[2] INFO: Waiting for the machine deployments to be provisioned
[2] STEP: Waiting for the workload nodes to exist
[2] INFO: Waiting for the machine pools to be provisioned
[2] STEP: Add a deployment with unevictable pods and podDisruptionBudget to the workload cluster. The deployed pods cannot be evicted in the node draining process.
... skipping 21 lines ...
[3]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/e2e/mhc_remediations.go:115
[3] ------------------------------
[3] 
[3] JUnit report was created: /logs/artifacts/junit.e2e_suite.3.xml
[3] 
[3] Ran 2 of 2 Specs in 3902.904 seconds
[3] SUCCESS! -- 2 Passed | 0 Failed | 0 Pending | 0 Skipped
[3] PASS
[3] 
[3] You're using deprecated Ginkgo functionality:
[3] =============================================
[3] Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
[3] A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
... skipping 93 lines ...
[4]   Should successfully scale out and scale in a MachineDeployment
[4]   /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:208
[4]     Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count [It]
[4]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/e2e/md_scale.go:70
[4] 
[4]     Timed out after 1200.002s.
[4]     Error: Unexpected non-nil/non-zero extra argument at index 1:
[4]     	<*errors.fundamental>: Machine count does not match existing nodes count
[4] 
[4]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/framework/machinedeployment_helpers.go:348
[4] 
[4]     Full Stack Trace
[4]     sigs.k8s.io/cluster-api/test/framework.ScaleAndWaitMachineDeployment(0x2583180, 0xc0000560d0, 0x25a1790, 0xc0003b1e90, 0xc000699180, 0xc000f00000, 0x3, 0xc0005540a0, 0x2, 0x2)
... skipping 32 lines ...
[4] 
[4] JUnit report was created: /logs/artifacts/junit.e2e_suite.4.xml
[4] 
[4] 
[4] Summarizing 1 Failure:
[4] 
[4] [Fail] Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment [It] Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count 
[4] /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/framework/machinedeployment_helpers.go:348
[4] 
[4] Ran 2 of 2 Specs in 4510.103 seconds
[4] FAIL! -- 1 Passed | 1 Failed | 0 Pending | 0 Skipped
[4] --- FAIL: TestE2E (4510.13s)
[4] FAIL
[4] 
[4] You're using deprecated Ginkgo functionality:
[4] =============================================
[4] Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
[4] A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
[4]   - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 36 lines ...
[1] STEP: Waiting for cluster clusterctl-upgrade-hgor1g to be deleted
[5] STEP: Deleting namespace used for hosting the "machine-pool" test spec
[5] INFO: Deleting namespace machine-pool-dlswhi
[5] STEP: Redacting sensitive information from logs
[2] Nov 10 07:12:03.089: INFO: INFO: Collecting boot logs for AzureMachine node-drain-6mii2a-control-plane-8whvs
[2] 
[2] Failed to get logs for machine node-drain-6mii2a-control-plane-j26zv, cluster node-drain-ehcnar/node-drain-6mii2a: dialing public load balancer at node-drain-6mii2a-559fb851.uksouth.cloudapp.azure.com: dial tcp 20.108.133.234:22: connect: connection timed out
[2] STEP: Dumping workload cluster node-drain-ehcnar/node-drain-6mii2a kube-system pod logs
[2] STEP: Fetching kube-system pod logs took 937.660047ms
[2] STEP: Dumping workload cluster node-drain-ehcnar/node-drain-6mii2a Azure activity log
[2] STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-l5vj4, container calico-kube-controllers
[2] STEP: Creating log watcher for controller kube-system/etcd-node-drain-6mii2a-control-plane-8whvs, container etcd
[2] STEP: Creating log watcher for controller kube-system/kube-controller-manager-node-drain-6mii2a-control-plane-8whvs, container kube-controller-manager
... skipping 19 lines ...
[5]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/e2e/machine_pool.go:76
[5] ------------------------------
[5] 
[5] JUnit report was created: /logs/artifacts/junit.e2e_suite.5.xml
[5] 
[5] Ran 2 of 2 Specs in 4820.699 seconds
[5] SUCCESS! -- 2 Passed | 0 Failed | 0 Pending | 0 Skipped
[5] 
[5] You're using deprecated Ginkgo functionality:
[5] =============================================
[5] Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
[5] A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
[5]   - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 36 lines ...
[1] STEP: Creating log watcher for controller kube-system/kube-apiserver-clusterctl-upgrade-k23uvu-control-plane-jck2b, container kube-apiserver
[1] STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-2z8qj, container coredns
[1] STEP: Creating log watcher for controller kube-system/kube-proxy-vkvcz, container kube-proxy
[1] STEP: Creating log watcher for controller kube-system/kube-proxy-xr8ph, container kube-proxy
[1] STEP: Creating log watcher for controller kube-system/kube-controller-manager-clusterctl-upgrade-k23uvu-control-plane-jck2b, container kube-controller-manager
[1] STEP: Creating log watcher for controller kube-system/kube-scheduler-clusterctl-upgrade-k23uvu-control-plane-jck2b, container kube-scheduler
[1] STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
[1] STEP: Fetching activity logs took 249.915489ms
[1] STEP: Dumping all the Cluster API resources in the "clusterctl-upgrade-dow8lu" namespace
[1] STEP: Deleting cluster clusterctl-upgrade-dow8lu/clusterctl-upgrade-k23uvu
[1] STEP: Deleting cluster clusterctl-upgrade-k23uvu
[1] INFO: Waiting for the Cluster clusterctl-upgrade-dow8lu/clusterctl-upgrade-k23uvu to be deleted
[1] STEP: Waiting for cluster clusterctl-upgrade-k23uvu to be deleted
[1] INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-865c969d7-fjmqp, container manager: http2: client connection lost
[1] INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-86b5f554dd-k6gk8, container manager: http2: client connection lost
[1] INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-6bdc78c4d4-mpqkl, container manager: http2: client connection lost
[1] INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-f6cb77bb4-cz8zt, container manager: http2: client connection lost
[1] STEP: Got error while streaming logs for pod kube-system/calico-node-8fdsc, container calico-node: http2: client connection lost
[1] STEP: Got error while streaming logs for pod kube-system/kube-proxy-xr8ph, container kube-proxy: http2: client connection lost
[2] STEP: Deleting namespace used for hosting the "node-drain" test spec
[2] INFO: Deleting namespace node-drain-ehcnar
[2] STEP: Redacting sensitive information from logs
[2] 
[2] • [SLOW TEST:1902.680 seconds]
[2] Running the Cluster API E2E tests
... skipping 4 lines ...
[2]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/e2e/node_drain_timeout.go:82
[2] ------------------------------
[2] 
[2] JUnit report was created: /logs/artifacts/junit.e2e_suite.2.xml
[2] 
[2] Ran 3 of 3 Specs in 5308.983 seconds
[2] SUCCESS! -- 3 Passed | 0 Failed | 0 Pending | 0 Skipped
[2] 
[2] You're using deprecated Ginkgo functionality:
[2] =============================================
[2] Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
[2] A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
[2]   - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 26 lines ...
[1] ------------------------------
[1] STEP: Tearing down the management cluster
[1] 
[1] JUnit report was created: /logs/artifacts/junit.e2e_suite.1.xml
[1] 
[1] Ran 4 of 14 Specs in 5517.407 seconds
[1] SUCCESS! -- 4 Passed | 0 Failed | 0 Pending | 10 Skipped
[1] PASS
[1] 
[1] You're using deprecated Ginkgo functionality:
[1] =============================================
[1] Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
[1] A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
... skipping 11 lines ...
[1] 
[1] To silence deprecations that can be silenced set the following environment variable:
[1]   ACK_GINKGO_DEPRECATIONS=1.16.5
[1] 

Ginkgo ran 1 suite in 1h33m19.660332978s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...