PR: shysank: [WIP] Increase parallelism for e2e tests
Result: ABORTED
Tests: 1 failed / 3 succeeded
Started: 2021-11-01 19:34
Elapsed: 2h9m
Revision: 37d342cd9f1dd85d9bb68c0ea42c79a60ac5d570
Refs: 1816

Test Failures


capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec in a HA cluster Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd (1h4m)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sRunning\sthe\sCluster\sAPI\sE2E\stests\sRunning\sthe\sKCP\supgrade\sspec\sin\sa\sHA\scluster\sShould\ssuccessfully\supgrade\sKubernetes\,\sDNS\,\skube\-proxy\,\sand\setcd$'
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/e2e/kcp_upgrade.go:112
Timed out after 1800.001s.
Expected
    <bool>: false
to be true
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/framework/cluster_helpers.go:165
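
This failure signature comes from a Gomega Eventually assertion: the framework polls a boolean condition (here via cluster_helpers.go:165) that never flips to true within the 30-minute window, producing the "Timed out after 1800.001s" output above. A minimal sketch of that pattern, assuming an illustrative machinesReady helper rather than the actual cluster-api code:

package e2e

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
)

// machinesReady is a hypothetical stand-in for the readiness check that
// cluster_helpers.go polls while waiting for the KCP upgrade to converge.
func machinesReady(ctx context.Context) bool {
	// ...query the management cluster for Machine/Node conditions...
	return false
}

func waitForMachinesReady(ctx context.Context) {
	// If the condition never becomes true within the window, Gomega prints:
	//   Timed out after 1800.001s.
	//   Expected
	//       <bool>: false
	//   to be true
	Eventually(func() bool {
		return machinesReady(ctx)
	}, 30*time.Minute, 10*time.Second).Should(BeTrue())
}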
				
Full stdout/stderr: junit.e2e_suite.3.xml




Error lines from build-log.txt

... skipping 475 lines ...
Nov  1 19:49:46.115: INFO: INFO: Collecting boot logs for AzureMachine quick-start-6x802f-md-0-4f2qr

Nov  1 19:49:46.411: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster quick-start-6x802f in namespace quick-start-g6jkr8

Nov  1 19:50:12.046: INFO: INFO: Collecting boot logs for AzureMachine quick-start-6x802f-md-win-nkxmq

Failed to get logs for machine quick-start-6x802f-md-win-5cf4f887b9-jx5vf, cluster quick-start-g6jkr8/quick-start-6x802f: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov  1 19:50:12.327: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster quick-start-6x802f in namespace quick-start-g6jkr8

Nov  1 19:50:33.504: INFO: INFO: Collecting boot logs for AzureMachine quick-start-6x802f-md-win-zqvlk

Failed to get logs for machine quick-start-6x802f-md-win-5cf4f887b9-qpzr4, cluster quick-start-g6jkr8/quick-start-6x802f: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster quick-start-g6jkr8/quick-start-6x802f kube-system pod logs
STEP: Fetching kube-system pod logs took 424.116386ms
STEP: Dumping workload cluster quick-start-g6jkr8/quick-start-6x802f Azure activity log
STEP: Creating log watcher for controller kube-system/kube-controller-manager-quick-start-6x802f-control-plane-br8vs, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-node-987gk, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-hfktk, container kube-proxy
... skipping 14 lines ...
STEP: Fetching activity logs took 506.246448ms
STEP: Dumping all the Cluster API resources in the "quick-start-g6jkr8" namespace
STEP: Deleting cluster quick-start-g6jkr8/quick-start-6x802f
STEP: Deleting cluster quick-start-6x802f
INFO: Waiting for the Cluster quick-start-g6jkr8/quick-start-6x802f to be deleted
STEP: Waiting for cluster quick-start-6x802f to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-hfktk, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-rq2m2, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-987gk, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-quick-start-6x802f-control-plane-br8vs, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-quick-start-6x802f-control-plane-br8vs, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-quick-start-6x802f-control-plane-br8vs, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-mcvnd, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-hp6c8, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-hp6c8, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-l6zv9, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-jv2bv, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-quick-start-6x802f-control-plane-br8vs, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-vj2rs, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-m87z4, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-p87g8, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-p87g8, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-6pjs9, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "quick-start" test spec
INFO: Deleting namespace quick-start-g6jkr8
STEP: Redacting sensitive information from logs
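
The "Creating log watcher" / "Got error while streaming logs ... http2: client connection lost" pairs above are the expected byproduct of follow-mode pod log streams being severed when the workload cluster's nodes are deleted out from under them. A minimal sketch of such a watcher, assuming client-go; watchPodLogs is illustrative, not the suite's actual helper:

package e2e

import (
	"context"
	"fmt"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

func watchPodLogs(ctx context.Context, cs kubernetes.Interface, ns, pod, container string) {
	req := cs.CoreV1().Pods(ns).GetLogs(pod, &corev1.PodLogOptions{
		Container: container,
		Follow:    true, // stream until the connection or the pod goes away
	})
	stream, err := req.Stream(ctx)
	if err != nil {
		fmt.Printf("STEP: Error starting logs stream for pod %s/%s, container %s: %v\n", ns, pod, container, err)
		return
	}
	defer stream.Close()
	if _, err := io.Copy(os.Stdout, stream); err != nil {
		// Deleting the cluster severs the stream; report it and move on
		// rather than failing the spec.
		fmt.Printf("STEP: Got error while streaming logs for pod %s/%s, container %s: %v\n", ns, pod, container, err)
	}
}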


• [SLOW TEST:946.998 seconds]
... skipping 67 lines ...
Nov  1 19:56:31.114: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-avqy19-md-0-y4884a-dq4kf

Nov  1 19:56:31.459: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster md-rollout-avqy19 in namespace md-rollout-7tgsq6

Nov  1 19:57:37.793: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-avqy19-md-win-q5gpw

Failed to get logs for machine md-rollout-avqy19-md-win-654446bbb9-8nqgg, cluster md-rollout-7tgsq6/md-rollout-avqy19: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov  1 19:59:04.664: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster md-rollout-avqy19 in namespace md-rollout-7tgsq6

Nov  1 19:59:52.195: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-avqy19-md-win-b85l8

Failed to get logs for machine md-rollout-avqy19-md-win-654446bbb9-8z7lm, cluster md-rollout-7tgsq6/md-rollout-avqy19: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov  1 19:59:52.592: INFO: INFO: Collecting logs for node 10.1.0.8 in cluster md-rollout-avqy19 in namespace md-rollout-7tgsq6

Nov  1 20:01:07.456: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-avqy19-md-win-b3w5ij-tvm8k

Failed to get logs for machine md-rollout-avqy19-md-win-7f77c9b896-ngn67, cluster md-rollout-7tgsq6/md-rollout-avqy19: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster md-rollout-7tgsq6/md-rollout-avqy19 kube-system pod logs
STEP: Creating log watcher for controller kube-system/calico-node-hnw5q, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-windows-4djgb, container calico-node-felix
STEP: Creating log watcher for controller kube-system/calico-node-windows-b5pvd, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-node-windows-tcn2b, container calico-node-startup
STEP: Creating log watcher for controller kube-system/etcd-md-rollout-avqy19-control-plane-bphm4, container etcd
... skipping 11 lines ...
STEP: Creating log watcher for controller kube-system/kube-scheduler-md-rollout-avqy19-control-plane-bphm4, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-przpg, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-82thq, container calico-node
STEP: Creating log watcher for controller kube-system/kube-controller-manager-md-rollout-avqy19-control-plane-bphm4, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-tfq59, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-2t9zm, container kube-proxy
STEP: Error starting logs stream for pod kube-system/calico-node-windows-b5pvd, container calico-node-startup: pods "md-rollou-q5gpw" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-windows-wtfvc, container kube-proxy: pods "md-rollou-q5gpw" not found
STEP: Error starting logs stream for pod kube-system/calico-node-windows-b5pvd, container calico-node-felix: pods "md-rollou-q5gpw" not found
STEP: Fetching activity logs took 1.270266936s
STEP: Dumping all the Cluster API resources in the "md-rollout-7tgsq6" namespace
STEP: Deleting cluster md-rollout-7tgsq6/md-rollout-avqy19
STEP: Deleting cluster md-rollout-avqy19
INFO: Waiting for the Cluster md-rollout-7tgsq6/md-rollout-avqy19 to be deleted
STEP: Waiting for cluster md-rollout-avqy19 to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-tcn2b, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-md-rollout-avqy19-control-plane-bphm4, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-4djgb, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-82thq, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-2t9zm, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-przpg, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-fwtxz, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-8bsjx, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-4djgb, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-md-rollout-avqy19-control-plane-bphm4, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-tcn2b, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-c42dj, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-lk5qm, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-md-rollout-avqy19-control-plane-bphm4, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-md-rollout-avqy19-control-plane-bphm4, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "md-rollout" test spec
INFO: Deleting namespace md-rollout-7tgsq6
STEP: Redacting sensitive information from logs


• [SLOW TEST:1781.483 seconds]
... skipping 8 lines ...
  Should pivot the bootstrap cluster to a self-hosted cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:107

Node Id (1 Indexed): 1
STEP: Creating namespace "self-hosted" for hosting the cluster
Nov  1 19:57:16.994: INFO: starting to create namespace for hosting the "self-hosted" test spec
2021/11/01 19:57:16 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-2ey4yg" using the "management" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-2ey4yg --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 1 --worker-machine-count 1 --flavor management
... skipping 73 lines ...
STEP: Fetching activity logs took 878.346211ms
STEP: Dumping all the Cluster API resources in the "self-hosted" namespace
STEP: Deleting all clusters in the self-hosted namespace
STEP: Deleting cluster self-hosted-2ey4yg
INFO: Waiting for the Cluster self-hosted/self-hosted-2ey4yg to be deleted
STEP: Waiting for cluster self-hosted-2ey4yg to be deleted
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-6bdc78c4d4-668gk, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-54xj9, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-self-hosted-2ey4yg-control-plane-kbw4l, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-fzr7l, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-qtvpf, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-86xzn, container kube-proxy: http2: client connection lost
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-56f7f455cb-d6c2p, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-self-hosted-2ey4yg-control-plane-kbw4l, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-w6268, container kube-proxy: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-86b5f554dd-nrn8g, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-self-hosted-2ey4yg-control-plane-kbw4l, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-self-hosted-2ey4yg-control-plane-kbw4l, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-wnflk, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-7tgx4, container calico-node: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-865c969d7-vcc5f, container manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "self-hosted" test spec
INFO: Deleting namespace self-hosted
STEP: Checking if any resources are left over in Azure for spec "self-hosted"
STEP: Redacting sensitive information from logs
STEP: Redacting sensitive information from logs

... skipping 94 lines ...
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-sk2zk, container coredns
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-3dnqvz-control-plane-6lw95, container etcd
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-3dnqvz-control-plane-7rnlm, container etcd
STEP: Creating log watcher for controller kube-system/calico-node-gm5t6, container calico-node
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-3dnqvz-control-plane-jkxfv, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-3dnqvz-control-plane-jkxfv, container etcd
STEP: Got error while iterating over activity logs for resource group capz-e2e-2joce5: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000692277s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-5dvtld" namespace
STEP: Deleting cluster kcp-upgrade-5dvtld/kcp-upgrade-3dnqvz
STEP: Deleting cluster kcp-upgrade-3dnqvz
INFO: Waiting for the Cluster kcp-upgrade-5dvtld/kcp-upgrade-3dnqvz to be deleted
STEP: Waiting for cluster kcp-upgrade-3dnqvz to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-g9428, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-3dnqvz-control-plane-7rnlm, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-3dnqvz-control-plane-7rnlm, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-3dnqvz-control-plane-6lw95, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-m747w, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-3dnqvz-control-plane-7rnlm, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-sk2zk, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-3dnqvz-control-plane-6lw95, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-sdgrg, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-3dnqvz-control-plane-7rnlm, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-sqnst, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-3dnqvz-control-plane-6lw95, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-3dnqvz-control-plane-6lw95, container etcd: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-5dvtld
STEP: Redacting sensitive information from logs
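
The ~30.000s "Fetching activity logs took ..." lines above line up with a 30-second context deadline on the Azure activity-log pagination: a slow page (here a 500 from insights.ActivityLogsClient#listNextResults) runs into the deadline, iteration aborts with "context deadline exceeded", and the elapsed time is reported at almost exactly the timeout. A minimal sketch of that bounding pattern; listActivityLogPage is a hypothetical placeholder, not the real insights SDK call:

package e2e

import (
	"context"
	"fmt"
	"time"
)

// listActivityLogPage is a hypothetical stand-in for fetching one page of
// activity-log results from the Azure API; it blocks on the remote call.
func listActivityLogPage(ctx context.Context, resourceGroup string, page int) (done bool, err error) {
	select {
	case <-ctx.Done():
		return false, ctx.Err() // surfaces "context deadline exceeded"
	case <-time.After(2 * time.Second): // simulated API latency
		return page >= 100, nil
	}
}

func fetchActivityLogs(resourceGroup string) {
	start := time.Now()
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	for page, done := 0, false; !done; page++ {
		var err error
		done, err = listActivityLogPage(ctx, resourceGroup, page)
		if err != nil {
			// A slow or failing page hits the deadline, so the fetch is
			// reported at almost exactly 30s.
			fmt.Printf("STEP: Got error while iterating over activity logs for resource group %s: %v\n", resourceGroup, err)
			break
		}
	}
	fmt.Printf("STEP: Fetching activity logs took %s\n", time.Since(start))
}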


• [SLOW TEST:3260.533 seconds]
... skipping 159 lines ...
Nov  1 20:14:12.916: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-og0lty-md-0-48tqs

Nov  1 20:14:13.339: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster kcp-upgrade-og0lty in namespace kcp-upgrade-umhp5r

Nov  1 20:14:39.368: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-og0lty-md-win-zd6xm

Failed to get logs for machine kcp-upgrade-og0lty-md-win-7cc897b849-6r7l6, cluster kcp-upgrade-umhp5r/kcp-upgrade-og0lty: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov  1 20:14:39.746: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster kcp-upgrade-og0lty in namespace kcp-upgrade-umhp5r

Nov  1 20:15:07.033: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-og0lty-md-win-jllfh

Failed to get logs for machine kcp-upgrade-og0lty-md-win-7cc897b849-fjhf2, cluster kcp-upgrade-umhp5r/kcp-upgrade-og0lty: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster kcp-upgrade-umhp5r/kcp-upgrade-og0lty kube-system pod logs
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-og0lty-control-plane-zsv4z, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-windows-vknb5, container calico-node-felix
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-og0lty-control-plane-dvtt2, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-og0lty-control-plane-zsv4z, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-4x6lc, container kube-proxy
... skipping 20 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-snvjh, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-windows-vknb5, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-node-lg7zr, container calico-node
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-og0lty-control-plane-zsv4z, container kube-apiserver
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-og0lty-control-plane-zsv4z, container etcd
STEP: Creating log watcher for controller kube-system/calico-node-dqhwf, container calico-node
STEP: Got error while iterating over activity logs for resource group capz-e2e-vzt8fw: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000501528s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-umhp5r" namespace
STEP: Deleting cluster kcp-upgrade-umhp5r/kcp-upgrade-og0lty
STEP: Deleting cluster kcp-upgrade-og0lty
INFO: Waiting for the Cluster kcp-upgrade-umhp5r/kcp-upgrade-og0lty to be deleted
STEP: Waiting for cluster kcp-upgrade-og0lty to be deleted
... skipping 215 lines ...
STEP: Dumping logs from the "kcp-upgrade-62m5su" workload cluster
STEP: Dumping workload cluster kcp-upgrade-hfx3ed/kcp-upgrade-62m5su logs
Nov  1 19:58:35.340: INFO: INFO: Collecting logs for node kcp-upgrade-62m5su-control-plane-ldq2k in cluster kcp-upgrade-62m5su in namespace kcp-upgrade-hfx3ed

Nov  1 20:00:46.114: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-62m5su-control-plane-ldq2k

Failed to get logs for machine kcp-upgrade-62m5su-control-plane-bfgnt, cluster kcp-upgrade-hfx3ed/kcp-upgrade-62m5su: dialing public load balancer at kcp-upgrade-62m5su-213b360e.eastus2.cloudapp.azure.com: dial tcp 20.44.79.235:22: connect: connection timed out
Nov  1 20:00:47.055: INFO: INFO: Collecting logs for node kcp-upgrade-62m5su-md-0-d6g5x in cluster kcp-upgrade-62m5su in namespace kcp-upgrade-hfx3ed

Nov  1 20:02:57.186: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-62m5su-md-0-d6g5x

Failed to get logs for machine kcp-upgrade-62m5su-md-0-5dbdff94c7-cf2hd, cluster kcp-upgrade-hfx3ed/kcp-upgrade-62m5su: dialing public load balancer at kcp-upgrade-62m5su-213b360e.eastus2.cloudapp.azure.com: dial tcp 20.44.79.235:22: connect: connection timed out
Nov  1 20:02:57.988: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster kcp-upgrade-62m5su in namespace kcp-upgrade-hfx3ed

Nov  1 20:09:30.401: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-62m5su-md-win-cfrj7

Failed to get logs for machine kcp-upgrade-62m5su-md-win-6c8fb9fbbb-j9fzg, cluster kcp-upgrade-hfx3ed/kcp-upgrade-62m5su: dialing public load balancer at kcp-upgrade-62m5su-213b360e.eastus2.cloudapp.azure.com: dial tcp 20.44.79.235:22: connect: connection timed out
Nov  1 20:09:31.185: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster kcp-upgrade-62m5su in namespace kcp-upgrade-hfx3ed

Nov  1 20:16:03.621: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-62m5su-md-win-thlch

Failed to get logs for machine kcp-upgrade-62m5su-md-win-6c8fb9fbbb-vkxxl, cluster kcp-upgrade-hfx3ed/kcp-upgrade-62m5su: dialing public load balancer at kcp-upgrade-62m5su-213b360e.eastus2.cloudapp.azure.com: dial tcp 20.44.79.235:22: connect: connection timed out
STEP: Dumping workload cluster kcp-upgrade-hfx3ed/kcp-upgrade-62m5su kube-system pod logs
STEP: Fetching kube-system pod logs took 388.042068ms
STEP: Dumping workload cluster kcp-upgrade-hfx3ed/kcp-upgrade-62m5su Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-4ptb6, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-windows-bxw24, container calico-node-startup
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-8r2lw, container coredns
... skipping 8 lines ...
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-69dpb, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-62m5su-control-plane-ldq2k, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-node-windows-bxw24, container calico-node-felix
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-62m5su-control-plane-ldq2k, container etcd
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-62m5su-control-plane-ldq2k, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-node-windows-tgbt8, container calico-node-startup
STEP: Got error while iterating over activity logs for resource group capz-e2e-r4xu1u: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000271187s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-hfx3ed" namespace
STEP: Deleting cluster kcp-upgrade-hfx3ed/kcp-upgrade-62m5su
STEP: Deleting cluster kcp-upgrade-62m5su
INFO: Waiting for the Cluster kcp-upgrade-hfx3ed/kcp-upgrade-62m5su to be deleted
STEP: Waiting for cluster kcp-upgrade-62m5su to be deleted
... skipping 162 lines ...
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane md-scale-ycgavh/md-scale-bcoa7k-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Waiting for the workload nodes to exist
STEP: Got error while streaming logs for pod kube-system/kube-proxy-4x6lc, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-og0lty-control-plane-xbz9j, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-og0lty-control-plane-xbz9j, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-og0lty-control-plane-zsv4z, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-og0lty-control-plane-dvtt2, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-zf9gm, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-9dmgk, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-og0lty-control-plane-zsv4z, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-og0lty-control-plane-zsv4z, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-og0lty-control-plane-zsv4z, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-og0lty-control-plane-dvtt2, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-og0lty-control-plane-dvtt2, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-og0lty-control-plane-dvtt2, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-dqhwf, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-l2k6h, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-6twdw, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-og0lty-control-plane-xbz9j, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-snvjh, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-og0lty-control-plane-xbz9j, container kube-apiserver: http2: client connection lost
INFO: Waiting for the machine pools to be provisioned
STEP: Scaling the MachineDeployment out to 3
INFO: Scaling machine deployment md-scale-ycgavh/md-scale-bcoa7k-md-0 from 1 to 3 replicas
INFO: Waiting for correct number of replicas to exist
STEP: Scaling the MachineDeployment down to 1
INFO: Scaling machine deployment md-scale-ycgavh/md-scale-bcoa7k-md-0 from 3 to 1 replicas
... skipping 10 lines ...
Nov  1 21:08:03.601: INFO: INFO: Collecting boot logs for AzureMachine md-scale-bcoa7k-md-0-pmf69

Nov  1 21:08:03.836: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster md-scale-bcoa7k in namespace md-scale-ycgavh

Nov  1 21:09:16.322: INFO: INFO: Collecting boot logs for AzureMachine md-scale-bcoa7k-md-win-2cdj8

Failed to get logs for machine md-scale-bcoa7k-md-win-794f59d846-lr7kb, cluster md-scale-ycgavh/md-scale-bcoa7k: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov  1 21:09:16.979: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster md-scale-bcoa7k in namespace md-scale-ycgavh

Nov  1 21:09:40.814: INFO: INFO: Collecting boot logs for AzureMachine md-scale-bcoa7k-md-win-fg5f2

Failed to get logs for machine md-scale-bcoa7k-md-win-794f59d846-sgc9c, cluster md-scale-ycgavh/md-scale-bcoa7k: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster md-scale-ycgavh/md-scale-bcoa7k kube-system pod logs
STEP: Fetching kube-system pod logs took 642.260297ms
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-qcnhm, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/calico-node-windows-429dt, container calico-node-felix
STEP: Creating log watcher for controller kube-system/kube-proxy-fnscc, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-windows-7lg4n, container calico-node-felix
... skipping 8 lines ...
STEP: Creating log watcher for controller kube-system/etcd-md-scale-bcoa7k-control-plane-ljbm7, container etcd
STEP: Creating log watcher for controller kube-system/calico-node-windows-429dt, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-apiserver-md-scale-bcoa7k-control-plane-ljbm7, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-ljtrj, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-md-scale-bcoa7k-control-plane-ljbm7, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-scheduler-md-scale-bcoa7k-control-plane-ljbm7, container kube-scheduler
STEP: Got error while iterating over activity logs for resource group capz-e2e-giyndz: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000450157s
STEP: Dumping all the Cluster API resources in the "md-scale-ycgavh" namespace
STEP: Deleting cluster md-scale-ycgavh/md-scale-bcoa7k
STEP: Deleting cluster md-scale-bcoa7k
INFO: Waiting for the Cluster md-scale-ycgavh/md-scale-bcoa7k to be deleted
STEP: Waiting for cluster md-scale-bcoa7k to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-qcnhm, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-2fwnf, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-md-scale-bcoa7k-control-plane-ljbm7, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-429dt, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-7cjxr, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-md-scale-bcoa7k-control-plane-ljbm7, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-429dt, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-md-scale-bcoa7k-control-plane-ljbm7, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-vdlqs, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-md-scale-bcoa7k-control-plane-ljbm7, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-wzzn7, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-mxpzq, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "md-scale" test spec
INFO: Deleting namespace md-scale-ycgavh
STEP: Redacting sensitive information from logs


• [SLOW TEST:2080.487 seconds]
Running the Cluster API E2E tests
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:40
  Should successfully scale out and scale in a MachineDeployment
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:209
    Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/e2e/md_scale.go:70
------------------------------
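
The md-scale spec above drives the scale-out to 3 and back to 1 by updating spec.replicas on the MachineDeployment and waiting for the replica count to converge. A minimal sketch of that update against the management cluster, assuming a controller-runtime client; scaleMachineDeployment is illustrative, not the cluster-api test framework's own helper:

package e2e

import (
	"context"

	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// scaleMachineDeployment sets the desired replica count; the controllers
// then create or delete Machines until the deployment converges.
func scaleMachineDeployment(ctx context.Context, c client.Client, namespace, name string, replicas int32) error {
	md := &clusterv1.MachineDeployment{}
	if err := c.Get(ctx, client.ObjectKey{Namespace: namespace, Name: name}, md); err != nil {
		return err
	}
	md.Spec.Replicas = &replicas // e.g. 1 -> 3, then 3 -> 1, as in the log
	return c.Update(ctx, md)
}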
{"component":"entrypoint","file":"prow/entrypoint/run.go:169","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","severity":"error","time":"2021-11-01T21:28:36Z"}
++ early_exit_handler
++ '[' -n 164 ']'
++ kill -TERM 164
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 5 lines ...