PR shysank: v1alpha4 -> v1beta1 clusterctl upgrade test
Result: FAILURE
Tests: 1 failed / 13 succeeded
Started: 2021-11-17 00:10
Elapsed: 2h21m
Revision: ee7a6ed67cb87d871a770045a4904a1eda93ad60
Refs: 1810

Test Failures


capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec in a HA cluster Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd 28m14s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sRunning\sthe\sCluster\sAPI\sE2E\stests\sRunning\sthe\sKCP\supgrade\sspec\sin\sa\sHA\scluster\sShould\ssuccessfully\supgrade\sKubernetes\,\sDNS\,\skube\-proxy\,\sand\setcd$'
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.1/e2e/kcp_upgrade.go:75
Timed out after 1200.000s.
Expected
    <int>: 1
to equal
    <int>: 2
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.1/framework/machinedeployment_helpers.go:121
				
Full stdout/stderr in junit.e2e_suite.3.xml



Passed Tests: 13

Skipped Tests: 10

Error lines from build-log.txt

... skipping 473 lines ...
Nov 17 00:23:09.704: INFO: INFO: Collecting boot logs for AzureMachine quick-start-cxscrv-md-0-hsncb

Nov 17 00:23:09.981: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster quick-start-cxscrv in namespace quick-start-n59wcw

Nov 17 00:23:28.224: INFO: INFO: Collecting boot logs for AzureMachine quick-start-cxscrv-md-win-xbhdd

Failed to get logs for machine quick-start-cxscrv-md-win-6bf66c7b86-97hck, cluster quick-start-n59wcw/quick-start-cxscrv: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov 17 00:23:28.501: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster quick-start-cxscrv in namespace quick-start-n59wcw

Nov 17 00:23:50.041: INFO: INFO: Collecting boot logs for AzureMachine quick-start-cxscrv-md-win-z7pgb

Failed to get logs for machine quick-start-cxscrv-md-win-6bf66c7b86-fgvgk, cluster quick-start-n59wcw/quick-start-cxscrv: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster quick-start-n59wcw/quick-start-cxscrv kube-system pod logs
STEP: Fetching kube-system pod logs took 382.809297ms
STEP: Dumping workload cluster quick-start-n59wcw/quick-start-cxscrv Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-2khl6, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-windows-nj9wf, container calico-node-felix
STEP: Creating log watcher for controller kube-system/kube-controller-manager-quick-start-cxscrv-control-plane-bqvvs, container kube-controller-manager
... skipping 8 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-s9b7r, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-quick-start-cxscrv-control-plane-bqvvs, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-apiserver-quick-start-cxscrv-control-plane-bqvvs, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-xd5kc, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-windows-nj9wf, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-zkzfd, container calico-kube-controllers
STEP: Error starting logs stream for pod kube-system/calico-node-windows-nj9wf, container calico-node-felix: container "calico-node-felix" in pod "calico-node-windows-nj9wf" is waiting to start: PodInitializing
STEP: Error starting logs stream for pod kube-system/calico-node-windows-nj9wf, container calico-node-startup: container "calico-node-startup" in pod "calico-node-windows-nj9wf" is waiting to start: PodInitializing
STEP: Fetching activity logs took 525.102438ms
STEP: Dumping all the Cluster API resources in the "quick-start-n59wcw" namespace
STEP: Deleting cluster quick-start-n59wcw/quick-start-cxscrv
STEP: Deleting cluster quick-start-cxscrv
INFO: Waiting for the Cluster quick-start-n59wcw/quick-start-cxscrv to be deleted
STEP: Waiting for cluster quick-start-cxscrv to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-2mdrq, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-hsr9t, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-wkz4p, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-quick-start-cxscrv-control-plane-bqvvs, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-zkzfd, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-rk5np, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-t4cs8, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-quick-start-cxscrv-control-plane-bqvvs, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-quick-start-cxscrv-control-plane-bqvvs, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-s9b7r, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-quick-start-cxscrv-control-plane-bqvvs, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-d5hml, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-wkz4p, container calico-node-felix: http2: client connection lost
STEP: Deleting namespace used for hosting the "quick-start" test spec
INFO: Deleting namespace quick-start-n59wcw
STEP: Redacting sensitive information from logs


• [SLOW TEST:773.403 seconds]
... skipping 63 lines ...
Nov 17 00:44:09.356: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-r6fu7n-md-0-wtn4v

Nov 17 00:44:09.624: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster kcp-upgrade-r6fu7n in namespace kcp-upgrade-6wyq7p

Nov 17 00:44:35.955: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-r6fu7n-md-win-bckgq

Failed to get logs for machine kcp-upgrade-r6fu7n-md-win-cb86d895b-64jgb, cluster kcp-upgrade-6wyq7p/kcp-upgrade-r6fu7n: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov 17 00:44:36.247: INFO: INFO: Unable to collect logs as node doesn't have addresses
Nov 17 00:44:36.247: INFO: INFO: Collecting logs for node kcp-upgrade-r6fu7n-md-win-7zgqq in cluster kcp-upgrade-r6fu7n in namespace kcp-upgrade-6wyq7p

Nov 17 00:44:42.616: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-r6fu7n-md-win-7zgqq

STEP: Redacting sensitive information from logs
... skipping 105 lines ...
STEP: Dumping logs from the "kcp-upgrade-xdc41q" workload cluster
STEP: Dumping workload cluster kcp-upgrade-1ymn8s/kcp-upgrade-xdc41q logs
Nov 17 00:29:08.648: INFO: INFO: Collecting logs for node kcp-upgrade-xdc41q-control-plane-6h2bj in cluster kcp-upgrade-xdc41q in namespace kcp-upgrade-1ymn8s

Nov 17 00:31:19.471: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-xdc41q-control-plane-6h2bj

Failed to get logs for machine kcp-upgrade-xdc41q-control-plane-5wf7x, cluster kcp-upgrade-1ymn8s/kcp-upgrade-xdc41q: dialing public load balancer at kcp-upgrade-xdc41q-5e01ad8d.eastus2.cloudapp.azure.com: dial tcp 52.242.101.139:22: connect: connection timed out
Nov 17 00:31:20.338: INFO: INFO: Collecting logs for node kcp-upgrade-xdc41q-md-0-nkv9n in cluster kcp-upgrade-xdc41q in namespace kcp-upgrade-1ymn8s

Nov 17 00:33:30.539: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-xdc41q-md-0-nkv9n

Failed to get logs for machine kcp-upgrade-xdc41q-md-0-7d5f777b6-mzpf6, cluster kcp-upgrade-1ymn8s/kcp-upgrade-xdc41q: dialing public load balancer at kcp-upgrade-xdc41q-5e01ad8d.eastus2.cloudapp.azure.com: dial tcp 52.242.101.139:22: connect: connection timed out
Nov 17 00:33:31.981: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster kcp-upgrade-xdc41q in namespace kcp-upgrade-1ymn8s

Nov 17 00:40:03.755: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-xdc41q-md-win-2wm4n

Failed to get logs for machine kcp-upgrade-xdc41q-md-win-6748469-djtws, cluster kcp-upgrade-1ymn8s/kcp-upgrade-xdc41q: dialing public load balancer at kcp-upgrade-xdc41q-5e01ad8d.eastus2.cloudapp.azure.com: dial tcp 52.242.101.139:22: connect: connection timed out
Nov 17 00:40:04.563: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster kcp-upgrade-xdc41q in namespace kcp-upgrade-1ymn8s

Nov 17 00:46:36.975: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-xdc41q-md-win-qvvpd

Failed to get logs for machine kcp-upgrade-xdc41q-md-win-6748469-xvdxp, cluster kcp-upgrade-1ymn8s/kcp-upgrade-xdc41q: dialing public load balancer at kcp-upgrade-xdc41q-5e01ad8d.eastus2.cloudapp.azure.com: dial tcp 52.242.101.139:22: connect: connection timed out
STEP: Dumping workload cluster kcp-upgrade-1ymn8s/kcp-upgrade-xdc41q kube-system pod logs
STEP: Fetching kube-system pod logs took 381.853135ms
STEP: Dumping workload cluster kcp-upgrade-1ymn8s/kcp-upgrade-xdc41q Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-qfxvh, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/calico-node-d2gwb, container calico-node
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-xdc41q-control-plane-6h2bj, container etcd
... skipping 8 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-windows-8t5cm, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-proxy-h8jh6, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-dxgnn, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-windows-8t5cm, container calico-node-felix
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-xdc41q-control-plane-6h2bj, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-node-windows-p5dps, container calico-node-felix
STEP: Got error while iterating over activity logs for resource group capz-e2e-8f6wu1: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000936134s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-1ymn8s" namespace
STEP: Deleting cluster kcp-upgrade-1ymn8s/kcp-upgrade-xdc41q
STEP: Deleting cluster kcp-upgrade-xdc41q
INFO: Waiting for the Cluster kcp-upgrade-1ymn8s/kcp-upgrade-xdc41q to be deleted
STEP: Waiting for cluster kcp-upgrade-xdc41q to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-d2gwb, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-dxgnn, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-qfxvh, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-xdc41q-control-plane-6h2bj, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-m6gm6, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-4j87j, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-xdc41q-control-plane-6h2bj, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-676dj, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-p5dps, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-h8jh6, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-p5dps, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-xdc41q-control-plane-6h2bj, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-xdc41q-control-plane-6h2bj, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-kkktm, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-1ymn8s
STEP: Redacting sensitive information from logs


• [SLOW TEST:2201.801 seconds]
... skipping 91 lines ...
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-nzftik-control-plane-5bzrd, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-nzftik-control-plane-8sh9v, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-nzftik-control-plane-8sh9v, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-nzftik-control-plane-8sh9v, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-fsrdg, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-6nk5d, container kube-proxy
STEP: Got error while iterating over activity logs for resource group capz-e2e-gge858: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000713923s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-p5hb3e" namespace
STEP: Deleting cluster kcp-upgrade-p5hb3e/kcp-upgrade-nzftik
STEP: Deleting cluster kcp-upgrade-nzftik
INFO: Waiting for the Cluster kcp-upgrade-p5hb3e/kcp-upgrade-nzftik to be deleted
STEP: Waiting for cluster kcp-upgrade-nzftik to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-q7msg, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-ht9zf, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-qhdls, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-nzftik-control-plane-5bzrd, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-nzftik-control-plane-m4qfk, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-f7d79, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-mgxfw, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-nzftik-control-plane-5bzrd, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-nzftik-control-plane-5bzrd, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-nzftik-control-plane-m4qfk, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-nzftik-control-plane-8sh9v, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-6nk5d, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-nzftik-control-plane-5bzrd, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-nzftik-control-plane-m4qfk, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-nzftik-control-plane-8sh9v, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-44wfc, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-nzftik-control-plane-8sh9v, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-nzftik-control-plane-m4qfk, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-nzftik-control-plane-8sh9v, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-p5hb3e
STEP: Redacting sensitive information from logs


• [SLOW TEST:1990.684 seconds]
... skipping 7 lines ...
Running the Cluster API E2E tests Running the self-hosted spec 
  Should pivot the bootstrap cluster to a self-hosted cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:107

STEP: Creating namespace "self-hosted" for hosting the cluster
Nov 17 00:53:27.070: INFO: starting to create namespace for hosting the "self-hosted" test spec
2021/11/17 00:53:27 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-fv9z68" using the "management" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-fv9z68 --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 1 --worker-machine-count 1 --flavor management
... skipping 73 lines ...
STEP: Fetching activity logs took 710.564069ms
STEP: Dumping all the Cluster API resources in the "self-hosted" namespace
STEP: Deleting all clusters in the self-hosted namespace
STEP: Deleting cluster self-hosted-fv9z68
INFO: Waiting for the Cluster self-hosted/self-hosted-fv9z68 to be deleted
STEP: Waiting for cluster self-hosted-fv9z68 to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-rjdg8, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-self-hosted-fv9z68-control-plane-hjrcl, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-cjhrd, container coredns: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-86b5f554dd-gqw6g, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-self-hosted-fv9z68-control-plane-hjrcl, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-self-hosted-fv9z68-control-plane-hjrcl, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-zrtjd, container calico-kube-controllers: http2: client connection lost
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-5c9b6bcb5b-gqjbg, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-865c969d7-w7ww7, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-bttxv, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-self-hosted-fv9z68-control-plane-hjrcl, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-ljgjt, container coredns: http2: client connection lost
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-6bdc78c4d4-mxgxp, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-tnqr7, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-w7m68, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "self-hosted" test spec
INFO: Deleting namespace self-hosted
STEP: Checking if any resources are left over in Azure for spec "self-hosted"
STEP: Redacting sensitive information from logs
STEP: Redacting sensitive information from logs

... skipping 68 lines ...
Nov 17 00:56:45.661: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-xa1zul-md-0-5iu2bn-4nxfg

Nov 17 00:56:45.979: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster md-rollout-xa1zul in namespace md-rollout-ijn89o

Nov 17 00:57:51.663: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-xa1zul-md-win-9xgtb

Failed to get logs for machine md-rollout-xa1zul-md-win-57c6c84df5-cf72m, cluster md-rollout-ijn89o/md-rollout-xa1zul: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov 17 00:57:51.914: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster md-rollout-xa1zul in namespace md-rollout-ijn89o

Nov 17 00:59:09.731: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-xa1zul-md-win-x7tz6

Failed to get logs for machine md-rollout-xa1zul-md-win-57c6c84df5-wl6kj, cluster md-rollout-ijn89o/md-rollout-xa1zul: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov 17 00:59:10.047: INFO: INFO: Collecting logs for node 10.1.0.8 in cluster md-rollout-xa1zul in namespace md-rollout-ijn89o

Nov 17 00:59:50.098: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-xa1zul-md-win-6hnc2s-9fgcr

Failed to get logs for machine md-rollout-xa1zul-md-win-965574bc6-qdtn2, cluster md-rollout-ijn89o/md-rollout-xa1zul: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster md-rollout-ijn89o/md-rollout-xa1zul kube-system pod logs
STEP: Fetching kube-system pod logs took 427.120419ms
STEP: Dumping workload cluster md-rollout-ijn89o/md-rollout-xa1zul Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-jlmc4, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/calico-node-windows-2th7b, container calico-node-felix
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-9vs6d, container coredns
... skipping 17 lines ...
STEP: Fetching activity logs took 1.155857025s
STEP: Dumping all the Cluster API resources in the "md-rollout-ijn89o" namespace
STEP: Deleting cluster md-rollout-ijn89o/md-rollout-xa1zul
STEP: Deleting cluster md-rollout-xa1zul
INFO: Waiting for the Cluster md-rollout-ijn89o/md-rollout-xa1zul to be deleted
STEP: Waiting for cluster md-rollout-xa1zul to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-md-rollout-xa1zul-control-plane-bqxlc, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-md-rollout-xa1zul-control-plane-bqxlc, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-4mdhl, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-prjl5, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-mljtl, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-8hxrw, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-prjl5, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-mwjrm, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-jlmc4, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-bkh2d, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-md-rollout-xa1zul-control-plane-bqxlc, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-md-rollout-xa1zul-control-plane-bqxlc, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-4mdhl, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-8sppf, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-9vs6d, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-2th7b, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-jgslh, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-2th7b, container calico-node-startup: http2: client connection lost
STEP: Deleting namespace used for hosting the "md-rollout" test spec
INFO: Deleting namespace md-rollout-ijn89o
STEP: Redacting sensitive information from logs


• [SLOW TEST:2036.693 seconds]
... skipping 68 lines ...
STEP: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-xmz580-control-plane-5xslg, container kube-scheduler
STEP: Dumping workload cluster mhc-remediation-tpeq4l/mhc-remediation-xmz580 Azure activity log
STEP: Creating log watcher for controller kube-system/kube-proxy-mv24h, container kube-proxy
STEP: Creating log watcher for controller kube-system/etcd-mhc-remediation-xmz580-control-plane-5xslg, container etcd
STEP: Creating log watcher for controller kube-system/calico-node-4xpcr, container calico-node
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-q8p6w, container calico-kube-controllers
STEP: Error starting logs stream for pod kube-system/calico-node-4xpcr, container calico-node: container "calico-node" in pod "calico-node-4xpcr" is waiting to start: PodInitializing
STEP: Fetching activity logs took 516.60709ms
STEP: Dumping all the Cluster API resources in the "mhc-remediation-tpeq4l" namespace
STEP: Deleting cluster mhc-remediation-tpeq4l/mhc-remediation-xmz580
STEP: Deleting cluster mhc-remediation-xmz580
INFO: Waiting for the Cluster mhc-remediation-tpeq4l/mhc-remediation-xmz580 to be deleted
STEP: Waiting for cluster mhc-remediation-xmz580 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-xmz580-control-plane-5xslg, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-mv24h, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-znhwh, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-xmz580-control-plane-5xslg, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-6w4tn, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-q8p6w, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-xmz580-control-plane-5xslg, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-qvxp5, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-xmz580-control-plane-5xslg, container etcd: http2: client connection lost
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
INFO: Deleting namespace mhc-remediation-tpeq4l
STEP: Redacting sensitive information from logs


• [SLOW TEST:993.941 seconds]
... skipping 96 lines ...
STEP: Fetching activity logs took 553.968227ms
STEP: Dumping all the Cluster API resources in the "mhc-remediation-kzgjkw" namespace
STEP: Deleting cluster mhc-remediation-kzgjkw/mhc-remediation-ytmduj
STEP: Deleting cluster mhc-remediation-ytmduj
INFO: Waiting for the Cluster mhc-remediation-kzgjkw/mhc-remediation-ytmduj to be deleted
STEP: Waiting for cluster mhc-remediation-ytmduj to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-ytmduj-control-plane-znj7c, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-qln5l, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-d9x86, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-h5kgv, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-ytmduj-control-plane-vc72n, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-ytmduj-control-plane-znj7c, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-ytmduj-control-plane-znj7c, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-ytmduj-control-plane-vc72n, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-rsn6x, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-ytmduj-control-plane-vc72n, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-ytmduj-control-plane-znj7c, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-nxsq8, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-zwscw, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-czr4v, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-ytmduj-control-plane-vc72n, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
INFO: Deleting namespace mhc-remediation-kzgjkw
STEP: Redacting sensitive information from logs


• [SLOW TEST:1428.192 seconds]
... skipping 58 lines ...
STEP: Fetching activity logs took 467.829618ms
STEP: Dumping all the Cluster API resources in the "kcp-adoption-5xvrpb" namespace
STEP: Deleting cluster kcp-adoption-5xvrpb/kcp-adoption-ihsvon
STEP: Deleting cluster kcp-adoption-ihsvon
INFO: Waiting for the Cluster kcp-adoption-5xvrpb/kcp-adoption-ihsvon to be deleted
STEP: Waiting for cluster kcp-adoption-ihsvon to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-adoption-ihsvon-control-plane-0, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-sldss, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-adoption-ihsvon-control-plane-0, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-8xcs2, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-adoption-ihsvon-control-plane-0, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-g6gjb, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-adoption-ihsvon-control-plane-0, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-9xfr6, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-vbdm4, container calico-kube-controllers: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-adoption" test spec
INFO: Deleting namespace kcp-adoption-5xvrpb
STEP: Redacting sensitive information from logs


• [SLOW TEST:819.277 seconds]
... skipping 60 lines ...
Nov 17 01:45:16.422: INFO: Collecting boot logs for AzureMachine machine-pool-e8h1xu-control-plane-vfhnk

Nov 17 01:45:17.241: INFO: Collecting logs for node win-p-win000002 in cluster machine-pool-e8h1xu in namespace machine-pool-nvj6iq

Nov 17 01:45:35.848: INFO: Collecting boot logs for VMSS instance 2 of scale set machine-pool-e8h1xu-mp-0

Failed to get logs for machine pool machine-pool-e8h1xu-mp-0, cluster machine-pool-nvj6iq/machine-pool-e8h1xu: [running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1]
Nov 17 01:45:36.149: INFO: Collecting logs for node win-p-win000002 in cluster machine-pool-e8h1xu in namespace machine-pool-nvj6iq

Nov 17 01:46:15.328: INFO: Collecting boot logs for VMSS instance 2 of scale set win-p-win

Failed to get logs for machine pool machine-pool-e8h1xu-mp-win, cluster machine-pool-nvj6iq/machine-pool-e8h1xu: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster machine-pool-nvj6iq/machine-pool-e8h1xu kube-system pod logs
STEP: Fetching kube-system pod logs took 359.057286ms
STEP: Dumping workload cluster machine-pool-nvj6iq/machine-pool-e8h1xu Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-xdq24, container coredns
STEP: Creating log watcher for controller kube-system/kube-controller-manager-machine-pool-e8h1xu-control-plane-vfhnk, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/etcd-machine-pool-e8h1xu-control-plane-vfhnk, container etcd
... skipping 11 lines ...
STEP: Fetching activity logs took 788.130997ms
STEP: Dumping all the Cluster API resources in the "machine-pool-nvj6iq" namespace
STEP: Deleting cluster machine-pool-nvj6iq/machine-pool-e8h1xu
STEP: Deleting cluster machine-pool-e8h1xu
INFO: Waiting for the Cluster machine-pool-nvj6iq/machine-pool-e8h1xu to be deleted
STEP: Waiting for cluster machine-pool-e8h1xu to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-v92fr, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-machine-pool-e8h1xu-control-plane-vfhnk, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-xdq24, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-fp446, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-g2s4g, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-machine-pool-e8h1xu-control-plane-vfhnk, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-pq9gc, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-g2s4g, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-machine-pool-e8h1xu-control-plane-vfhnk, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-nb4j4, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-machine-pool-e8h1xu-control-plane-vfhnk, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-67lp4, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "machine-pool" test spec
INFO: Deleting namespace machine-pool-nvj6iq
STEP: Redacting sensitive information from logs


• [SLOW TEST:2407.726 seconds]
... skipping 61 lines ...
Nov 17 01:44:34.384: INFO: Collecting boot logs for AzureMachine md-scale-gt6ddv-md-0-cpkql

Nov 17 01:44:34.726: INFO: Collecting logs for node 10.1.0.5 in cluster md-scale-gt6ddv in namespace md-scale-0rls3c

Nov 17 01:45:45.629: INFO: Collecting boot logs for AzureMachine md-scale-gt6ddv-md-win-lr2dn

Failed to get logs for machine md-scale-gt6ddv-md-win-5d45964b4d-5mjbw, cluster md-scale-0rls3c/md-scale-gt6ddv: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov 17 01:45:45.963: INFO: Collecting logs for node 10.1.0.6 in cluster md-scale-gt6ddv in namespace md-scale-0rls3c

Nov 17 01:46:09.937: INFO: Collecting boot logs for AzureMachine md-scale-gt6ddv-md-win-77s2q

Failed to get logs for machine md-scale-gt6ddv-md-win-5d45964b4d-brj6m, cluster md-scale-0rls3c/md-scale-gt6ddv: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster md-scale-0rls3c/md-scale-gt6ddv kube-system pod logs
STEP: Fetching kube-system pod logs took 394.979419ms
STEP: Dumping workload cluster md-scale-0rls3c/md-scale-gt6ddv Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-windows-k6slh, container calico-node-felix
STEP: Creating log watcher for controller kube-system/kube-proxy-c75dq, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-qwkbr, container kube-proxy
... skipping 14 lines ...
STEP: Fetching activity logs took 666.16334ms
STEP: Dumping all the Cluster API resources in the "md-scale-0rls3c" namespace
STEP: Deleting cluster md-scale-0rls3c/md-scale-gt6ddv
STEP: Deleting cluster md-scale-gt6ddv
INFO: Waiting for the Cluster md-scale-0rls3c/md-scale-gt6ddv to be deleted
STEP: Waiting for cluster md-scale-gt6ddv to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-ch6m8, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-md-scale-gt6ddv-control-plane-6dgk2, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-md-scale-gt6ddv-control-plane-6dgk2, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-2wv7s, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-pkld2, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-khr44, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-md-scale-gt6ddv-control-plane-6dgk2, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-gk7ns, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-c75dq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-k6slh, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-qwkbr, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-md-scale-gt6ddv-control-plane-6dgk2, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-ch6m8, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-k6slh, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-p6stm, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "md-scale" test spec
INFO: Deleting namespace md-scale-0rls3c
STEP: Redacting sensitive information from logs


• [SLOW TEST:1739.544 seconds]
... skipping 56 lines ...
STEP: Dumping logs from the "node-drain-6wzl0z" workload cluster
STEP: Dumping workload cluster node-drain-iegq5v/node-drain-6wzl0z logs
Nov 17 01:59:01.245: INFO: Collecting logs for node node-drain-6wzl0z-control-plane-zk7z9 in cluster node-drain-6wzl0z in namespace node-drain-iegq5v

Nov 17 02:01:11.855: INFO: Collecting boot logs for AzureMachine node-drain-6wzl0z-control-plane-zk7z9

Failed to get logs for machine node-drain-6wzl0z-control-plane-hm5l2, cluster node-drain-iegq5v/node-drain-6wzl0z: dialing public load balancer at node-drain-6wzl0z-f08c4657.eastus2.cloudapp.azure.com: dial tcp 20.62.52.66:22: connect: connection timed out
STEP: Dumping workload cluster node-drain-iegq5v/node-drain-6wzl0z kube-system pod logs
STEP: Fetching kube-system pod logs took 325.552652ms
STEP: Dumping workload cluster node-drain-iegq5v/node-drain-6wzl0z Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-mt7k5, container coredns
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-nb8zc, container coredns
STEP: Creating log watcher for controller kube-system/kube-controller-manager-node-drain-6wzl0z-control-plane-zk7z9, container kube-controller-manager
... skipping 142 lines ...
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-4bq2j, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-controller-manager-clusterctl-upgrade-wj5dp3-control-plane-hzcr7, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-node-v46qd, container calico-node
STEP: Creating log watcher for controller kube-system/etcd-clusterctl-upgrade-wj5dp3-control-plane-hzcr7, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-2mmkq, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-q9ftt, container coredns
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 225.269808ms
STEP: Dumping all the Cluster API resources in the "clusterctl-upgrade-hf457l" namespace
STEP: Deleting cluster clusterctl-upgrade-hf457l/clusterctl-upgrade-wj5dp3
STEP: Deleting cluster clusterctl-upgrade-wj5dp3
INFO: Waiting for the Cluster clusterctl-upgrade-hf457l/clusterctl-upgrade-wj5dp3 to be deleted
STEP: Waiting for cluster clusterctl-upgrade-wj5dp3 to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-x78ks, container calico-node: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-865c969d7-g2z2l, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-5c9b6bcb5b-xvk7v, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-2mmkq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-clusterctl-upgrade-wj5dp3-control-plane-hzcr7, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-q9ftt, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-4bq2j, container calico-kube-controllers: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-86b5f554dd-knlfp, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-c77ht, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-clusterctl-upgrade-wj5dp3-control-plane-hzcr7, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-clusterctl-upgrade-wj5dp3-control-plane-hzcr7, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-z7r5f, container coredns: http2: client connection lost
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-6bdc78c4d4-62rg9, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-clusterctl-upgrade-wj5dp3-control-plane-hzcr7, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-v46qd, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "clusterctl-upgrade" test spec
INFO: Deleting namespace clusterctl-upgrade-hf457l
STEP: Redacting sensitive information from logs


• [SLOW TEST:1587.250 seconds]
... skipping 143 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-kz25w, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-8wmmp, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-nnnt8, container coredns
STEP: Creating log watcher for controller kube-system/kube-scheduler-clusterctl-upgrade-gabab8-control-plane-scks2, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-28z64, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-tlvtw, container calico-kube-controllers
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 237.62492ms
STEP: Dumping all the Cluster API resources in the "clusterctl-upgrade-5w5jv8" namespace
STEP: Deleting cluster clusterctl-upgrade-5w5jv8/clusterctl-upgrade-gabab8
STEP: Deleting cluster clusterctl-upgrade-gabab8
INFO: Waiting for the Cluster clusterctl-upgrade-5w5jv8/clusterctl-upgrade-gabab8 to be deleted
STEP: Waiting for cluster clusterctl-upgrade-gabab8 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-clusterctl-upgrade-gabab8-control-plane-scks2, container kube-apiserver: http2: client connection lost
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-5c9b6bcb5b-t4s47, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-8wmmp, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-tlvtw, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-28z64, container kube-proxy: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-865c969d7-nv4s9, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-86b5f554dd-79k58, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-clusterctl-upgrade-gabab8-control-plane-scks2, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-ft2zp, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-n5pxz, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-kz25w, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-clusterctl-upgrade-gabab8-control-plane-scks2, container kube-scheduler: http2: client connection lost
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-6bdc78c4d4-rgzrt, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-clusterctl-upgrade-gabab8-control-plane-scks2, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-nnnt8, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "clusterctl-upgrade" test spec
INFO: Deleting namespace clusterctl-upgrade-5w5jv8
STEP: Redacting sensitive information from logs


• [SLOW TEST:1812.695 seconds]
... skipping 9 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Running the Cluster API E2E tests Running the KCP upgrade spec in a HA cluster [It] Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.1/framework/machinedeployment_helpers.go:121

Ran 14 of 24 Specs in 8096.215 seconds
FAIL! -- 13 Passed | 1 Failed | 0 Pending | 10 Skipped


Ginkgo ran 1 suite in 2h16m16.510233886s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc
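The two silencing options the notice lists can be applied as follows (either one suffices; this only suppresses the notice and does not change test behavior):

```shell
# Option 1: set the acknowledgement variable for the current session
export ACK_GINKGO_RC=true

# Option 2: persist the acknowledgement via a marker file in $HOME
touch "$HOME/.ack-ginkgo-rc"
```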
make[1]: *** [Makefile:176: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:184: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...