PR CecileRobertMichon: Add workload cluster upgrades spec
Result: FAILURE
Tests: 1 failed / 13 succeeded
Started: 2021-11-16 17:25
Elapsed: 2h47m
Revision: 49f6f62d97fa787bac7ddb4e5017b280d03cfa3b
Refs: 1767

Test Failures


capz-e2e Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines (27m49s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sRunning\sthe\sCluster\sAPI\sE2E\stests\sShould\ssuccessfully\sexercise\smachine\spools\sShould\ssuccessfully\screate\sa\scluster\swith\smachine\spool\smachines$'
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/e2e/machine_pool.go:119
Test Panicked
/usr/local/go/src/runtime/panic.go:88

Panic: runtime error: index out of range [1] with length 1

Full stack:
sigs.k8s.io/cluster-api-provider-azure/test/e2e.AzureLogCollector.CollectMachinePoolLog(0x2595320, 0xc0000560d0, 0x25afe80, 0xc00020eaf0, 0xc00018b898, 0xc00210d0e0, 0x55, 0xc00210d080, 0xc0004fcc20)
	/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_logcollector.go:104 +0x7be
sigs.k8s.io/cluster-api/test/framework.(*clusterProxy).CollectWorkloadClusterLogs(0xc000a04680, 0x2595320, 0xc0000560d0, 0xc001fee228, 0x13, 0xc001fee210, 0x13, 0xc0006fe1e0, 0x2c)
	/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/framework/cluster_proxy.go:250 +0x8ce
sigs.k8s.io/cluster-api-provider-azure/test/e2e.(*AzureClusterProxy).CollectWorkloadClusterLogs(0xc001a73b60, 0x2595320, 0xc0000560d0, 0xc001fee228, 0x13, 0xc001fee210, 0x13, 0xc0006fe1e0, 0x2c)
	/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/e2e_suite_test.go:129 +0x174
sigs.k8s.io/cluster-api/test/e2e.dumpSpecResourcesAndCleanup(0x2595320, 0xc0000560d0, 0x229c76a, 0xc, 0x25b3a50, 0xc001a73b60, 0xc0005c6240, 0xf, 0xc000f1af20, 0xc000b352b0, ...)
	/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/e2e/common.go:67 +0x1b2
sigs.k8s.io/cluster-api/test/e2e.MachinePoolSpec.func3()
	/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/e2e/machine_pool.go:121 +0xc7
github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0006df320, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.16.5/internal/leafnodes/runner.go:113 +0xa3
github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0006df320, 0xc00111f1ee, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.16.5/internal/leafnodes/runner.go:64 +0x15c
github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc00046a1d8, 0x25507a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.16.5/internal/leafnodes/setup_nodes.go:15 +0x87
github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc00111f6a0, 0xc00053b2c0, 0x25507a0, 0xc000070940)
	/home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.16.5/internal/spec/spec.go:180 +0x3cd
github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc00053b2c0, 0x0, 0x25507a0, 0xc000070940)
	/home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.16.5/internal/spec/spec.go:218 +0x809
github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc00053b2c0, 0x25507a0, 0xc000070940)
	/home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.16.5/internal/spec/spec.go:138 +0xf2
github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc0004c0580, 0xc00053b2c0, 0x2)
	/home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.16.5/internal/specrunner/spec_runner.go:200 +0x111
github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc0004c0580, 0x1)
	/home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.16.5/internal/specrunner/spec_runner.go:170 +0x147
github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc0004c0580, 0xc000836298)
	/home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.16.5/internal/specrunner/spec_runner.go:66 +0x117
github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0001ac9a0, 0x7fcc42aeebb8, 0xc000470c00, 0x22953a6, 0x8, 0xc000168320, 0x2, 0x2, 0x259e718, 0xc000070940, ...)
	/home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.16.5/internal/suite/suite.go:79 +0x546
github.com/onsi/ginkgo.runSpecsWithCustomReporters(0x25525c0, 0xc000470c00, 0x22953a6, 0x8, 0xc000168300, 0x2, 0x2, 0x2)
	/home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.16.5/ginkgo_dsl.go:245 +0x218
github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x25525c0, 0xc000470c00, 0x22953a6, 0x8, 0xc00009bf30, 0x1, 0x1, 0xc0006b6748)
	/home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.16.5/ginkgo_dsl.go:228 +0x136
sigs.k8s.io/cluster-api-provider-azure/test/e2e.TestE2E(0xc000470c00)
	/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/e2e_suite_test.go:260 +0x1da
testing.tRunner(0xc000470c00, 0x2388370)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
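The panic above, "index out of range [1] with length 1" raised from azure_logcollector.go:104, is the classic symptom of indexing a slice that is shorter than the code assumes: here, something like a machine-pool instance list with a single entry being read at index 1. As a hedged sketch (not the suite's actual code; the names are illustrative), a bounds-checked accessor turns the panic into a reportable condition:

```go
package main

import "fmt"

// instanceAt is a hypothetical guarded lookup: instead of panicking on an
// out-of-range index, it reports failure so the caller can skip or log it.
func instanceAt(instances []string, i int) (string, bool) {
	if i < 0 || i >= len(instances) {
		return "", false // out of range: no panic, caller decides what to do
	}
	return instances[i], true
}

func main() {
	instances := []string{"machinepool-vm-0"} // length 1, as in the failing run
	if _, ok := instanceAt(instances, 1); !ok {
		fmt.Println("index 1 out of range for length 1")
	}
}
```

A guard like this in the log collector would let the remaining teardown steps run instead of aborting the spec.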
				



Passed tests: 13

Skipped tests: 10

Error lines from build-log.txt

... skipping 480 lines ...
Nov 16 17:39:36.115: INFO: INFO: Collecting boot logs for AzureMachine quick-start-9ximh6-md-0-27tp8

Nov 16 17:39:36.424: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster quick-start-9ximh6 in namespace quick-start-so7b1m

Nov 16 17:40:23.342: INFO: INFO: Collecting boot logs for AzureMachine quick-start-9ximh6-md-win-l66cb

Failed to get logs for machine quick-start-9ximh6-md-win-5486bc4984-4z9jk, cluster quick-start-so7b1m/quick-start-9ximh6: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov 16 17:40:23.610: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster quick-start-9ximh6 in namespace quick-start-so7b1m

Nov 16 17:40:50.755: INFO: INFO: Collecting boot logs for AzureMachine quick-start-9ximh6-md-win-lsw8q

Failed to get logs for machine quick-start-9ximh6-md-win-5486bc4984-8cjkq, cluster quick-start-so7b1m/quick-start-9ximh6: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
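The bracketed "Failed to get logs for machine ..." lines above report one error per failed remote command rather than aborting on the first: both "get-eventlog ... -Source Docker" and "docker ps -a" exit non-zero on these Windows nodes (plausibly because they do not run Docker as the runtime), and both failures are aggregated. A hedged local sketch of that collect-and-continue pattern, with plain `exec` standing in for the remote Windows session:

```go
package main

import (
	"fmt"
	"os/exec"
)

// collectCommandErrors is illustrative, not the suite's helper: it runs each
// command, records any non-zero exit as a wrapped error, and keeps going so
// one broken command cannot hide the others' results.
func collectCommandErrors(cmds [][]string) []error {
	var errs []error
	for _, c := range cmds {
		if err := exec.Command(c[0], c[1:]...).Run(); err != nil {
			errs = append(errs, fmt.Errorf("running command %q: %w", c, err))
		}
	}
	return errs
}

func main() {
	// One failing and one succeeding command: only the failure is collected.
	errs := collectCommandErrors([][]string{
		{"sh", "-c", "exit 1"},
		{"sh", "-c", "exit 0"},
	})
	fmt.Println(len(errs))
}
```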
STEP: Dumping workload cluster quick-start-so7b1m/quick-start-9ximh6 kube-system pod logs
STEP: Fetching kube-system pod logs took 208.062862ms
STEP: Dumping workload cluster quick-start-so7b1m/quick-start-9ximh6 Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-djksx, container calico-node
STEP: Creating log watcher for controller kube-system/kube-apiserver-quick-start-9ximh6-control-plane-bw7ft, container kube-apiserver
STEP: Creating log watcher for controller kube-system/etcd-quick-start-9ximh6-control-plane-bw7ft, container etcd
... skipping 14 lines ...
STEP: Fetching activity logs took 588.030397ms
STEP: Dumping all the Cluster API resources in the "quick-start-so7b1m" namespace
STEP: Deleting cluster quick-start-so7b1m/quick-start-9ximh6
STEP: Deleting cluster quick-start-9ximh6
INFO: Waiting for the Cluster quick-start-so7b1m/quick-start-9ximh6 to be deleted
STEP: Waiting for cluster quick-start-9ximh6 to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-6vhrv, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-quick-start-9ximh6-control-plane-bw7ft, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-6vhrv, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-nc7ss, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-pthcc, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-quick-start-9ximh6-control-plane-bw7ft, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-djksx, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-2b2nk, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-mbksc, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-v8rvw, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-quick-start-9ximh6-control-plane-bw7ft, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-7d5m9, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-6pk9r, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-gpbtq, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-quick-start-9ximh6-control-plane-bw7ft, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-lnbcx, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "quick-start" test spec
INFO: Deleting namespace quick-start-so7b1m
STEP: Redacting sensitive information from logs


• [SLOW TEST:788.346 seconds]
... skipping 56 lines ...
STEP: Dumping logs from the "kcp-upgrade-r5ojim" workload cluster
STEP: Dumping workload cluster kcp-upgrade-tn06r7/kcp-upgrade-r5ojim logs
Nov 16 17:48:54.896: INFO: INFO: Collecting logs for node kcp-upgrade-r5ojim-control-plane-6dglc in cluster kcp-upgrade-r5ojim in namespace kcp-upgrade-tn06r7

Nov 16 17:51:04.619: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-r5ojim-control-plane-6dglc

Failed to get logs for machine kcp-upgrade-r5ojim-control-plane-jvpcz, cluster kcp-upgrade-tn06r7/kcp-upgrade-r5ojim: dialing public load balancer at kcp-upgrade-r5ojim-a46948a1.northcentralus.cloudapp.azure.com: dial tcp 52.159.94.60:22: connect: connection timed out
Nov 16 17:51:05.695: INFO: INFO: Collecting logs for node kcp-upgrade-r5ojim-md-0-r4b8m in cluster kcp-upgrade-r5ojim in namespace kcp-upgrade-tn06r7

Nov 16 17:53:15.691: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-r5ojim-md-0-r4b8m

Failed to get logs for machine kcp-upgrade-r5ojim-md-0-59cbd9bc9c-2mtm5, cluster kcp-upgrade-tn06r7/kcp-upgrade-r5ojim: dialing public load balancer at kcp-upgrade-r5ojim-a46948a1.northcentralus.cloudapp.azure.com: dial tcp 52.159.94.60:22: connect: connection timed out
Nov 16 17:53:16.489: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster kcp-upgrade-r5ojim in namespace kcp-upgrade-tn06r7

Nov 16 17:59:48.907: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-r5ojim-md-win-sfh4c

Failed to get logs for machine kcp-upgrade-r5ojim-md-win-79b6db756b-94f7v, cluster kcp-upgrade-tn06r7/kcp-upgrade-r5ojim: dialing public load balancer at kcp-upgrade-r5ojim-a46948a1.northcentralus.cloudapp.azure.com: dial tcp 52.159.94.60:22: connect: connection timed out
Nov 16 17:59:49.638: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster kcp-upgrade-r5ojim in namespace kcp-upgrade-tn06r7

Nov 16 18:06:22.123: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-r5ojim-md-win-9kj6j

Failed to get logs for machine kcp-upgrade-r5ojim-md-win-79b6db756b-kq5q8, cluster kcp-upgrade-tn06r7/kcp-upgrade-r5ojim: dialing public load balancer at kcp-upgrade-r5ojim-a46948a1.northcentralus.cloudapp.azure.com: dial tcp 52.159.94.60:22: connect: connection timed out
STEP: Dumping workload cluster kcp-upgrade-tn06r7/kcp-upgrade-r5ojim kube-system pod logs
STEP: Fetching kube-system pod logs took 216.314045ms
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-4t9xn, container coredns
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-r5ojim-control-plane-6dglc, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-windows-hnftv, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-node-fhc5s, container calico-node
... skipping 8 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-cvmt8, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-r5ojim-control-plane-6dglc, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-xk798, container kube-proxy
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-r5ojim-control-plane-6dglc, container etcd
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-r5ojim-control-plane-6dglc, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-4qwlc, container kube-proxy
STEP: Got error while iterating over activity logs for resource group capz-e2e-p2zjhd: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.00080037s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-tn06r7" namespace
STEP: Deleting cluster kcp-upgrade-tn06r7/kcp-upgrade-r5ojim
STEP: Deleting cluster kcp-upgrade-r5ojim
INFO: Waiting for the Cluster kcp-upgrade-tn06r7/kcp-upgrade-r5ojim to be deleted
STEP: Waiting for cluster kcp-upgrade-r5ojim to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-fhc5s, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-r5ojim-control-plane-6dglc, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-vcxkw, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-4t9xn, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-xk798, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-hnftv, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-pllzq, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-r5ojim-control-plane-6dglc, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-27x7v, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-4qwlc, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-cr2db, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-hnftv, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-cvmt8, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-r5ojim-control-plane-6dglc, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-vcxkw, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-jqc9n, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-r5ojim-control-plane-6dglc, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-tn06r7
STEP: Redacting sensitive information from logs


• [SLOW TEST:2492.058 seconds]
... skipping 74 lines ...
Nov 16 18:05:41.586: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-8gcfpt-md-0-wr44n

Nov 16 18:05:42.026: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster kcp-upgrade-8gcfpt in namespace kcp-upgrade-dswbqb

Nov 16 18:06:10.313: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-8gcfpt-md-win-lc6zh

Failed to get logs for machine kcp-upgrade-8gcfpt-md-win-6bff9c4f8d-fmsxs, cluster kcp-upgrade-dswbqb/kcp-upgrade-8gcfpt: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov 16 18:06:10.636: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster kcp-upgrade-8gcfpt in namespace kcp-upgrade-dswbqb

Nov 16 18:06:36.196: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-8gcfpt-md-win-vmz4s

Failed to get logs for machine kcp-upgrade-8gcfpt-md-win-6bff9c4f8d-mwtkb, cluster kcp-upgrade-dswbqb/kcp-upgrade-8gcfpt: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster kcp-upgrade-dswbqb/kcp-upgrade-8gcfpt kube-system pod logs
STEP: Fetching kube-system pod logs took 200.613433ms
STEP: Dumping workload cluster kcp-upgrade-dswbqb/kcp-upgrade-8gcfpt Azure activity log
STEP: Creating log watcher for controller kube-system/kube-proxy-8rmq2, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-windows-k2hb2, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-8gcfpt-control-plane-lhr6p, container kube-apiserver
... skipping 20 lines ...
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-8gcfpt-control-plane-7nmd9, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-s5ftg, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-xd5zx, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-8gcfpt-control-plane-27tzf, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-8gcfpt-control-plane-7nmd9, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-8gcfpt-control-plane-lhr6p, container kube-scheduler
STEP: Got error while iterating over activity logs for resource group capz-e2e-rbdems: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000281741s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-dswbqb" namespace
STEP: Deleting cluster kcp-upgrade-dswbqb/kcp-upgrade-8gcfpt
STEP: Deleting cluster kcp-upgrade-8gcfpt
INFO: Waiting for the Cluster kcp-upgrade-dswbqb/kcp-upgrade-8gcfpt to be deleted
STEP: Waiting for cluster kcp-upgrade-8gcfpt to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-7c8g4, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-k2hb2, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-8gcfpt-control-plane-7nmd9, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-8gcfpt-control-plane-27tzf, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-xd5zx, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-8gcfpt-control-plane-27tzf, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-5jc8v, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-8gcfpt-control-plane-lhr6p, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-8gcfpt-control-plane-7nmd9, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-8gcfpt-control-plane-7nmd9, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-62h4n, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-8gcfpt-control-plane-lhr6p, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-8gcfpt-control-plane-7nmd9, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-cbbs7, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-g25m8, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-8gcfpt-control-plane-27tzf, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-8gcfpt-control-plane-27tzf, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-8rmq2, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vdqvj, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-wd9qs, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-s5ftg, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-8gcfpt-control-plane-lhr6p, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-k2hb2, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-zxmp5, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-5jc8v, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-x7595, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-q6h77, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-42qgw, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-8gcfpt-control-plane-lhr6p, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-dswbqb
STEP: Redacting sensitive information from logs


• [SLOW TEST:2865.571 seconds]
... skipping 91 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-pd7tq, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-6ku8yx-control-plane-jl7wx, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-9scz6, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-6ku8yx-control-plane-cckzl, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-c4cml, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-6ku8yx-control-plane-m7vhr, container kube-scheduler
STEP: Got error while iterating over activity logs for resource group capz-e2e-ivec88: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000669946s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-0e9d1e" namespace
STEP: Deleting cluster kcp-upgrade-0e9d1e/kcp-upgrade-6ku8yx
STEP: Deleting cluster kcp-upgrade-6ku8yx
INFO: Waiting for the Cluster kcp-upgrade-0e9d1e/kcp-upgrade-6ku8yx to be deleted
STEP: Waiting for cluster kcp-upgrade-6ku8yx to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-ks8sb, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-6ku8yx-control-plane-m7vhr, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-6ku8yx-control-plane-cckzl, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-c4cml, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-ctl99, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-dk8c4, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-6ku8yx-control-plane-jl7wx, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-6ku8yx-control-plane-cckzl, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-6ku8yx-control-plane-jl7wx, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-6ku8yx-control-plane-jl7wx, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-6ku8yx-control-plane-m7vhr, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-6ku8yx-control-plane-m7vhr, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-6ku8yx-control-plane-cckzl, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-9scz6, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-clf2l, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-6ku8yx-control-plane-cckzl, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-xlct2, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-6ku8yx-control-plane-jl7wx, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-pd7tq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-jw2bd, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-6ku8yx-control-plane-m7vhr, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-689fp, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-hhfdf, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-0e9d1e
STEP: Redacting sensitive information from logs


• [SLOW TEST:2339.579 seconds]
... skipping 7 lines ...
Running the Cluster API E2E tests Running the self-hosted spec 
  Should pivot the bootstrap cluster to a self-hosted cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:107

STEP: Creating namespace "self-hosted" for hosting the cluster
Nov 16 18:21:17.569: INFO: starting to create namespace for hosting the "self-hosted" test spec
2021/11/16 18:21:17 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-j8irbw" using the "management" template (Kubernetes v1.22.3, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-j8irbw --infrastructure (default) --kubernetes-version v1.22.3 --control-plane-machine-count 1 --worker-machine-count 1 --flavor management
... skipping 73 lines ...
STEP: Fetching activity logs took 552.352595ms
STEP: Dumping all the Cluster API resources in the "self-hosted" namespace
STEP: Deleting all clusters in the self-hosted namespace
STEP: Deleting cluster self-hosted-j8irbw
INFO: Waiting for the Cluster self-hosted/self-hosted-j8irbw to be deleted
STEP: Waiting for cluster self-hosted-j8irbw to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-self-hosted-j8irbw-control-plane-r89w5, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-ntnvp, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-29rjf, container coredns: http2: client connection lost
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-6bdc78c4d4-znszs, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-self-hosted-j8irbw-control-plane-r89w5, container kube-controller-manager: http2: client connection lost
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-d47f94765-wl6lb, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-self-hosted-j8irbw-control-plane-r89w5, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-bmw79, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-d6pxz, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-self-hosted-j8irbw-control-plane-r89w5, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-86wph, container kube-proxy: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-86b5f554dd-7g4vd, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-s74sn, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-7bwtb, container calico-kube-controllers: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-865c969d7-zmzlv, container manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "self-hosted" test spec
INFO: Deleting namespace self-hosted
STEP: Checking if any resources are left over in Azure for spec "self-hosted"
STEP: Redacting sensitive information from logs

... skipping 76 lines ...
STEP: Fetching activity logs took 568.68805ms
STEP: Dumping all the Cluster API resources in the "mhc-remediation-3au2lc" namespace
STEP: Deleting cluster mhc-remediation-3au2lc/mhc-remediation-r8wj45
STEP: Deleting cluster mhc-remediation-r8wj45
INFO: Waiting for the Cluster mhc-remediation-3au2lc/mhc-remediation-r8wj45 to be deleted
STEP: Waiting for cluster mhc-remediation-r8wj45 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-r8wj45-control-plane-kblwz, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-h5k7g, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-xtpb9, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-r8wj45-control-plane-kblwz, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-rfzq2, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-94jzr, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-r8wj45-control-plane-kblwz, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-f4m4q, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-r8wj45-control-plane-kblwz, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
INFO: Deleting namespace mhc-remediation-3au2lc
STEP: Redacting sensitive information from logs


• [SLOW TEST:939.567 seconds]
... skipping 66 lines ...
Nov 16 18:28:53.490: INFO: Collecting boot logs for AzureMachine md-rollout-ii2oas-md-0-v34o41-gdz72

Nov 16 18:28:53.863: INFO: Collecting logs for node 10.1.0.4 in cluster md-rollout-ii2oas in namespace md-rollout-akrcoi

Nov 16 18:29:44.200: INFO: Collecting boot logs for AzureMachine md-rollout-ii2oas-md-win-nlnbn

Failed to get logs for machine md-rollout-ii2oas-md-win-664dd489d7-8cncz, cluster md-rollout-akrcoi/md-rollout-ii2oas: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov 16 18:29:44.579: INFO: Collecting logs for node 10.1.0.5 in cluster md-rollout-ii2oas in namespace md-rollout-akrcoi

Nov 16 18:32:13.342: INFO: Collecting boot logs for AzureMachine md-rollout-ii2oas-md-win-xtxrh

Failed to get logs for machine md-rollout-ii2oas-md-win-664dd489d7-sgwds, cluster md-rollout-akrcoi/md-rollout-ii2oas: [[dialing from control plane to target node at 10.1.0.5: ssh: rejected: connect failed (Connection timed out), [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]], failed to get boot diagnostics data: compute.VirtualMachinesClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ResourceNotFound" Message="The Resource 'Microsoft.Compute/virtualMachines/md-rollou-xtxrh' under resource group 'capz-e2e-st8k0m' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"]
Nov 16 18:32:13.890: INFO: Collecting logs for node 10.1.0.8 in cluster md-rollout-ii2oas in namespace md-rollout-akrcoi

Nov 16 18:32:37.315: INFO: Collecting boot logs for AzureMachine md-rollout-ii2oas-md-win-a6xukl-t7bht

Failed to get logs for machine md-rollout-ii2oas-md-win-7ff669695f-dzfll, cluster md-rollout-akrcoi/md-rollout-ii2oas: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster md-rollout-akrcoi/md-rollout-ii2oas kube-system pod logs
STEP: Fetching kube-system pod logs took 176.457334ms
STEP: Dumping workload cluster md-rollout-akrcoi/md-rollout-ii2oas Azure activity log
STEP: Creating log watcher for controller kube-system/kube-proxy-4dl7l, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-2kpzz, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-c765g, container kube-proxy
... skipping 11 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-windows-57b4k, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-node-windows-57b4k, container calico-node-felix
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-ljw22, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-5zzm7, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/calico-node-windows-rdvs8, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-node-windows-rdvs8, container calico-node-felix
STEP: Error starting logs stream for pod kube-system/kube-proxy-windows-fqwrt, container kube-proxy: container "kube-proxy" in pod "kube-proxy-windows-fqwrt" is waiting to start: ContainerCreating
STEP: Error starting logs stream for pod kube-system/calico-node-windows-czrmw, container calico-node-startup: container "calico-node-startup" in pod "calico-node-windows-czrmw" is waiting to start: PodInitializing
STEP: Error starting logs stream for pod kube-system/calico-node-windows-czrmw, container calico-node-felix: container "calico-node-felix" in pod "calico-node-windows-czrmw" is waiting to start: PodInitializing
STEP: Got error while iterating over activity logs for resource group capz-e2e-st8k0m: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001355143s
STEP: Dumping all the Cluster API resources in the "md-rollout-akrcoi" namespace
STEP: Deleting cluster md-rollout-akrcoi/md-rollout-ii2oas
STEP: Deleting cluster md-rollout-ii2oas
INFO: Waiting for the Cluster md-rollout-akrcoi/md-rollout-ii2oas to be deleted
STEP: Waiting for cluster md-rollout-ii2oas to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-c765g, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-8ctlz, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-rdvs8, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-57b4k, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-5zzm7, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-xppb9, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-ghk8t, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-ljw22, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-md-rollout-ii2oas-control-plane-6dzrv, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-57b4k, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-rdvs8, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-md-rollout-ii2oas-control-plane-6dzrv, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-md-rollout-ii2oas-control-plane-6dzrv, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-md-rollout-ii2oas-control-plane-6dzrv, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-2kpzz, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-4dl7l, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-6q2js, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "md-rollout" test spec
INFO: Deleting namespace md-rollout-akrcoi
STEP: Redacting sensitive information from logs


• [SLOW TEST:1766.229 seconds]
... skipping 58 lines ...
STEP: Fetching activity logs took 583.776452ms
STEP: Dumping all the Cluster API resources in the "kcp-adoption-6090wv" namespace
STEP: Deleting cluster kcp-adoption-6090wv/kcp-adoption-czytob
STEP: Deleting cluster kcp-adoption-czytob
INFO: Waiting for the Cluster kcp-adoption-6090wv/kcp-adoption-czytob to be deleted
STEP: Waiting for cluster kcp-adoption-czytob to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-dlgls, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-adoption-czytob-control-plane-0, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-r829z, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-49dqv, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-adoption-czytob-control-plane-0, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-fdkfm, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-pzhcp, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-adoption-czytob-control-plane-0, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-adoption-czytob-control-plane-0, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-adoption" test spec
INFO: Deleting namespace kcp-adoption-6090wv
STEP: Redacting sensitive information from logs


• [SLOW TEST:589.252 seconds]
... skipping 90 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-mk8sd, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-awje9v-control-plane-hqg55, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-controller-manager-mhc-remediation-awje9v-control-plane-w8kt6, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-awje9v-control-plane-w8kt6, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-rmrb7, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-awje9v-control-plane-cqskt, container kube-scheduler
STEP: Got error while iterating over activity logs for resource group capz-e2e-ocxrlc: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000827131s
STEP: Dumping all the Cluster API resources in the "mhc-remediation-9sjhtu" namespace
STEP: Deleting cluster mhc-remediation-9sjhtu/mhc-remediation-awje9v
STEP: Deleting cluster mhc-remediation-awje9v
INFO: Waiting for the Cluster mhc-remediation-9sjhtu/mhc-remediation-awje9v to be deleted
STEP: Waiting for cluster mhc-remediation-awje9v to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-nqvkv, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-awje9v-control-plane-cqskt, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-6qsfx, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-awje9v-control-plane-w8kt6, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-awje9v-control-plane-hqg55, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-mk8sd, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-bd2f9, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-awje9v-control-plane-cqskt, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-q7srp, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-awje9v-control-plane-cqskt, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-awje9v-control-plane-cqskt, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-rmrb7, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-awje9v-control-plane-hqg55, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-awje9v-control-plane-w8kt6, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-awje9v-control-plane-hqg55, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-fnztr, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-awje9v-control-plane-w8kt6, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-9w77t, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-awje9v-control-plane-hqg55, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-8j8cb, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-cqbsh, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-jd22l, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-awje9v-control-plane-w8kt6, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
INFO: Deleting namespace mhc-remediation-9sjhtu
STEP: Redacting sensitive information from logs


• [SLOW TEST:1447.019 seconds]
... skipping 60 lines ...
Nov 16 19:09:27.661: INFO: Collecting boot logs for AzureMachine machine-pool-a6nwrf-control-plane-b5ggv

Nov 16 19:09:28.347: INFO: Collecting logs for node win-p-win000002 in cluster machine-pool-a6nwrf in namespace machine-pool-74j4az

Nov 16 19:10:09.732: INFO: Collecting boot logs for VMSS instance 2 of scale set machine-pool-a6nwrf-mp-0

Failed to get logs for machine pool machine-pool-a6nwrf-mp-0, cluster machine-pool-74j4az/machine-pool-a6nwrf: [running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1]
Nov 16 19:10:09.992: INFO: Collecting logs for node win-p-win000002 in cluster machine-pool-a6nwrf in namespace machine-pool-74j4az

Nov 16 19:11:21.893: INFO: Collecting boot logs for VMSS instance 0 of scale set win-p-win

STEP: Redacting sensitive information from logs

... skipping 4 lines ...
  Should successfully exercise machine pools
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:202
    Should successfully create a cluster with machine pool machines [AfterEach]
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/e2e/machine_pool.go:76

    Test Panicked
    runtime error: index out of range [1] with length 1
    /usr/local/go/src/runtime/panic.go:88

    Full Stack Trace
    sigs.k8s.io/cluster-api-provider-azure/test/e2e.AzureLogCollector.CollectMachinePoolLog(0x2595320, 0xc0000560d0, 0x25afe80, 0xc00020eaf0, 0xc00018b898, 0xc00210d0e0, 0x55, 0xc00210d080, 0xc0004fcc20)
    	/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_logcollector.go:104 +0x7be
    sigs.k8s.io/cluster-api/test/framework.(*clusterProxy).CollectWorkloadClusterLogs(0xc000a04680, 0x2595320, 0xc0000560d0, 0xc001fee228, 0x13, 0xc001fee210, 0x13, 0xc0006fe1e0, 0x2c)
... skipping 92 lines ...
Nov 16 19:04:42.138: INFO: Collecting boot logs for AzureMachine md-scale-xd0qj4-md-0-zcjfb

Nov 16 19:04:42.431: INFO: Collecting logs for node 10.1.0.6 in cluster md-scale-xd0qj4 in namespace md-scale-kkglj5

Nov 16 19:05:33.269: INFO: Collecting boot logs for AzureMachine md-scale-xd0qj4-md-win-tzmrq

Failed to get logs for machine md-scale-xd0qj4-md-win-55fdcd748d-7dkx8, cluster md-scale-kkglj5/md-scale-xd0qj4: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov 16 19:05:33.525: INFO: Collecting logs for node 10.1.0.5 in cluster md-scale-xd0qj4 in namespace md-scale-kkglj5

Nov 16 19:07:04.471: INFO: Collecting boot logs for AzureMachine md-scale-xd0qj4-md-win-4jswn

Failed to get logs for machine md-scale-xd0qj4-md-win-55fdcd748d-fvcrb, cluster md-scale-kkglj5/md-scale-xd0qj4: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster md-scale-kkglj5/md-scale-xd0qj4 kube-system pod logs
STEP: Fetching kube-system pod logs took 225.568687ms
STEP: Dumping workload cluster md-scale-kkglj5/md-scale-xd0qj4 Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-rvdbl, container calico-node
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-pjgh2, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/calico-node-windows-4dfbz, container calico-node-felix
... skipping 8 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-windows-dxc6r, container calico-node-felix
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-7ktbg, container coredns
STEP: Creating log watcher for controller kube-system/kube-scheduler-md-scale-xd0qj4-control-plane-n75ck, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-hrtbl, container kube-proxy
STEP: Creating log watcher for controller kube-system/etcd-md-scale-xd0qj4-control-plane-n75ck, container etcd
STEP: Creating log watcher for controller kube-system/kube-apiserver-md-scale-xd0qj4-control-plane-n75ck, container kube-apiserver
STEP: Got error while iterating over activity logs for resource group capz-e2e-07do5a: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000977111s
STEP: Dumping all the Cluster API resources in the "md-scale-kkglj5" namespace
STEP: Deleting cluster md-scale-kkglj5/md-scale-xd0qj4
STEP: Deleting cluster md-scale-xd0qj4
INFO: Waiting for the Cluster md-scale-kkglj5/md-scale-xd0qj4 to be deleted
STEP: Waiting for cluster md-scale-xd0qj4 to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-rvdbl, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-pjgh2, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-hrtbl, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-md-scale-xd0qj4-control-plane-n75ck, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-mf6mm, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-4dfbz, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-md-scale-xd0qj4-control-plane-n75ck, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-wkbsv, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-4dfbz, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-tvxw6, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-8ctph, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-7ktbg, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-md-scale-xd0qj4-control-plane-n75ck, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-dxc6r, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-jqfjw, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-dxc6r, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-md-scale-xd0qj4-control-plane-n75ck, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "md-scale" test spec
INFO: Deleting namespace md-scale-kkglj5
STEP: Redacting sensitive information from logs


• [SLOW TEST:1890.266 seconds]
... skipping 56 lines ...
STEP: Dumping logs from the "node-drain-pf7sm1" workload cluster
STEP: Dumping workload cluster node-drain-5eh81q/node-drain-pf7sm1 logs
Nov 16 19:23:49.969: INFO: Collecting logs for node node-drain-pf7sm1-control-plane-9q8js in cluster node-drain-pf7sm1 in namespace node-drain-5eh81q

Nov 16 19:26:00.111: INFO: Collecting boot logs for AzureMachine node-drain-pf7sm1-control-plane-9q8js

Failed to get logs for machine node-drain-pf7sm1-control-plane-hknnk, cluster node-drain-5eh81q/node-drain-pf7sm1: dialing public load balancer at node-drain-pf7sm1-33b60576.northcentralus.cloudapp.azure.com: dial tcp 52.159.88.209:22: connect: connection timed out
STEP: Dumping workload cluster node-drain-5eh81q/node-drain-pf7sm1 kube-system pod logs
STEP: Fetching kube-system pod logs took 187.50641ms
STEP: Dumping workload cluster node-drain-5eh81q/node-drain-pf7sm1 Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-pt7zq, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/etcd-node-drain-pf7sm1-control-plane-9q8js, container etcd
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-n2l2d, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-5nzsb, container calico-node
STEP: Creating log watcher for controller kube-system/kube-controller-manager-node-drain-pf7sm1-control-plane-9q8js, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-apiserver-node-drain-pf7sm1-control-plane-9q8js, container kube-apiserver
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-trzww, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-r6zhz, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-node-drain-pf7sm1-control-plane-9q8js, container kube-scheduler
STEP: Got error while iterating over activity logs for resource group capz-e2e-q0v78w: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001042661s
STEP: Dumping all the Cluster API resources in the "node-drain-5eh81q" namespace
STEP: Deleting cluster node-drain-5eh81q/node-drain-pf7sm1
STEP: Deleting cluster node-drain-pf7sm1
INFO: Waiting for the Cluster node-drain-5eh81q/node-drain-pf7sm1 to be deleted
STEP: Waiting for cluster node-drain-pf7sm1 to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-5nzsb, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-pt7zq, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-node-drain-pf7sm1-control-plane-9q8js, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-node-drain-pf7sm1-control-plane-9q8js, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-node-drain-pf7sm1-control-plane-9q8js, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-r6zhz, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-node-drain-pf7sm1-control-plane-9q8js, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-n2l2d, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-trzww, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "node-drain" test spec
INFO: Deleting namespace node-drain-5eh81q
STEP: Redacting sensitive information from logs


• [SLOW TEST:2027.137 seconds]
... skipping 139 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-rpfz5, container calico-node
STEP: Creating log watcher for controller kube-system/etcd-clusterctl-upgrade-dljgky-control-plane-w8cnp, container etcd
STEP: Dumping workload cluster clusterctl-upgrade-n5qv8z/clusterctl-upgrade-dljgky Azure activity log
STEP: Creating log watcher for controller kube-system/kube-scheduler-clusterctl-upgrade-dljgky-control-plane-w8cnp, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-nvgfr, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-hsfn7, container calico-node
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 191.352746ms
STEP: Dumping all the Cluster API resources in the "clusterctl-upgrade-n5qv8z" namespace
STEP: Deleting cluster clusterctl-upgrade-n5qv8z/clusterctl-upgrade-dljgky
STEP: Deleting cluster clusterctl-upgrade-dljgky
INFO: Waiting for the Cluster clusterctl-upgrade-n5qv8z/clusterctl-upgrade-dljgky to be deleted
STEP: Waiting for cluster clusterctl-upgrade-dljgky to be deleted
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-6bdc78c4d4-q4lf5, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-clusterctl-upgrade-dljgky-control-plane-w8cnp, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-clusterctl-upgrade-dljgky-control-plane-w8cnp, container kube-apiserver: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-86b5f554dd-v82tv, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-clusterctl-upgrade-dljgky-control-plane-w8cnp, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-hhknm, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-nvgfr, container kube-proxy: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-865c969d7-q6dpr, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-wk42v, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-hsfn7, container calico-node: http2: client connection lost
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-d47f94765-4vt9h, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-rpfz5, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-lgz74, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-clusterctl-upgrade-dljgky-control-plane-w8cnp, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-8dqnm, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "clusterctl-upgrade" test spec
INFO: Deleting namespace clusterctl-upgrade-n5qv8z
STEP: Redacting sensitive information from logs


• [SLOW TEST:1844.271 seconds]
... skipping 185 lines ...
STEP: Dumping logs from the "k8s-upgrade-and-conformance-ka9n1q" workload cluster
STEP: Dumping workload cluster k8s-upgrade-and-conformance-0haeqr/k8s-upgrade-and-conformance-ka9n1q logs
Nov 16 19:51:20.411: INFO: INFO: Collecting logs for node k8s-upgrade-and-conformance-ka9n1q-control-plane-x6rbx in cluster k8s-upgrade-and-conformance-ka9n1q in namespace k8s-upgrade-and-conformance-0haeqr

Nov 16 19:53:30.799: INFO: INFO: Collecting boot logs for AzureMachine k8s-upgrade-and-conformance-ka9n1q-control-plane-x6rbx

Failed to get logs for machine k8s-upgrade-and-conformance-ka9n1q-control-plane-v9ht2, cluster k8s-upgrade-and-conformance-0haeqr/k8s-upgrade-and-conformance-ka9n1q: dialing public load balancer at k8s-upgrade-and-conformance-ka9n1q-b7fcde60.northcentralus.cloudapp.azure.com: dial tcp 157.55.161.236:22: connect: connection timed out
Nov 16 19:53:31.513: INFO: INFO: Collecting logs for node k8s-upgrade-and-conformance-ka9n1q-md-0-rjtng in cluster k8s-upgrade-and-conformance-ka9n1q in namespace k8s-upgrade-and-conformance-0haeqr

Nov 16 19:55:41.867: INFO: INFO: Collecting boot logs for AzureMachine k8s-upgrade-and-conformance-ka9n1q-md-0-rjtng

Failed to get logs for machine k8s-upgrade-and-conformance-ka9n1q-md-0-85cf9c8c47-42845, cluster k8s-upgrade-and-conformance-0haeqr/k8s-upgrade-and-conformance-ka9n1q: dialing public load balancer at k8s-upgrade-and-conformance-ka9n1q-b7fcde60.northcentralus.cloudapp.azure.com: dial tcp 157.55.161.236:22: connect: connection timed out
Nov 16 19:55:42.542: INFO: INFO: Collecting logs for node k8s-upgrade-and-conformance-ka9n1q-md-0-47wf9 in cluster k8s-upgrade-and-conformance-ka9n1q in namespace k8s-upgrade-and-conformance-0haeqr

Nov 16 19:57:52.943: INFO: INFO: Collecting boot logs for AzureMachine k8s-upgrade-and-conformance-ka9n1q-md-0-47wf9

Failed to get logs for machine k8s-upgrade-and-conformance-ka9n1q-md-0-85cf9c8c47-qs68b, cluster k8s-upgrade-and-conformance-0haeqr/k8s-upgrade-and-conformance-ka9n1q: dialing public load balancer at k8s-upgrade-and-conformance-ka9n1q-b7fcde60.northcentralus.cloudapp.azure.com: dial tcp 157.55.161.236:22: connect: connection timed out
Nov 16 19:57:53.695: INFO: INFO: Collecting logs for node k8s-upgrade-and-conformance-ka9n1q-mp-0000003 in cluster k8s-upgrade-and-conformance-ka9n1q in namespace k8s-upgrade-and-conformance-0haeqr

Nov 16 20:00:04.011: INFO: INFO: Collecting boot logs for VMSS instance 3 of scale set k8s-upgrade-and-conformance-ka9n1q-mp-0

Nov 16 20:00:04.862: INFO: INFO: Collecting logs for node k8s-upgrade-and-conformance-ka9n1q-mp-0000002 in cluster k8s-upgrade-and-conformance-ka9n1q in namespace k8s-upgrade-and-conformance-0haeqr

Nov 16 20:02:15.083: INFO: INFO: Collecting boot logs for VMSS instance 2 of scale set k8s-upgrade-and-conformance-ka9n1q-mp-0

Failed to get logs for machine pool k8s-upgrade-and-conformance-ka9n1q-mp-0, cluster k8s-upgrade-and-conformance-0haeqr/k8s-upgrade-and-conformance-ka9n1q: dialing public load balancer at k8s-upgrade-and-conformance-ka9n1q-b7fcde60.northcentralus.cloudapp.azure.com: dial tcp 157.55.161.236:22: connect: connection timed out
STEP: Dumping workload cluster k8s-upgrade-and-conformance-0haeqr/k8s-upgrade-and-conformance-ka9n1q kube-system pod logs
STEP: Creating log watcher for controller kube-system/calico-node-n28w4, container calico-node
STEP: Creating log watcher for controller kube-system/etcd-k8s-upgrade-and-conformance-ka9n1q-control-plane-x6rbx, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-g9kpl, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-4245w, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-cbts9, container calico-node
... skipping 8 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-fn6cm, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-n5wfj, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-v4mhz, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-47ck6, container coredns
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-s5l26, container coredns
STEP: Creating log watcher for controller kube-system/kube-scheduler-k8s-upgrade-and-conformance-ka9n1q-control-plane-x6rbx, container kube-scheduler
STEP: Got error while iterating over activity logs for resource group capz-e2e-pg81ph: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000647613s
STEP: Dumping all the Cluster API resources in the "k8s-upgrade-and-conformance-0haeqr" namespace
STEP: Deleting cluster k8s-upgrade-and-conformance-0haeqr/k8s-upgrade-and-conformance-ka9n1q
STEP: Deleting cluster k8s-upgrade-and-conformance-ka9n1q
INFO: Waiting for the Cluster k8s-upgrade-and-conformance-0haeqr/k8s-upgrade-and-conformance-ka9n1q to be deleted
STEP: Waiting for cluster k8s-upgrade-and-conformance-ka9n1q to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-4245w, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-k8s-upgrade-and-conformance-ka9n1q-control-plane-x6rbx, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-j5rdz, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-n5wfj, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-ngsh2, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-k8s-upgrade-and-conformance-ka9n1q-control-plane-x6rbx, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-v4mhz, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-k8s-upgrade-and-conformance-ka9n1q-control-plane-x6rbx, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-k8s-upgrade-and-conformance-ka9n1q-control-plane-x6rbx, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-5rf44, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-g9l2r, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-47ck6, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-cbts9, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-fn6cm, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "k8s-upgrade-and-conformance" test spec
INFO: Deleting namespace k8s-upgrade-and-conformance-0haeqr
STEP: Redacting sensitive information from logs


• [SLOW TEST:2940.711 seconds]
... skipping 11 lines ...
Summarizing 1 Failure:

[Panic!] Running the Cluster API E2E tests Should successfully exercise machine pools [AfterEach] Should successfully create a cluster with machine pool machines 
/usr/local/go/src/runtime/panic.go:88
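[Editor's note] The panicking frame is `AzureLogCollector.CollectMachinePoolLog` (azure_logcollector.go:104), which failed with `index out of range [1] with length 1` while collecting machine pool logs. A minimal sketch of the missing bounds check, assuming the collector indexes into a slice of VMSS instances; `instanceAt` and the instance slice are illustrative, not the actual CAPZ code:

```go
package main

import "fmt"

// instanceAt looks up a VMSS instance by index, returning ok=false instead of
// panicking when the index is outside the slice. The absence of a check like
// this is what produces "index out of range [1] with length 1".
func instanceAt(instances []string, i int) (string, bool) {
	if i < 0 || i >= len(instances) {
		return "", false
	}
	return instances[i], true
}

func main() {
	instances := []string{"vmss-instance-0"} // length 1
	if name, ok := instanceAt(instances, 1); ok {
		fmt.Println("collecting boot logs for", name)
	} else {
		fmt.Println("skipping log collection: no instance at index 1")
	}
}
```

With the guard in place, a mismatch between the expected and actual instance count degrades to a skipped log collection instead of panicking the AfterEach teardown.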

Ran 14 of 24 Specs in 9644.572 seconds
FAIL! -- 13 Passed | 1 Failed | 0 Pending | 10 Skipped


Ginkgo ran 1 suite in 2h42m11.729431034s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc
make[1]: *** [Makefile:176: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:184: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...