PR CecileRobertMichon: Enable node drain timeout CAPI test
Result: FAILURE
Tests: 1 failed / 11 succeeded
Started: 2021-07-01 20:54
Elapsed: 2h5m
Revision: b34671d4d50397ff49b4d227e1d05e0db7af10c4
Refs: 1465

Test Failures


capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec in a HA cluster using scale in rollout Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd (35m3s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sRunning\sthe\sCluster\sAPI\sE2E\stests\sRunning\sthe\sKCP\supgrade\sspec\sin\sa\sHA\scluster\susing\sscale\sin\srollout\sShould\ssuccessfully\supgrade\sKubernetes\,\sDNS\,\skube\-proxy\,\sand\setcd$'
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.0/e2e/kcp_upgrade.go:112
Test Panicked
/usr/local/go/src/runtime/panic.go:212

Panic: runtime error: invalid memory address or nil pointer dereference

Full stack:
sigs.k8s.io/cluster-api-provider-azure/test/e2e.collectVMBootLog(0x24beb20, 0xc000126018, 0xc000223900, 0xc0017244e0, 0x5b, 0xc000031f20, 0x26)
	/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_logcollector.go:340 +0xd4
sigs.k8s.io/cluster-api-provider-azure/test/e2e.AzureLogCollector.CollectMachineLog(0x24beb20, 0xc000126018, 0x24d8e18, 0xc0001f68c0, 0xc000556240, 0xc0017244e0, 0x5b, 0xc001724480, 0xc0005e8b80)
	/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_logcollector.go:72 +0x2eb
sigs.k8s.io/cluster-api/test/framework.(*clusterProxy).CollectWorkloadClusterLogs(0xc0003248c0, 0x24beb20, 0xc000126018, 0xc00158e690, 0x12, 0xc00158e678, 0x12, 0xc000a660f0, 0x2b)
	/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.0/framework/cluster_proxy.go:238 +0x3be
sigs.k8s.io/cluster-api-provider-azure/test/e2e.(*AzureClusterProxy).CollectWorkloadClusterLogs(0xc000a35d80, 0x24beb20, 0xc000126018, 0xc00158e690, 0x12, 0xc00158e678, 0x12, 0xc000a660f0, 0x2b)
	/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/e2e_suite_test.go:127 +0x174
sigs.k8s.io/cluster-api/test/e2e.dumpSpecResourcesAndCleanup(0x24beb20, 0xc000126018, 0x21d49a9, 0xb, 0x24dcf80, 0xc000a35d80, 0xc001118480, 0xf, 0xc000c3d340, 0xc000491f10, ...)
	/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.0/e2e/common.go:66 +0x1b2
sigs.k8s.io/cluster-api/test/e2e.KCPUpgradeSpec.func3()
	/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.0/e2e/kcp_upgrade.go:114 +0xc7
github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0009013e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.16.4/internal/leafnodes/runner.go:113 +0xa3
github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0009013e0, 0xc000a711ee, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.16.4/internal/leafnodes/runner.go:64 +0x15c
github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc000330808, 0x247bae0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.16.4/internal/leafnodes/setup_nodes.go:15 +0x87
github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc000a716a0, 0xc0000c03c0, 0x247bae0, 0xc00016c8c0)
	/home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.16.4/internal/spec/spec.go:180 +0x3cd
github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc0000c03c0, 0x0, 0x247bae0, 0xc00016c8c0)
	/home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.16.4/internal/spec/spec.go:218 +0x809
github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc0000c03c0, 0x247bae0, 0xc00016c8c0)
	/home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.16.4/internal/spec/spec.go:138 +0xf2
github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc00090e420, 0xc0000c03c0, 0x1)
	/home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.16.4/internal/specrunner/spec_runner.go:200 +0x111
github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc00090e420, 0x1)
	/home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.16.4/internal/specrunner/spec_runner.go:170 +0x147
github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc00090e420, 0xc000917ef0)
	/home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.16.4/internal/specrunner/spec_runner.go:66 +0x117
github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc00019cc40, 0x7efd50252238, 0xc00091e300, 0x21cf392, 0x8, 0xc000879780, 0x2, 0x2, 0x24c7f98, 0xc00016c8c0, ...)
	/home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.16.4/internal/suite/suite.go:79 +0x546
github.com/onsi/ginkgo.runSpecsWithCustomReporters(0x247d880, 0xc00091e300, 0x21cf392, 0x8, 0xc000879760, 0x2, 0x2, 0x2)
	/home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.16.4/ginkgo_dsl.go:238 +0x218
github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x247d880, 0xc00091e300, 0x21cf392, 0x8, 0xc000098f30, 0x1, 0x1, 0xc000515f48)
	/home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.16.4/ginkgo_dsl.go:221 +0x136
sigs.k8s.io/cluster-api-provider-azure/test/e2e.TestE2E(0xc00091e300)
	/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/e2e_suite_test.go:258 +0x1f7
testing.tRunner(0xc00091e300, 0x22bb0f8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
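The panic originates in `collectVMBootLog` (azure_logcollector.go:340), which dereferences a field of the boot-diagnostics response while dumping logs for machines that were already unreachable. A minimal sketch of the usual fix, a nil guard that returns an error instead of panicking, is below; the `bootDiagnostics` type and `bootLogURI` helper are hypothetical stand-ins for illustration, not the actual capz signatures.

```go
// Hypothetical sketch: guard the boot-diagnostics response before
// dereferencing, so an unavailable VM yields an error rather than a
// nil-pointer panic during log collection.
package main

import (
	"errors"
	"fmt"
)

// bootDiagnostics stands in for an SDK response whose serial-console
// blob URI may be nil (e.g. diagnostics not enabled, VM deallocated).
type bootDiagnostics struct {
	SerialConsoleLogBlobURI *string
}

// bootLogURI returns the blob URI, or an error when the response or the
// URI field is nil — the case that panicked in the failing run.
func bootLogURI(d *bootDiagnostics) (string, error) {
	if d == nil || d.SerialConsoleLogBlobURI == nil {
		return "", errors.New("boot diagnostics not available for VM")
	}
	return *d.SerialConsoleLogBlobURI, nil
}

func main() {
	// Nil response: the guard converts a would-be panic into an error.
	if _, err := bootLogURI(nil); err != nil {
		fmt.Println("error:", err)
	}

	// Populated response: the URI is returned normally.
	uri := "https://example.blob.core.windows.net/bootdiag/serial.log"
	if got, err := bootLogURI(&bootDiagnostics{SerialConsoleLogBlobURI: &uri}); err == nil {
		fmt.Println("uri:", got)
	}
}
```

With a guard like this, the AfterEach log-collection step would report a per-machine collection failure (as it already does for the SSH timeouts above) instead of aborting the whole spec with a panic.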
				
stdout/stderr from junit.e2e_suite.1.xml



11 Passed Tests

11 Skipped Tests

Error lines from build-log.txt

... skipping 561 lines ...
STEP: Dumping logs from the "kcp-upgrade-phylth" workload cluster
STEP: Dumping workload cluster kcp-upgrade-ztw9e7/kcp-upgrade-phylth logs
Jul  1 21:17:14.084: INFO: INFO: Collecting logs for node kcp-upgrade-phylth-control-plane-r5sf4 in cluster kcp-upgrade-phylth in namespace kcp-upgrade-ztw9e7

Jul  1 21:19:24.921: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-phylth-control-plane-r5sf4

Failed to get logs for machine kcp-upgrade-phylth-control-plane-dwp5g, cluster kcp-upgrade-ztw9e7/kcp-upgrade-phylth: dialing public load balancer at kcp-upgrade-phylth-b362e34a.northeurope.cloudapp.azure.com: dial tcp 40.69.7.100:22: connect: connection timed out
Jul  1 21:19:26.092: INFO: INFO: Collecting logs for node kcp-upgrade-phylth-md-0-wtrg9 in cluster kcp-upgrade-phylth in namespace kcp-upgrade-ztw9e7

Jul  1 21:21:35.989: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-phylth-md-0-wtrg9

Failed to get logs for machine kcp-upgrade-phylth-md-0-64cd76566-q5qp5, cluster kcp-upgrade-ztw9e7/kcp-upgrade-phylth: dialing public load balancer at kcp-upgrade-phylth-b362e34a.northeurope.cloudapp.azure.com: dial tcp 40.69.7.100:22: connect: connection timed out
STEP: Dumping workload cluster kcp-upgrade-ztw9e7/kcp-upgrade-phylth kube-system pod logs
STEP: Fetching kube-system pod logs took 883.297255ms
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-phylth-control-plane-r5sf4, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/coredns-66bff467f8-qfdxn, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-djc8v, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-phylth-control-plane-r5sf4, container kube-apiserver
... skipping 111 lines ...
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-zzqigr-control-plane-szs5j, container kube-scheduler
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-zzqigr-control-plane-996ph, container etcd
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-zzqigr-control-plane-szs5j, container etcd
STEP: Creating log watcher for controller kube-system/calico-node-hf5zb, container calico-node
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-zzqigr-control-plane-996ph, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-zzqigr-control-plane-996ph, container kube-apiserver
STEP: Got error while iterating over activity logs for resource group capz-e2e-ke6tc5: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001199992s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-qnjzuw" namespace
STEP: Deleting cluster kcp-upgrade-qnjzuw/kcp-upgrade-zzqigr
STEP: Deleting cluster kcp-upgrade-zzqigr
INFO: Waiting for the Cluster kcp-upgrade-qnjzuw/kcp-upgrade-zzqigr to be deleted
STEP: Waiting for cluster kcp-upgrade-zzqigr to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-zzqigr-control-plane-szs5j, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-zzqigr-control-plane-996ph, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-4vmr2, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-zzqigr-control-plane-krk7j, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-6gl5m, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-4p449, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-799fb94867-g4zrd, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-g4l7j, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-2rh8f, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-zzqigr-control-plane-szs5j, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-zzqigr-control-plane-996ph, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-zzqigr-control-plane-996ph, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-zzqigr-control-plane-996ph, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-zzqigr-control-plane-krk7j, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-zzqigr-control-plane-krk7j, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-zzqigr-control-plane-szs5j, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-hf5zb, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-66bff467f8-2zz99, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-66bff467f8-m7m5k, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-7b8hf, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-zzqigr-control-plane-krk7j, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-qqlq2, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-zzqigr-control-plane-szs5j, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-qnjzuw
STEP: Redacting sensitive information from logs


• [SLOW TEST:2712.254 seconds]
... skipping 56 lines ...
Jul  1 21:39:06.641: INFO: INFO: Collecting boot logs for AzureMachine md-upgrades-gvonuo-control-plane-zklkg

Jul  1 21:39:07.825: INFO: INFO: Collecting logs for node md-upgrades-gvonuo-md-0-mtvx2 in cluster md-upgrades-gvonuo in namespace md-upgrades-81lq1v

Jul  1 21:39:13.242: INFO: INFO: Collecting boot logs for AzureMachine md-upgrades-gvonuo-md-0-mtvx2

Failed to get logs for machine md-upgrades-gvonuo-md-0-5db5f4bc7c-rlmlb, cluster md-upgrades-81lq1v/md-upgrades-gvonuo: dialing from control plane to target node at md-upgrades-gvonuo-md-0-mtvx2: ssh: rejected: connect failed (Connection refused)
Jul  1 21:39:13.649: INFO: INFO: Collecting logs for node md-upgrades-gvonuo-md-0-m3vuot-98dmh in cluster md-upgrades-gvonuo in namespace md-upgrades-81lq1v

Jul  1 21:39:25.357: INFO: INFO: Collecting boot logs for AzureMachine md-upgrades-gvonuo-md-0-m3vuot-98dmh

STEP: Dumping workload cluster md-upgrades-81lq1v/md-upgrades-gvonuo kube-system pod logs
STEP: Fetching kube-system pod logs took 941.049195ms
... skipping 8 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-65gpt, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-66bff467f8-5sf5m, container coredns
STEP: Creating log watcher for controller kube-system/etcd-md-upgrades-gvonuo-control-plane-zklkg, container etcd
STEP: Creating log watcher for controller kube-system/kube-apiserver-md-upgrades-gvonuo-control-plane-zklkg, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-ldqmt, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-5znwt, container calico-node
STEP: Error starting logs stream for pod kube-system/calico-node-nbtm2, container calico-node: Get https://10.1.0.5:10250/containerLogs/kube-system/calico-node-nbtm2/calico-node?follow=true: dial tcp 10.1.0.5:10250: connect: connection refused
STEP: Error starting logs stream for pod kube-system/kube-proxy-65gpt, container kube-proxy: Get https://10.1.0.5:10250/containerLogs/kube-system/kube-proxy-65gpt/kube-proxy?follow=true: dial tcp 10.1.0.5:10250: connect: connection refused
STEP: Fetching activity logs took 530.817808ms
STEP: Dumping all the Cluster API resources in the "md-upgrades-81lq1v" namespace
STEP: Deleting cluster md-upgrades-81lq1v/md-upgrades-gvonuo
STEP: Deleting cluster md-upgrades-gvonuo
INFO: Waiting for the Cluster md-upgrades-81lq1v/md-upgrades-gvonuo to be deleted
STEP: Waiting for cluster md-upgrades-gvonuo to be deleted
... skipping 74 lines ...
  Running the KCP upgrade spec in a HA cluster using scale in rollout
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:85
    Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd [AfterEach]
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.0/e2e/kcp_upgrade.go:75

    Test Panicked
    runtime error: invalid memory address or nil pointer dereference
    /usr/local/go/src/runtime/panic.go:212

    Full Stack Trace
    sigs.k8s.io/cluster-api-provider-azure/test/e2e.collectVMBootLog(0x24beb20, 0xc000126018, 0xc000223900, 0xc0017244e0, 0x5b, 0xc000031f20, 0x26)
    	/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_logcollector.go:340 +0xd4
    sigs.k8s.io/cluster-api-provider-azure/test/e2e.AzureLogCollector.CollectMachineLog(0x24beb20, 0xc000126018, 0x24d8e18, 0xc0001f68c0, 0xc000556240, 0xc0017244e0, 0x5b, 0xc001724480, 0xc0005e8b80)
... skipping 125 lines ...
STEP: Fetching activity logs took 512.589271ms
STEP: Dumping all the Cluster API resources in the "self-hosted-xndc01" namespace
STEP: Deleting cluster self-hosted-xndc01/self-hosted-h5ngzr
STEP: Deleting cluster self-hosted-h5ngzr
INFO: Waiting for the Cluster self-hosted-xndc01/self-hosted-h5ngzr to be deleted
STEP: Waiting for cluster self-hosted-h5ngzr to be deleted
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-744674697-lvvr5, container kube-rbac-proxy: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-6d8546bc55-rtlmq, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-7f8f7648f-zjmt8, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-fsc9c, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-self-hosted-h5ngzr-control-plane-q94pk, container kube-apiserver: http2: client connection lost
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-858f8ff867-nsp2j, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-self-hosted-h5ngzr-control-plane-q94pk, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-xc4sh, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-whjgf, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-ks4gh, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-8f59968d4-n7p47, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-self-hosted-h5ngzr-control-plane-q94pk, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-x79mf, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-self-hosted-h5ngzr-control-plane-q94pk, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-q6lsv, container coredns: http2: client connection lost
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-744674697-lvvr5, container manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "self-hosted" test spec
INFO: Deleting namespace self-hosted-xndc01
STEP: Redacting sensitive information from logs


• [SLOW TEST:1056.872 seconds]
... skipping 248 lines ...
STEP: Fetching activity logs took 1.082585122s
STEP: Dumping all the Cluster API resources in the "mhc-remediation-sp9ca8" namespace
STEP: Deleting cluster mhc-remediation-sp9ca8/mhc-remediation-r0yg9m
STEP: Deleting cluster mhc-remediation-r0yg9m
INFO: Waiting for the Cluster mhc-remediation-sp9ca8/mhc-remediation-r0yg9m to be deleted
STEP: Waiting for cluster mhc-remediation-r0yg9m to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-r0yg9m-control-plane-mq7c6, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-r0yg9m-control-plane-mq7c6, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-klp9r, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-8f59968d4-nqjdn, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-r0yg9m-control-plane-vjg44, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-f7qml, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-r0yg9m-control-plane-8scs8, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-d77ts, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-s58hs, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-r0yg9m-control-plane-mq7c6, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-r0yg9m-control-plane-8scs8, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-dsnpj, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-2gvgg, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-r0yg9m-control-plane-8scs8, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-r0yg9m-control-plane-vjg44, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-r0yg9m-control-plane-vjg44, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-trc7x, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-sdv5r, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-r0yg9m-control-plane-vjg44, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-r0yg9m-control-plane-8scs8, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-drwjk, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-r0yg9m-control-plane-mq7c6, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-wwvr7, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
INFO: Deleting namespace mhc-remediation-sp9ca8
STEP: Redacting sensitive information from logs


• [SLOW TEST:1790.346 seconds]
... skipping 108 lines ...
STEP: Creating log watcher for controller kube-system/coredns-66bff467f8-q4vk7, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-wschh, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-66bff467f8-78sqs, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-vsv6n, container kube-proxy
STEP: Creating log watcher for controller kube-system/etcd-machine-pool-0bom36-control-plane-mxgjs, container etcd
STEP: Creating log watcher for controller kube-system/kube-scheduler-machine-pool-0bom36-control-plane-mxgjs, container kube-scheduler
STEP: Error starting logs stream for pod kube-system/calico-node-9zgc6, container calico-node: pods "machine-pool-0bom36-mp-0000002" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-j8drg, container kube-proxy: pods "machine-pool-0bom36-mp-0000002" not found
STEP: Fetching activity logs took 653.843736ms
STEP: Dumping all the Cluster API resources in the "machine-pool-rnotn6" namespace
STEP: Deleting cluster machine-pool-rnotn6/machine-pool-0bom36
STEP: Deleting cluster machine-pool-0bom36
INFO: Waiting for the Cluster machine-pool-rnotn6/machine-pool-0bom36 to be deleted
STEP: Waiting for cluster machine-pool-0bom36 to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-4wmtq, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-wschh, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-machine-pool-0bom36-control-plane-mxgjs, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-66bff467f8-q4vk7, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-bcmdf, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-machine-pool-0bom36-control-plane-mxgjs, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vsv6n, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-799fb94867-5pn56, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-machine-pool-0bom36-control-plane-mxgjs, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-66bff467f8-78sqs, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-machine-pool-0bom36-control-plane-mxgjs, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "machine-pool" test spec
INFO: Deleting namespace machine-pool-rnotn6
STEP: Redacting sensitive information from logs


• [SLOW TEST:1415.574 seconds]
... skipping 72 lines ...
STEP: Creating log watcher for controller kube-system/etcd-md-scale-wxgozz-control-plane-dg255, container etcd
STEP: Creating log watcher for controller kube-system/calico-node-wblq9, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-qgwqm, container calico-node
STEP: Creating log watcher for controller kube-system/kube-apiserver-md-scale-wxgozz-control-plane-dg255, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-controller-manager-md-scale-wxgozz-control-plane-dg255, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/coredns-f9fd979d6-579gj, container coredns
STEP: Error starting logs stream for pod kube-system/calico-node-ppnlz, container calico-node: pods "md-scale-wxgozz-md-0-hlxvq" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-x6gh7, container kube-proxy: pods "md-scale-wxgozz-md-0-hlxvq" not found
STEP: Error starting logs stream for pod kube-system/calico-node-wblq9, container calico-node: pods "md-scale-wxgozz-md-0-7vtgt" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-wb8xp, container kube-proxy: pods "md-scale-wxgozz-md-0-7vtgt" not found
STEP: Fetching activity logs took 570.328696ms
STEP: Dumping all the Cluster API resources in the "md-scale-f8231w" namespace
STEP: Deleting cluster md-scale-f8231w/md-scale-wxgozz
STEP: Deleting cluster md-scale-wxgozz
INFO: Waiting for the Cluster md-scale-f8231w/md-scale-wxgozz to be deleted
STEP: Waiting for cluster md-scale-wxgozz to be deleted
... skipping 61 lines ...
STEP: Dumping logs from the "node-drain-xkmkzf" workload cluster
STEP: Dumping workload cluster node-drain-yjmujh/node-drain-xkmkzf logs
Jul  1 22:48:06.325: INFO: INFO: Collecting logs for node node-drain-xkmkzf-control-plane-tdbgt in cluster node-drain-xkmkzf in namespace node-drain-yjmujh

Jul  1 22:50:16.697: INFO: INFO: Collecting boot logs for AzureMachine node-drain-xkmkzf-control-plane-tdbgt

Failed to get logs for machine node-drain-xkmkzf-control-plane-v8qrh, cluster node-drain-yjmujh/node-drain-xkmkzf: dialing public load balancer at node-drain-xkmkzf-7c4a08a9.northeurope.cloudapp.azure.com: dial tcp 137.135.224.170:22: connect: connection timed out
STEP: Dumping workload cluster node-drain-yjmujh/node-drain-xkmkzf kube-system pod logs
STEP: Fetching kube-system pod logs took 868.08025ms
STEP: Dumping workload cluster node-drain-yjmujh/node-drain-xkmkzf Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-f9fd979d6-vn6kg, container coredns
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-8f59968d4-svncd, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/etcd-node-drain-xkmkzf-control-plane-tdbgt, container etcd
... skipping 30 lines ...
Summarizing 1 Failure:

[Panic!] Running the Cluster API E2E tests Running the KCP upgrade spec in a HA cluster using scale in rollout [AfterEach] Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd 
/usr/local/go/src/runtime/panic.go:212

Ran 12 of 23 Specs in 7155.445 seconds
FAIL! -- 11 Passed | 1 Failed | 0 Pending | 11 Skipped


Ginkgo ran 1 suite in 2h0m36.624716388s
Test Suite Failed
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...