PR: CecileRobertMichon: Refactor infrastructure-azure templates required by CAPI tests to match CAPD
Result: FAILURE
Tests: 1 failed / 10 succeeded
Started: 2021-06-21 17:17
Elapsed: 2h11m
Revision: e834444c79579755b103b2799afb4811a7f6ae87
Refs: 1423

Test Failures


capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation (1h2m)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sRunning\sthe\sCluster\sAPI\sE2E\stests\sShould\ssuccessfully\sremediate\sunhealthy\smachines\swith\sMachineHealthCheck\sShould\ssuccessfully\strigger\sKCP\sremediation$'
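
The same spec can also be re-run through the repository's own Makefile target (the one seen failing at the bottom of this log) by setting a Ginkgo focus expression. A minimal sketch, assuming the CAPZ Makefile forwards a GINKGO_FOCUS variable into the ginkgo invocation of test-e2e-run, as current versions of the repository do:

    GINKGO_FOCUS='Should successfully trigger KCP remediation' make test-e2e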
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.0-beta.1/e2e/mhc_remediations.go:101
Failed to get controller-runtime client
Unexpected error:
    <*url.Error | 0xc001080750>: {
        Op: "Get",
        URL: "https://mhc-remediation-o4rtud-36b8c8ea.eastus.cloudapp.azure.com:6443/api?timeout=32s",
        Err: <*http.httpError | 0xc0011942d0>{
            err: "net/http: request canceled (Client.Timeout exceeded while awaiting headers)",
            timeout: true,
        },
    }
    Get "https://mhc-remediation-o4rtud-36b8c8ea.eastus.cloudapp.azure.com:6443/api?timeout=32s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
occurred
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.0-beta.1/framework/cluster_proxy.go:171
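
The url.Error above is the generic shape of a client-side HTTP timeout: the test framework's controller-runtime client never received response headers from the workload cluster's API server within its 32s budget. A minimal, self-contained Go sketch of the same probe (hypothetical diagnostic code, not part of the suite; the endpoint URL and 32-second timeout are copied from the error above):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Same 32s budget the failing discovery request used.
        client := &http.Client{
            Timeout: 32 * time.Second,
            Transport: &http.Transport{
                // Reachability probe only, so skip certificate verification.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://mhc-remediation-o4rtud-36b8c8ea.eastus.cloudapp.azure.com:6443/api")
        if err != nil {
            // A hang here reproduces "Client.Timeout exceeded while awaiting headers".
            fmt.Println("API server unreachable:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("API server responded:", resp.Status)
    }

If this probe also times out, the problem is network-level reachability of the cluster endpoint (expected here, since the spec deliberately remediates control-plane machines), not the test framework itself.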
				
Full stdout/stderr for this spec is captured in junit.e2e_suite.1.xml.




Error lines from build-log.txt

... skipping 568 lines ...
STEP: Dumping logs from the "kcp-upgrade-7lhqkf" workload cluster
STEP: Dumping workload cluster kcp-upgrade-h8ssil/kcp-upgrade-7lhqkf logs
Jun 21 17:42:11.142: INFO: INFO: Collecting logs for node kcp-upgrade-7lhqkf-control-plane-v4m2r in cluster kcp-upgrade-7lhqkf in namespace kcp-upgrade-h8ssil

Jun 21 17:44:21.909: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-7lhqkf-control-plane-v4m2r

Failed to get logs for machine kcp-upgrade-7lhqkf-control-plane-k8n6b, cluster kcp-upgrade-h8ssil/kcp-upgrade-7lhqkf: dialing public load balancer at kcp-upgrade-7lhqkf-b807a43b.eastus.cloudapp.azure.com: dial tcp 104.41.128.241:22: connect: connection timed out
Jun 21 17:44:22.897: INFO: INFO: Collecting logs for node kcp-upgrade-7lhqkf-md-0-ss44s in cluster kcp-upgrade-7lhqkf in namespace kcp-upgrade-h8ssil

Jun 21 17:46:32.981: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-7lhqkf-md-0-ss44s

Failed to get logs for machine kcp-upgrade-7lhqkf-md-0-c5899b7c4-x42l5, cluster kcp-upgrade-h8ssil/kcp-upgrade-7lhqkf: dialing public load balancer at kcp-upgrade-7lhqkf-b807a43b.eastus.cloudapp.azure.com: dial tcp 104.41.128.241:22: connect: connection timed out
STEP: Dumping workload cluster kcp-upgrade-h8ssil/kcp-upgrade-7lhqkf kube-system pod logs
STEP: Fetching kube-system pod logs took 313.348238ms
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-7lhqkf-control-plane-v4m2r, container kube-apiserver
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-7lhqkf-control-plane-v4m2r, container etcd
STEP: Dumping workload cluster kcp-upgrade-h8ssil/kcp-upgrade-7lhqkf Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-799fb94867-vmtpw, container calico-kube-controllers
... skipping 8 lines ...
STEP: Fetching activity logs took 1.03005372s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-h8ssil" namespace
STEP: Deleting cluster kcp-upgrade-h8ssil/kcp-upgrade-7lhqkf
STEP: Deleting cluster kcp-upgrade-7lhqkf
INFO: Waiting for the Cluster kcp-upgrade-h8ssil/kcp-upgrade-7lhqkf to be deleted
STEP: Waiting for cluster kcp-upgrade-7lhqkf to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-qjltz, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-7lhqkf-control-plane-v4m2r, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-9qwzb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-66bff467f8-5rhbc, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-rq6lg, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-7lhqkf-control-plane-v4m2r, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-799fb94867-vmtpw, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-7lhqkf-control-plane-v4m2r, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-7lhqkf-control-plane-v4m2r, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-66bff467f8-4ph45, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-wmgq7, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-h8ssil
STEP: Redacting sensitive information from logs


• [SLOW TEST:1748.206 seconds]
... skipping 91 lines ...
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-acj8uf-control-plane-79zr6, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-acj8uf-control-plane-ldfwp, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-acj8uf-control-plane-gsf9p, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-acj8uf-control-plane-ldfwp, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-acj8uf-control-plane-gsf9p, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-265tj, container kube-proxy
STEP: Got error while iterating over activity logs for resource group capz-e2e-fnav4r: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000880569s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-psdaax" namespace
STEP: Deleting cluster kcp-upgrade-psdaax/kcp-upgrade-acj8uf
STEP: Deleting cluster kcp-upgrade-acj8uf
INFO: Waiting for the Cluster kcp-upgrade-psdaax/kcp-upgrade-acj8uf to be deleted
STEP: Waiting for cluster kcp-upgrade-acj8uf to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-9dckt, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-8z9pw, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-z997g, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-acj8uf-control-plane-79zr6, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-acj8uf-control-plane-gsf9p, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-799fb94867-b5swg, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-66bff467f8-4kzd2, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-knhjx, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-acj8uf-control-plane-ldfwp, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-qdfwb, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-acj8uf-control-plane-gsf9p, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-acj8uf-control-plane-79zr6, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-acj8uf-control-plane-ldfwp, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-s9jm8, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-66bff467f8-vqsg4, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-acj8uf-control-plane-79zr6, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-265tj, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-acj8uf-control-plane-79zr6, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-acj8uf-control-plane-gsf9p, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-7nkkj, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-acj8uf-control-plane-ldfwp, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-acj8uf-control-plane-gsf9p, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-acj8uf-control-plane-ldfwp, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-psdaax
STEP: Redacting sensitive information from logs


• [SLOW TEST:2742.213 seconds]
... skipping 60 lines ...
Jun 21 18:07:38.142: INFO: INFO: Collecting boot logs for AzureMachine md-upgrades-7hqmaw-md-0-8sjru2-m6lbm

Jun 21 18:07:38.433: INFO: INFO: Collecting logs for node md-upgrades-7hqmaw-md-0-lwn8c in cluster md-upgrades-7hqmaw in namespace md-upgrades-zniuc1

Jun 21 18:07:43.854: INFO: INFO: Collecting boot logs for AzureMachine md-upgrades-7hqmaw-md-0-lwn8c

Failed to get logs for machine md-upgrades-7hqmaw-md-0-7c4fd7b8dd-vr7tz, cluster md-upgrades-zniuc1/md-upgrades-7hqmaw: dialing from control plane to target node at md-upgrades-7hqmaw-md-0-lwn8c: ssh: rejected: connect failed (Connection refused)
STEP: Dumping workload cluster md-upgrades-zniuc1/md-upgrades-7hqmaw kube-system pod logs
STEP: Fetching kube-system pod logs took 363.264641ms
STEP: Dumping workload cluster md-upgrades-zniuc1/md-upgrades-7hqmaw Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-xhlht, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-66bff467f8-d988b, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-275bg, container kube-proxy
... skipping 4 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-dsjpm, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-md-upgrades-7hqmaw-control-plane-gwn9x, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-scheduler-md-upgrades-7hqmaw-control-plane-gwn9x, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-hvx6w, container calico-node
STEP: Creating log watcher for controller kube-system/etcd-md-upgrades-7hqmaw-control-plane-gwn9x, container etcd
STEP: Creating log watcher for controller kube-system/calico-node-zrps5, container calico-node
STEP: Error starting logs stream for pod kube-system/kube-proxy-275bg, container kube-proxy: Get https://10.1.0.5:10250/containerLogs/kube-system/kube-proxy-275bg/kube-proxy?follow=true: dial tcp 10.1.0.5:10250: connect: connection refused
STEP: Error starting logs stream for pod kube-system/calico-node-zrps5, container calico-node: Get https://10.1.0.5:10250/containerLogs/kube-system/calico-node-zrps5/calico-node?follow=true: dial tcp 10.1.0.5:10250: connect: connection refused
STEP: Fetching activity logs took 703.106947ms
STEP: Dumping all the Cluster API resources in the "md-upgrades-zniuc1" namespace
STEP: Deleting cluster md-upgrades-zniuc1/md-upgrades-7hqmaw
STEP: Deleting cluster md-upgrades-7hqmaw
INFO: Waiting for the Cluster md-upgrades-zniuc1/md-upgrades-7hqmaw to be deleted
STEP: Waiting for cluster md-upgrades-7hqmaw to be deleted
... skipping 96 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-6c5br, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-92n4s, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-w24mw, container calico-node
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-wb05ne-control-plane-b4vw4, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-wb05ne-control-plane-r28bj, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-5jd6b, container kube-proxy
STEP: Got error while iterating over activity logs for resource group capz-e2e-1siv5t: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000483715s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-wu9ata" namespace
STEP: Deleting cluster kcp-upgrade-wu9ata/kcp-upgrade-wb05ne
STEP: Deleting cluster kcp-upgrade-wb05ne
INFO: Waiting for the Cluster kcp-upgrade-wu9ata/kcp-upgrade-wb05ne to be deleted
STEP: Waiting for cluster kcp-upgrade-wb05ne to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-4jp2m, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-8cbcs, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-wb05ne-control-plane-b4vw4, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-wb05ne-control-plane-7l4nx, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-wb05ne-control-plane-r28bj, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-66bff467f8-7s5sv, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-wb05ne-control-plane-7l4nx, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-wb05ne-control-plane-b4vw4, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-wb05ne-control-plane-r28bj, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-6c5br, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-92n4s, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5jd6b, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-vpm9d, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-799fb94867-mdqjz, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-wb05ne-control-plane-r28bj, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-wb05ne-control-plane-7l4nx, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-wb05ne-control-plane-7l4nx, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-wb05ne-control-plane-b4vw4, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-w24mw, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-wb05ne-control-plane-b4vw4, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-wb05ne-control-plane-r28bj, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-7ctpb, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-66bff467f8-cvh7k, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-wu9ata
STEP: Redacting sensitive information from logs


• [SLOW TEST:2893.448 seconds]
... skipping 92 lines ...
STEP: Fetching activity logs took 558.526802ms
STEP: Dumping all the Cluster API resources in the "self-hosted-kybljg" namespace
STEP: Deleting cluster self-hosted-kybljg/self-hosted-v743jg
STEP: Deleting cluster self-hosted-v743jg
INFO: Waiting for the Cluster self-hosted-kybljg/self-hosted-v743jg to be deleted
STEP: Waiting for cluster self-hosted-v743jg to be deleted
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-86ddc47bd5-8dpdr, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-hltrz, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-self-hosted-v743jg-control-plane-xqfh6, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-self-hosted-v743jg-control-plane-xqfh6, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-l7zvv, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-qfzv8, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-self-hosted-v743jg-control-plane-xqfh6, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-t6x66, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-self-hosted-v743jg-control-plane-xqfh6, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-g7s7b, container kube-proxy: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-75894bdf5c-kj8zk, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-8f59968d4-b5755, container calico-kube-controllers: http2: client connection lost
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-568bc896b5-2fv6n, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-568bc896b5-2fv6n, container kube-rbac-proxy: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-74fc446c7f-rvzvp, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-8pkqh, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "self-hosted" test spec
INFO: Deleting namespace self-hosted-kybljg
STEP: Redacting sensitive information from logs


• [SLOW TEST:1046.353 seconds]
... skipping 56 lines ...
STEP: Fetching activity logs took 544.968084ms
STEP: Dumping all the Cluster API resources in the "kcp-adoption-957r0z" namespace
STEP: Deleting cluster kcp-adoption-957r0z/kcp-adoption-wvrfr0
STEP: Deleting cluster kcp-adoption-wvrfr0
INFO: Waiting for the Cluster kcp-adoption-957r0z/kcp-adoption-wvrfr0 to be deleted
STEP: Waiting for cluster kcp-adoption-wvrfr0 to be deleted
STEP: Error starting logs stream for pod kube-system/coredns-f9fd979d6-wj2js, container coredns: container "coredns" in pod "coredns-f9fd979d6-wj2js" is waiting to start: ContainerCreating
STEP: Error starting logs stream for pod kube-system/coredns-f9fd979d6-gmjkb, container coredns: container "coredns" in pod "coredns-f9fd979d6-gmjkb" is waiting to start: ContainerCreating
STEP: Error starting logs stream for pod kube-system/calico-kube-controllers-8f59968d4-2599d, container calico-kube-controllers: container "calico-kube-controllers" in pod "calico-kube-controllers-8f59968d4-2599d" is waiting to start: ContainerCreating
STEP: Error starting logs stream for pod kube-system/calico-node-rz2m6, container calico-node: container "calico-node" in pod "calico-node-rz2m6" is waiting to start: PodInitializing
STEP: Deleting namespace used for hosting the "kcp-adoption" test spec
INFO: Deleting namespace kcp-adoption-957r0z
STEP: Redacting sensitive information from logs


• [SLOW TEST:609.958 seconds]
... skipping 155 lines ...
STEP: Creating log watcher for controller kube-system/kube-apiserver-md-scale-9mrq3e-control-plane-fwjw9, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-node-zwrvz, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-f9fd979d6-p8mmz, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-hthkw, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-h7j8v, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-9nggt, container calico-node
STEP: Error starting logs stream for pod kube-system/calico-node-zwrvz, container calico-node: pods "md-scale-9mrq3e-md-0-qb4sb" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-dznqq, container kube-proxy: pods "md-scale-9mrq3e-md-0-82hqw" not found
STEP: Error starting logs stream for pod kube-system/calico-node-9nggt, container calico-node: pods "md-scale-9mrq3e-md-0-82hqw" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-hthkw, container kube-proxy: pods "md-scale-9mrq3e-md-0-qb4sb" not found
STEP: Fetching activity logs took 529.024717ms
STEP: Dumping all the Cluster API resources in the "md-scale-64ujf1" namespace
STEP: Deleting cluster md-scale-64ujf1/md-scale-9mrq3e
STEP: Deleting cluster md-scale-9mrq3e
INFO: Waiting for the Cluster md-scale-64ujf1/md-scale-9mrq3e to be deleted
STEP: Waiting for cluster md-scale-9mrq3e to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-p79vm, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-md-scale-9mrq3e-control-plane-fwjw9, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-rgnkr, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-2l7mk, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-p8mmz, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-md-scale-9mrq3e-control-plane-fwjw9, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-md-scale-9mrq3e-control-plane-fwjw9, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-8f59968d4-gl47h, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-md-scale-9mrq3e-control-plane-fwjw9, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-pqdd9, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-h7j8v, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "md-scale" test spec
INFO: Deleting namespace md-scale-64ujf1
STEP: Redacting sensitive information from logs


• [SLOW TEST:1637.140 seconds]
... skipping 105 lines ...
STEP: Creating log watcher for controller kube-system/coredns-66bff467f8-ggtsp, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-95phv, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-machine-pool-2b6rxo-control-plane-p6nt2, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-pnsvt, container kube-proxy
STEP: Creating log watcher for controller kube-system/etcd-machine-pool-2b6rxo-control-plane-p6nt2, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-xt94m, container kube-proxy
STEP: Error starting logs stream for pod kube-system/calico-node-nfsbs, container calico-node: pods "machine-pool-2b6rxo-mp-0000002" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-95phv, container kube-proxy: pods "machine-pool-2b6rxo-mp-0000002" not found
STEP: Fetching activity logs took 566.888628ms
STEP: Dumping all the Cluster API resources in the "machine-pool-2lmyq4" namespace
STEP: Deleting cluster machine-pool-2lmyq4/machine-pool-2b6rxo
STEP: Deleting cluster machine-pool-2b6rxo
INFO: Waiting for the Cluster machine-pool-2lmyq4/machine-pool-2b6rxo to be deleted
STEP: Waiting for cluster machine-pool-2b6rxo to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-66bff467f8-6dbxz, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-xwpdf, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-66bff467f8-ggtsp, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-machine-pool-2b6rxo-control-plane-p6nt2, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-799fb94867-4vgr4, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-zfkvd, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-machine-pool-2b6rxo-control-plane-p6nt2, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-pnsvt, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-machine-pool-2b6rxo-control-plane-p6nt2, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-xt94m, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-machine-pool-2b6rxo-control-plane-p6nt2, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "machine-pool" test spec
INFO: Deleting namespace machine-pool-2lmyq4
STEP: Redacting sensitive information from logs


• [SLOW TEST:1915.673 seconds]
... skipping 105 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:33
  Should successfully remediate unhealthy machines with MachineHealthCheck
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:125
    Should successfully trigger KCP remediation [It]
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.0-beta.1/e2e/mhc_remediations.go:101

    Failed to get controller-runtime client
    Unexpected error:
        <*url.Error | 0xc001080750>: {
            Op: "Get",
            URL: "https://mhc-remediation-o4rtud-36b8c8ea.eastus.cloudapp.azure.com:6443/api?timeout=32s",
            Err: <*http.httpError | 0xc0011942d0>{
                err: "net/http: request canceled (Client.Timeout exceeded while awaiting headers)",
                timeout: true,
            },
... skipping 52 lines ...
    	/usr/local/go/src/testing/testing.go:1193 +0xef
    created by testing.(*T).Run
    	/usr/local/go/src/testing/testing.go:1238 +0x2b3
------------------------------
STEP: Tearing down the management cluster
W0621 19:26:47.068266   23789 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: watch of *v1.Event ended with: very short watch: pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Unexpected watch close - watch lasted less than a second and no items received
E0621 19:26:48.560567   23789 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://127.0.0.1:43285/api/v1/namespaces/mhc-remediation-eehxy0/events?resourceVersion=37342": dial tcp 127.0.0.1:43285: connect: connection refused



Summarizing 1 Failure:

[Fail] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck [It] Should successfully trigger KCP remediation 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.0-beta.1/framework/cluster_proxy.go:171

Ran 11 of 22 Specs in 7415.757 seconds
FAIL! -- 10 Passed | 1 Failed | 0 Pending | 11 Skipped


Ginkgo ran 1 suite in 2h5m17.283548518s
Test Suite Failed
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...