PR CecileRobertMichon: Enable node drain timeout CAPI test
Result: FAILURE
Tests: 1 failed / 11 succeeded
Started: 2021-07-01 18:36
Elapsed: 1h42m
Revision: a380086de24a469a211847abbff4c05052dfd661
Refs: 1465

Test Failures


capz-e2e Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time (1m0s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sRunning\sthe\sCluster\sAPI\sE2E\stests\sShould\ssuccessfully\sset\sand\suse\snode\sdrain\stimeout\sA\snode\sshould\sbe\sforcefully\sremoved\sif\sit\scannot\sbe\sdrained\sin\stime$'
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.0/e2e/node_drain_timeout.go:76
Expected success, but got an error:
    <*errors.withStack | 0xc0009be138>: {
        error: <*exec.ExitError | 0xc000378000>{
            ProcessState: {
                pid: 109992,
                status: 256,
                rusage: {
                    Utime: {Sec: 0, Usec: 386032},
                    Stime: {Sec: 0, Usec: 164169},
                    Maxrss: 365936,
                    Ixrss: 0,
                    Idrss: 0,
                    Isrss: 0,
                    Minflt: 17951,
                    Majflt: 0,
                    Nswap: 0,
                    Inblock: 0,
                    Oublock: 23520,
                    Msgsnd: 0,
                    Msgrcv: 0,
                    Nsignals: 0,
                    Nvcsw: 1385,
                    Nivcsw: 140,
                },
            },
            Stderr: nil,
        },
        stack: [0x17dcd5e, 0x17dd425, 0x193ff7c, 0x1afe2da, 0x1c29ce5, 0x8048e3, 0x8044fc, 0x803827, 0x80a7cf, 0x809e72, 0x819771, 0x819287, 0x818a77, 0x81b186, 0x828eb8, 0x828bf6, 0x1c5d937, 0x528b6f, 0x4748c1],
    }
    exit status 1
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.0/framework/clusterctl/clusterctl_helpers.go:241
				
Full stdout/stderr is available in junit.e2e_suite.1.xml



Passed tests: 11

Skipped tests: 11

Error lines from build-log.txt

... skipping 499 lines ...
STEP: Fetching activity logs took 499.028926ms
STEP: Dumping all the Cluster API resources in the "quick-start-mdyvad" namespace
STEP: Deleting cluster quick-start-mdyvad/quick-start-esomnr
STEP: Deleting cluster quick-start-esomnr
INFO: Waiting for the Cluster quick-start-mdyvad/quick-start-esomnr to be deleted
STEP: Waiting for cluster quick-start-esomnr to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-lv5vd, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-mj9nd, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-8f59968d4-48mfl, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-quick-start-esomnr-control-plane-tbvhr, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-8rtv2, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-bddfl, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-quick-start-esomnr-control-plane-tbvhr, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-b4pnp, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-quick-start-esomnr-control-plane-tbvhr, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-quick-start-esomnr-control-plane-tbvhr, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-5k74n, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "quick-start" test spec
INFO: Deleting namespace quick-start-mdyvad
STEP: Redacting sensitive information from logs


• [SLOW TEST:731.244 seconds]
... skipping 51 lines ...
STEP: Dumping logs from the "kcp-upgrade-wfzrxk" workload cluster
STEP: Dumping workload cluster kcp-upgrade-ck6qai/kcp-upgrade-wfzrxk logs
Jul  1 19:00:30.407: INFO: INFO: Collecting logs for node kcp-upgrade-wfzrxk-control-plane-jd8qm in cluster kcp-upgrade-wfzrxk in namespace kcp-upgrade-ck6qai

Jul  1 19:02:40.708: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-wfzrxk-control-plane-jd8qm

Failed to get logs for machine kcp-upgrade-wfzrxk-control-plane-ts5c9, cluster kcp-upgrade-ck6qai/kcp-upgrade-wfzrxk: dialing public load balancer at kcp-upgrade-wfzrxk-9d458f77.northeurope.cloudapp.azure.com: dial tcp 137.116.238.26:22: connect: connection timed out
Jul  1 19:02:42.190: INFO: INFO: Collecting logs for node kcp-upgrade-wfzrxk-md-0-l9zc2 in cluster kcp-upgrade-wfzrxk in namespace kcp-upgrade-ck6qai

Jul  1 19:04:51.780: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-wfzrxk-md-0-l9zc2

Failed to get logs for machine kcp-upgrade-wfzrxk-md-0-599fcbbccc-6cl9g, cluster kcp-upgrade-ck6qai/kcp-upgrade-wfzrxk: dialing public load balancer at kcp-upgrade-wfzrxk-9d458f77.northeurope.cloudapp.azure.com: dial tcp 137.116.238.26:22: connect: connection timed out
STEP: Dumping workload cluster kcp-upgrade-ck6qai/kcp-upgrade-wfzrxk kube-system pod logs
STEP: Fetching kube-system pod logs took 945.628487ms
STEP: Dumping workload cluster kcp-upgrade-ck6qai/kcp-upgrade-wfzrxk Azure activity log
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-wfzrxk-control-plane-jd8qm, container etcd
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-wfzrxk-control-plane-jd8qm, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-node-8xgsp, container calico-node
... skipping 111 lines ...
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-3uwa2b-control-plane-m5ccn, container etcd
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-3uwa2b-control-plane-5kgff, container kube-apiserver
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-3uwa2b-control-plane-5kgff, container etcd
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-3uwa2b-control-plane-qhqql, container etcd
STEP: Creating log watcher for controller kube-system/coredns-66bff467f8-lzzlp, container coredns
STEP: Creating log watcher for controller kube-system/coredns-66bff467f8-m8755, container coredns
STEP: Got error while iterating over activity logs for resource group capz-e2e-jy64ch: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001178679s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-46ddsj" namespace
STEP: Deleting cluster kcp-upgrade-46ddsj/kcp-upgrade-3uwa2b
STEP: Deleting cluster kcp-upgrade-3uwa2b
INFO: Waiting for the Cluster kcp-upgrade-46ddsj/kcp-upgrade-3uwa2b to be deleted
STEP: Waiting for cluster kcp-upgrade-3uwa2b to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-66bff467f8-lzzlp, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-799fb94867-k88v2, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-3uwa2b-control-plane-qhqql, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-3uwa2b-control-plane-qhqql, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-3uwa2b-control-plane-m5ccn, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-3uwa2b-control-plane-5kgff, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-3uwa2b-control-plane-m5ccn, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-3uwa2b-control-plane-m5ccn, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-rz2jb, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-3uwa2b-control-plane-qhqql, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-hjnsc, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-3uwa2b-control-plane-5kgff, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-3uwa2b-control-plane-5kgff, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-cnn5n, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-66bff467f8-m8755, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-jd7xq, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-wp87j, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-7dnbf, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-3uwa2b-control-plane-5kgff, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-hcwxt, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-3uwa2b-control-plane-qhqql, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-dhzdr, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-3uwa2b-control-plane-m5ccn, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-46ddsj
STEP: Redacting sensitive information from logs


• [SLOW TEST:2947.602 seconds]
... skipping 60 lines ...
Jul  1 19:24:04.052: INFO: INFO: Collecting boot logs for AzureMachine md-upgrades-naalyb-md-0-813wnm-cdsxv

Jul  1 19:24:04.465: INFO: INFO: Collecting logs for node md-upgrades-naalyb-md-0-brtkb in cluster md-upgrades-naalyb in namespace md-upgrades-c8xjo4

Jul  1 19:24:08.917: INFO: INFO: Collecting boot logs for AzureMachine md-upgrades-naalyb-md-0-brtkb

Failed to get logs for machine md-upgrades-naalyb-md-0-5c499695d6-lp258, cluster md-upgrades-c8xjo4/md-upgrades-naalyb: dialing from control plane to target node at md-upgrades-naalyb-md-0-brtkb: ssh: rejected: connect failed (Connection refused)
STEP: Dumping workload cluster md-upgrades-c8xjo4/md-upgrades-naalyb kube-system pod logs
STEP: Fetching kube-system pod logs took 1.0154737s
STEP: Dumping workload cluster md-upgrades-c8xjo4/md-upgrades-naalyb Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-j7brz, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-66bff467f8-6xhbf, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-4rpxx, container kube-proxy
... skipping 4 lines ...
STEP: Creating log watcher for controller kube-system/coredns-66bff467f8-s4t9x, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-h6bwk, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-xdhtl, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-md-upgrades-naalyb-control-plane-796dv, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-799fb94867-2nwbc, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-apiserver-md-upgrades-naalyb-control-plane-796dv, container kube-apiserver
STEP: Error starting logs stream for pod kube-system/kube-proxy-xdhtl, container kube-proxy: Get https://10.1.0.5:10250/containerLogs/kube-system/kube-proxy-xdhtl/kube-proxy?follow=true: dial tcp 10.1.0.5:10250: connect: connection refused
STEP: Error starting logs stream for pod kube-system/calico-node-j7brz, container calico-node: Get https://10.1.0.5:10250/containerLogs/kube-system/calico-node-j7brz/calico-node?follow=true: dial tcp 10.1.0.5:10250: connect: connection refused
STEP: Fetching activity logs took 690.060982ms
STEP: Dumping all the Cluster API resources in the "md-upgrades-c8xjo4" namespace
STEP: Deleting cluster md-upgrades-c8xjo4/md-upgrades-naalyb
STEP: Deleting cluster md-upgrades-naalyb
INFO: Waiting for the Cluster md-upgrades-c8xjo4/md-upgrades-naalyb to be deleted
STEP: Waiting for cluster md-upgrades-naalyb to be deleted
... skipping 96 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-4jb8k, container calico-node
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-e6ietn-control-plane-m974p, container kube-scheduler
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-e6ietn-control-plane-2mb28, container etcd
STEP: Creating log watcher for controller kube-system/calico-node-cg5l2, container calico-node
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-e6ietn-control-plane-2mb28, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-2qbdm, container calico-node
STEP: Got error while iterating over activity logs for resource group capz-e2e-4t1bmd: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000497553s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-dpstul" namespace
STEP: Deleting cluster kcp-upgrade-dpstul/kcp-upgrade-e6ietn
STEP: Deleting cluster kcp-upgrade-e6ietn
INFO: Waiting for the Cluster kcp-upgrade-dpstul/kcp-upgrade-e6ietn to be deleted
STEP: Waiting for cluster kcp-upgrade-e6ietn to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-e6ietn-control-plane-m974p, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-e6ietn-control-plane-2mb28, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-e6ietn-control-plane-m974p, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-e6ietn-control-plane-2mb28, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-dq96k, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-pghl6, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-66bff467f8-g2xqv, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-4jb8k, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-e6ietn-control-plane-2mb28, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-e6ietn-control-plane-sjjp2, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-xslmp, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-e6ietn-control-plane-sjjp2, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-e6ietn-control-plane-m974p, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-x65zs, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-xkrmn, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-66bff467f8-z2tm2, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-e6ietn-control-plane-sjjp2, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-e6ietn-control-plane-sjjp2, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-e6ietn-control-plane-2mb28, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-2qbdm, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-e6ietn-control-plane-m974p, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-799fb94867-n7rzr, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-cg5l2, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-dpstul
STEP: Redacting sensitive information from logs


• [SLOW TEST:2462.035 seconds]
... skipping 177 lines ...
STEP: Fetching activity logs took 533.334217ms
STEP: Dumping all the Cluster API resources in the "mhc-remediation-klpvws" namespace
STEP: Deleting cluster mhc-remediation-klpvws/mhc-remediation-oeatf6
STEP: Deleting cluster mhc-remediation-oeatf6
INFO: Waiting for the Cluster mhc-remediation-klpvws/mhc-remediation-oeatf6 to be deleted
STEP: Waiting for cluster mhc-remediation-oeatf6 to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-9tk9c, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-96jx5, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-oeatf6-control-plane-nsrvf, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-x5p7v, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-oeatf6-control-plane-nsrvf, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-oeatf6-control-plane-nsrvf, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-8f59968d4-5cjhv, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-cvrxn, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-khlbf, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-oeatf6-control-plane-nsrvf, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-l8kck, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
INFO: Deleting namespace mhc-remediation-klpvws
STEP: Redacting sensitive information from logs


• [SLOW TEST:1034.385 seconds]
... skipping 50 lines ...
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-adoption-cd87xk-control-plane-0, container kube-scheduler
STEP: Dumping workload cluster kcp-adoption-qr2w4n/kcp-adoption-cd87xk Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-f9fd979d6-w762z, container coredns
STEP: Creating log watcher for controller kube-system/coredns-f9fd979d6-fw4wl, container coredns
STEP: Creating log watcher for controller kube-system/etcd-kcp-adoption-cd87xk-control-plane-0, container etcd
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-adoption-cd87xk-control-plane-0, container kube-apiserver
STEP: Error starting logs stream for pod kube-system/calico-kube-controllers-8f59968d4-mssv4, container calico-kube-controllers: container "calico-kube-controllers" in pod "calico-kube-controllers-8f59968d4-mssv4" is waiting to start: ContainerCreating
STEP: Fetching activity logs took 487.896983ms
STEP: Dumping all the Cluster API resources in the "kcp-adoption-qr2w4n" namespace
STEP: Deleting cluster kcp-adoption-qr2w4n/kcp-adoption-cd87xk
STEP: Deleting cluster kcp-adoption-cd87xk
INFO: Waiting for the Cluster kcp-adoption-qr2w4n/kcp-adoption-cd87xk to be deleted
STEP: Waiting for cluster kcp-adoption-cd87xk to be deleted
... skipping 101 lines ...
STEP: Fetching activity logs took 1.129044636s
STEP: Dumping all the Cluster API resources in the "mhc-remediation-sltckn" namespace
STEP: Deleting cluster mhc-remediation-sltckn/mhc-remediation-09fzzs
STEP: Deleting cluster mhc-remediation-09fzzs
INFO: Waiting for the Cluster mhc-remediation-sltckn/mhc-remediation-09fzzs to be deleted
STEP: Waiting for cluster mhc-remediation-09fzzs to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-n85jl, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-8f59968d4-nw2lr, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-fhwnx, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-09fzzs-control-plane-wdw82, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-6cnk2, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-vzpxv, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-09fzzs-control-plane-2vs5z, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-09fzzs-control-plane-wdw82, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-x74k7, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-09fzzs-control-plane-7c256, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-5879s, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-dk8rs, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-09fzzs-control-plane-2vs5z, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-09fzzs-control-plane-7c256, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-09fzzs-control-plane-2vs5z, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-pfs2g, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-09fzzs-control-plane-2vs5z, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-09fzzs-control-plane-wdw82, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-09fzzs-control-plane-wdw82, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-09fzzs-control-plane-7c256, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-09fzzs-control-plane-7c256, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-qmq7q, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-87g9l, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
INFO: Deleting namespace mhc-remediation-sltckn
STEP: Redacting sensitive information from logs


• [SLOW TEST:1522.288 seconds]
... skipping 13 lines ...
INFO: Creating event watcher for namespace "node-drain-36ynud"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "node-drain-8i5iby" using the "node-drain" template (Kubernetes v1.19.7, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster node-drain-8i5iby --infrastructure (default) --kubernetes-version v1.19.7 --control-plane-machine-count 3 --worker-machine-count 1 --flavor node-drain
INFO: Applying the cluster template yaml to the cluster
error: error validating "STDIN": error validating data: ValidationError(KubeadmControlPlane.spec): unknown field "nodeDrainTimeout" in io.x-k8s.cluster.controlplane.v1alpha4.KubeadmControlPlane.spec; if you choose to ignore these errors, turn validation off with --validate=false
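The validation error above is a manifest-shape problem, not an infrastructure failure: in Cluster API v1alpha4 (the v0.4.0 test module used here), per-machine settings such as `nodeDrainTimeout` moved from the top level of `KubeadmControlPlane.spec` into `spec.machineTemplate`, so a template that still uses the older top-level placement is rejected by validation. A minimal sketch of the rejected versus accepted shapes, assuming that is the cause (the resource names are illustrative, not taken from this run's template):

```yaml
# Rejected by v1alpha4 validation: nodeDrainTimeout at the top of spec.
apiVersion: controlplane.cluster.x-k8s.io/v1alpha4
kind: KubeadmControlPlane
metadata:
  name: example-control-plane        # illustrative name
spec:
  replicas: 3
  nodeDrainTimeout: 30s              # unknown field here in v1alpha4
---
# Accepted shape: per-machine fields live under spec.machineTemplate.
apiVersion: controlplane.cluster.x-k8s.io/v1alpha4
kind: KubeadmControlPlane
metadata:
  name: example-control-plane
spec:
  replicas: 3
  machineTemplate:
    nodeDrainTimeout: 30s
    infrastructureRef:
      apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
      kind: AzureMachineTemplate
      name: example-control-plane
```

Passing `--validate=false`, as the error message suggests, would only hide the problem: the misplaced field would be silently dropped and the drain timeout never applied.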

STEP: Redacting sensitive information from logs


• Failure [60.650 seconds]
Running the Cluster API E2E tests
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:33
  Should successfully set and use node drain timeout
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:173
    A node should be forcefully removed if it cannot be drained in time [It]
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.0/e2e/node_drain_timeout.go:76

    Expected success, but got an error:
        <*errors.withStack | 0xc0009be138>: {
            error: <*exec.ExitError | 0xc000378000>{
                ProcessState: {
                    pid: 109992,
                    status: 256,
                    rusage: {
                        Utime: {Sec: 0, Usec: 386032},
                        Stime: {Sec: 0, Usec: 164169},
... skipping 190 lines ...
STEP: Fetching activity logs took 616.489027ms
STEP: Dumping all the Cluster API resources in the "machine-pool-2zt2m4" namespace
STEP: Deleting cluster machine-pool-2zt2m4/machine-pool-qx30gd
STEP: Deleting cluster machine-pool-qx30gd
INFO: Waiting for the Cluster machine-pool-2zt2m4/machine-pool-qx30gd to be deleted
STEP: Waiting for cluster machine-pool-qx30gd to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-66bff467f8-hkpgw, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-w8jfq, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-799fb94867-4lr67, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-machine-pool-qx30gd-control-plane-nzzsn, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-machine-pool-qx30gd-control-plane-nzzsn, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-66bff467f8-srqzj, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-q696s, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-machine-pool-qx30gd-control-plane-nzzsn, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-f5dlx, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-machine-pool-qx30gd-control-plane-nzzsn, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-wnrsv, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "machine-pool" test spec
INFO: Deleting namespace machine-pool-2zt2m4
STEP: Redacting sensitive information from logs


• [SLOW TEST:1530.176 seconds]
... skipping 70 lines ...
STEP: Creating log watcher for controller kube-system/kube-controller-manager-md-scale-rbmq3a-control-plane-qzn2b, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-node-4mzmz, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-76vtk, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-67c6r, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-f9fd979d6-qf8mn, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-d4w7t, container kube-proxy
STEP: Error starting logs stream for pod kube-system/calico-node-67c6r, container calico-node: pods "md-scale-rbmq3a-md-0-kl7k5" not found
STEP: Error starting logs stream for pod kube-system/calico-node-74v86, container calico-node: pods "md-scale-rbmq3a-md-0-wrzqm" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-76vtk, container kube-proxy: pods "md-scale-rbmq3a-md-0-wrzqm" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-d4w7t, container kube-proxy: pods "md-scale-rbmq3a-md-0-kl7k5" not found
STEP: Fetching activity logs took 592.837709ms
STEP: Dumping all the Cluster API resources in the "md-scale-gwlj1z" namespace
STEP: Deleting cluster md-scale-gwlj1z/md-scale-rbmq3a
STEP: Deleting cluster md-scale-rbmq3a
INFO: Waiting for the Cluster md-scale-gwlj1z/md-scale-rbmq3a to be deleted
STEP: Waiting for cluster md-scale-rbmq3a to be deleted
... skipping 9 lines ...
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:161
    Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.0/e2e/md_scale.go:69
------------------------------
STEP: Tearing down the management cluster
W0701 20:17:53.962278   23535 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: watch of *v1.Event ended with: very short watch: pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: Unexpected watch close - watch lasted less than a second and no items received
E0701 20:17:54.926060   23535 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://127.0.0.1:46101/api/v1/namespaces/node-drain-36ynud/events?resourceVersion=30054": dial tcp 127.0.0.1:46101: connect: connection refused



Summarizing 1 Failure:

[Fail] Running the Cluster API E2E tests Should successfully set and use node drain timeout [It] A node should be forcefully removed if it cannot be drained in time 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.0/framework/clusterctl/clusterctl_helpers.go:241

Ran 12 of 23 Specs in 5743.704 seconds
FAIL! -- 11 Passed | 1 Failed | 0 Pending | 11 Skipped


Ginkgo ran 1 suite in 1h37m12.352842558s
Test Suite Failed
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...