PR shysank: v1alpha4 -> v1beta1 clusterctl upgrade test
Result FAILURE
Tests 1 failed / 11 succeeded
Started 2021-11-17 21:24
Elapsed 1h49m
Revision ee7a6ed67cb87d871a770045a4904a1eda93ad60
Refs 1810

Test Failures


capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec in a HA cluster Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd 18m7s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sRunning\sthe\sCluster\sAPI\sE2E\stests\sRunning\sthe\sKCP\supgrade\sspec\sin\sa\sHA\scluster\sShould\ssuccessfully\supgrade\sKubernetes\,\sDNS\,\skube\-proxy\,\sand\setcd$'
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.1/e2e/kcp_upgrade.go:75
Expected success, but got an error:
    <errors.aggregate | len:1, cap:1>: [
        <*errors.StatusError | 0xc000525180>{
            ErrStatus: {
                TypeMeta: {Kind: "", APIVersion: ""},
                ListMeta: {
                    SelfLink: "",
                    ResourceVersion: "",
                    Continue: "",
                    RemainingItemCount: nil,
                },
                Status: "Failure",
                Message: "admission webhook \"validation.kubeadmcontrolplane.controlplane.cluster.x-k8s.io\" denied the request: KubeadmControlPlane.controlplane.cluster.x-k8s.io \"kcp-upgrade-dx82ao-control-plane\" is invalid: spec.kubeadmConfigSpec.clusterConfiguration.etcd.local.dataDir: Forbidden: cannot be modified",
                Reason: "Invalid",
                Details: {
                    Name: "kcp-upgrade-dx82ao-control-plane",
                    Group: "controlplane.cluster.x-k8s.io",
                    Kind: "KubeadmControlPlane",
                    UID: "",
                    Causes: [
                        {
                            Type: "FieldValueForbidden",
                            Message: "Forbidden: cannot be modified",
                            Field: "spec.kubeadmConfigSpec.clusterConfiguration.etcd.local.dataDir",
                        },
                        {
                            Type: "FieldValueForbidden",
                            Message: "Forbidden: cannot be modified",
                            Field: "spec.kubeadmConfigSpec.clusterConfiguration.etcd.local.dataDir",
                        },
                        {
                            Type: "FieldValueForbidden",
                            Message: "Forbidden: cannot be modified",
                            Field: "spec.kubeadmConfigSpec.clusterConfiguration.etcd.local.dataDir",
                        },
                    ],
                    RetryAfterSeconds: 0,
                },
                Code: 422,
            },
        },
    ]
    admission webhook "validation.kubeadmcontrolplane.controlplane.cluster.x-k8s.io" denied the request: KubeadmControlPlane.controlplane.cluster.x-k8s.io "kcp-upgrade-dx82ao-control-plane" is invalid: spec.kubeadmConfigSpec.clusterConfiguration.etcd.local.dataDir: Forbidden: cannot be modified
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.1/framework/controlplane_helpers.go:322
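The denial above is an immutability check: the KCP upgrade spec patches the KubeadmControlPlane to trigger the rolling upgrade, and the validating webhook rejected the patch because spec.kubeadmConfigSpec.clusterConfiguration.etcd.local.dataDir appeared to differ between the stored object and the update, a field the webhook treats as immutable. Below is a minimal Go sketch of that kind of check, for orientation only; the kcpSpec type and validateUpdate function are hypothetical stand-ins, not the Cluster API webhook source, and only the apimachinery field helpers are real.

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/validation/field"
)

// kcpSpec is a trimmed, hypothetical stand-in for the KubeadmControlPlane spec;
// only the field involved in this failure is kept.
type kcpSpec struct {
	etcdLocalDataDir string // spec.kubeadmConfigSpec.clusterConfiguration.etcd.local.dataDir
}

// validateUpdate rejects changes to a field treated as immutable, producing a
// FieldValueForbidden cause like the one reported above.
func validateUpdate(oldSpec, newSpec kcpSpec) field.ErrorList {
	var allErrs field.ErrorList
	p := field.NewPath("spec", "kubeadmConfigSpec", "clusterConfiguration", "etcd", "local", "dataDir")
	if oldSpec.etcdLocalDataDir != newSpec.etcdLocalDataDir {
		allErrs = append(allErrs, field.Forbidden(p, "cannot be modified"))
	}
	return allErrs
}

func main() {
	oldSpec := kcpSpec{etcdLocalDataDir: "/var/lib/etcd"}
	newSpec := kcpSpec{etcdLocalDataDir: ""} // any difference, e.g. a dropped default, triggers the rejection
	for _, e := range validateUpdate(oldSpec, newSpec) {
		fmt.Println(e) // spec.kubeadmConfigSpec.clusterConfiguration.etcd.local.dataDir: Forbidden: cannot be modified
	}
}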
				
stdout/stderr: junit.e2e_suite.3.xml



11 Passed Tests | 12 Skipped Tests

Error lines from build-log.txt

... skipping 473 lines ...
Nov 17 21:37:42.380: INFO: INFO: Collecting boot logs for AzureMachine quick-start-4t0xk0-md-0-jmw24

Nov 17 21:37:42.836: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster quick-start-4t0xk0 in namespace quick-start-s87hlo

Nov 17 21:38:15.094: INFO: INFO: Collecting boot logs for AzureMachine quick-start-4t0xk0-md-win-97lb4

Failed to get logs for machine quick-start-4t0xk0-md-win-68b689795b-5m7th, cluster quick-start-s87hlo/quick-start-4t0xk0: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov 17 21:40:18.408: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster quick-start-4t0xk0 in namespace quick-start-s87hlo

Nov 17 21:40:52.458: INFO: INFO: Collecting boot logs for AzureMachine quick-start-4t0xk0-md-win-zfqhj

Failed to get logs for machine quick-start-4t0xk0-md-win-68b689795b-cflbt, cluster quick-start-s87hlo/quick-start-4t0xk0: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster quick-start-s87hlo/quick-start-4t0xk0 kube-system pod logs
STEP: Fetching kube-system pod logs took 1.06885229s
STEP: Dumping workload cluster quick-start-s87hlo/quick-start-4t0xk0 Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-59x6q, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-windows-m4qfn, container calico-node-startup
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-vhlkt, container coredns
... skipping 14 lines ...
STEP: Fetching activity logs took 493.427515ms
STEP: Dumping all the Cluster API resources in the "quick-start-s87hlo" namespace
STEP: Deleting cluster quick-start-s87hlo/quick-start-4t0xk0
STEP: Deleting cluster quick-start-4t0xk0
INFO: Waiting for the Cluster quick-start-s87hlo/quick-start-4t0xk0 to be deleted
STEP: Waiting for cluster quick-start-4t0xk0 to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-m4qfn, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-9gzch, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-dlrg8, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-quick-start-4t0xk0-control-plane-f96dk, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-59x6q, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-h9n8m, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-quick-start-4t0xk0-control-plane-f96dk, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vg7dn, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-9wh2l, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-m4qfn, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-82srl, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-quick-start-4t0xk0-control-plane-f96dk, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-nhjwn, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-dfg89, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-vhlkt, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-nhjwn, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-quick-start-4t0xk0-control-plane-f96dk, container etcd: http2: client connection lost
STEP: Deleting namespace used for hosting the "quick-start" test spec
INFO: Deleting namespace quick-start-s87hlo
STEP: Redacting sensitive information from logs


• [SLOW TEST:933.524 seconds]
... skipping 66 lines ...
Nov 17 21:39:21.350: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-dx82ao-md-0-hnqmg

Nov 17 21:39:21.771: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster kcp-upgrade-dx82ao in namespace kcp-upgrade-6kjbu1

Nov 17 21:39:49.628: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-dx82ao-md-win-kfpc7

Failed to get logs for machine kcp-upgrade-dx82ao-md-win-6bd75bdcd-2456c, cluster kcp-upgrade-6kjbu1/kcp-upgrade-dx82ao: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov 17 21:39:50.049: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster kcp-upgrade-dx82ao in namespace kcp-upgrade-6kjbu1

Nov 17 21:40:16.419: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-dx82ao-md-win-j7ds5

Failed to get logs for machine kcp-upgrade-dx82ao-md-win-6bd75bdcd-9htm9, cluster kcp-upgrade-6kjbu1/kcp-upgrade-dx82ao: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster kcp-upgrade-6kjbu1/kcp-upgrade-dx82ao kube-system pod logs
STEP: Fetching kube-system pod logs took 1.046676324s
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-dx82ao-control-plane-nqk7w, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-n54jq, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-f2rcn, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-8tgc5, container coredns
... skipping 26 lines ...
STEP: Fetching activity logs took 600.627498ms
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-6kjbu1" namespace
STEP: Deleting cluster kcp-upgrade-6kjbu1/kcp-upgrade-dx82ao
STEP: Deleting cluster kcp-upgrade-dx82ao
INFO: Waiting for the Cluster kcp-upgrade-6kjbu1/kcp-upgrade-dx82ao to be deleted
STEP: Waiting for cluster kcp-upgrade-dx82ao to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-n54jq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-dx82ao-control-plane-np29k, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-8tgc5, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-dx82ao-control-plane-6h5kp, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-dx82ao-control-plane-6h5kp, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-cp7tt, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-srqjs, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-dx82ao-control-plane-np29k, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-jbdhp, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-dx82ao-control-plane-6h5kp, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-s9wgn, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-6k2zq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-5kpss, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-dx82ao-control-plane-nqk7w, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-cp7tt, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-4fbzc, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-7rfbp, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-f2rcn, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-dx82ao-control-plane-np29k, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-dx82ao-control-plane-nqk7w, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-7p5q5, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-jvkt4, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-dx82ao-control-plane-6h5kp, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-9vkrk, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5jrpf, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-5kpss, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-dx82ao-control-plane-nqk7w, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-dx82ao-control-plane-nqk7w, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-dx82ao-control-plane-np29k, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-6kjbu1
STEP: Redacting sensitive information from logs


• Failure [1087.469 seconds]
Running the Cluster API E2E tests
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:41
  Running the KCP upgrade spec in a HA cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:120
    Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd [It]
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.1/e2e/kcp_upgrade.go:75

    Expected success, but got an error:
        <errors.aggregate | len:1, cap:1>: [
            <*errors.StatusError | 0xc000525180>{
                ErrStatus: {
                    TypeMeta: {Kind: "", APIVersion: ""},
                    ListMeta: {
                        SelfLink: "",
... skipping 122 lines ...
STEP: Dumping logs from the "kcp-upgrade-ic6cdh" workload cluster
STEP: Dumping workload cluster kcp-upgrade-8w9wda/kcp-upgrade-ic6cdh logs
Nov 17 21:44:58.494: INFO: INFO: Collecting logs for node kcp-upgrade-ic6cdh-control-plane-xrtv4 in cluster kcp-upgrade-ic6cdh in namespace kcp-upgrade-8w9wda

Nov 17 21:47:09.612: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-ic6cdh-control-plane-xrtv4

Failed to get logs for machine kcp-upgrade-ic6cdh-control-plane-pzblm, cluster kcp-upgrade-8w9wda/kcp-upgrade-ic6cdh: dialing public load balancer at kcp-upgrade-ic6cdh-3bac04f8.westeurope.cloudapp.azure.com: dial tcp 20.76.112.100:22: connect: connection timed out
Nov 17 21:47:11.173: INFO: INFO: Collecting logs for node kcp-upgrade-ic6cdh-md-0-6hl2b in cluster kcp-upgrade-ic6cdh in namespace kcp-upgrade-8w9wda

Nov 17 21:49:20.687: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-ic6cdh-md-0-6hl2b

Failed to get logs for machine kcp-upgrade-ic6cdh-md-0-5f76fdb7f6-x57rp, cluster kcp-upgrade-8w9wda/kcp-upgrade-ic6cdh: dialing public load balancer at kcp-upgrade-ic6cdh-3bac04f8.westeurope.cloudapp.azure.com: dial tcp 20.76.112.100:22: connect: connection timed out
Nov 17 21:49:21.996: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster kcp-upgrade-ic6cdh in namespace kcp-upgrade-8w9wda

Nov 17 21:55:53.899: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-ic6cdh-md-win-lcdf8

Failed to get logs for machine kcp-upgrade-ic6cdh-md-win-59ffb8fccf-27zjk, cluster kcp-upgrade-8w9wda/kcp-upgrade-ic6cdh: dialing public load balancer at kcp-upgrade-ic6cdh-3bac04f8.westeurope.cloudapp.azure.com: dial tcp 20.76.112.100:22: connect: connection timed out
Nov 17 21:55:55.065: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster kcp-upgrade-ic6cdh in namespace kcp-upgrade-8w9wda

Nov 17 22:02:27.115: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-ic6cdh-md-win-q7tl4

Failed to get logs for machine kcp-upgrade-ic6cdh-md-win-59ffb8fccf-5gbx7, cluster kcp-upgrade-8w9wda/kcp-upgrade-ic6cdh: dialing public load balancer at kcp-upgrade-ic6cdh-3bac04f8.westeurope.cloudapp.azure.com: dial tcp 20.76.112.100:22: connect: connection timed out
STEP: Dumping workload cluster kcp-upgrade-8w9wda/kcp-upgrade-ic6cdh kube-system pod logs
STEP: Fetching kube-system pod logs took 1.076909457s
STEP: Dumping workload cluster kcp-upgrade-8w9wda/kcp-upgrade-ic6cdh Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-lhxmf, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-dx5h5, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-7vhwn, container kube-proxy
... skipping 8 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-windows-gf2l5, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-node-windows-dkt6d, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-node-windows-gf2l5, container calico-node-felix
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-ic6cdh-control-plane-xrtv4, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-windows-dkt6d, container calico-node-felix
STEP: Creating log watcher for controller kube-system/calico-node-x6zj8, container calico-node
STEP: Got error while iterating over activity logs for resource group capz-e2e-0mhr2k: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000353782s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-8w9wda" namespace
STEP: Deleting cluster kcp-upgrade-8w9wda/kcp-upgrade-ic6cdh
STEP: Deleting cluster kcp-upgrade-ic6cdh
INFO: Waiting for the Cluster kcp-upgrade-8w9wda/kcp-upgrade-ic6cdh to be deleted
STEP: Waiting for cluster kcp-upgrade-ic6cdh to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-b8tc5, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-8mmxd, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-lhxmf, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-mg8kd, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-47fz5, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-dkt6d, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-gf2l5, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-7vhwn, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-gf2l5, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-dkt6d, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-ic6cdh-control-plane-xrtv4, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-x6zj8, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-ic6cdh-control-plane-xrtv4, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-ic6cdh-control-plane-xrtv4, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-ic6cdh-control-plane-xrtv4, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-dx5h5, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-2xnj8, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-8w9wda
STEP: Redacting sensitive information from logs


• [SLOW TEST:2265.500 seconds]
... skipping 91 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-xm2zs, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-fom03f-control-plane-mxmps, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-z7lqv, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-bksxn, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-fom03f-control-plane-swtps, container kube-scheduler
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-fom03f-control-plane-mxmps, container etcd
STEP: Got error while iterating over activity logs for resource group capz-e2e-8aya9n: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000410387s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-anymmr" namespace
STEP: Deleting cluster kcp-upgrade-anymmr/kcp-upgrade-fom03f
STEP: Deleting cluster kcp-upgrade-fom03f
INFO: Waiting for the Cluster kcp-upgrade-anymmr/kcp-upgrade-fom03f to be deleted
STEP: Waiting for cluster kcp-upgrade-fom03f to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-fom03f-control-plane-dtqqq, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-jh82h, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-fom03f-control-plane-dtqqq, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-fom03f-control-plane-dtqqq, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-fom03f-control-plane-mxmps, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-fom03f-control-plane-dtqqq, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-xm2zs, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-z7lqv, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-jj2f7, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-fom03f-control-plane-mxmps, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-fom03f-control-plane-mxmps, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-jfzwm, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-fom03f-control-plane-mxmps, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-anymmr
STEP: Redacting sensitive information from logs


• [SLOW TEST:2089.760 seconds]
... skipping 66 lines ...
Nov 17 22:00:50.416: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-uydu7l-md-0-n2rr0d-tkh99

Nov 17 22:00:50.852: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster md-rollout-uydu7l in namespace md-rollout-wivlqj

Nov 17 22:02:17.127: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-uydu7l-md-win-jvn95

Failed to get logs for machine md-rollout-uydu7l-md-win-5b9d49c564-pt9hl, cluster md-rollout-wivlqj/md-rollout-uydu7l: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov 17 22:02:17.940: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster md-rollout-uydu7l in namespace md-rollout-wivlqj

Nov 17 22:03:42.964: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-uydu7l-md-win-txzpp

Failed to get logs for machine md-rollout-uydu7l-md-win-5b9d49c564-wk2t2, cluster md-rollout-wivlqj/md-rollout-uydu7l: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov 17 22:03:43.438: INFO: INFO: Collecting logs for node 10.1.0.8 in cluster md-rollout-uydu7l in namespace md-rollout-wivlqj

Nov 17 22:04:48.763: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-uydu7l-md-win-fiq23r-cf7sl

Failed to get logs for machine md-rollout-uydu7l-md-win-76cfc7947d-dr44h, cluster md-rollout-wivlqj/md-rollout-uydu7l: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster md-rollout-wivlqj/md-rollout-uydu7l kube-system pod logs
STEP: Fetching kube-system pod logs took 1.075248105s
STEP: Dumping workload cluster md-rollout-wivlqj/md-rollout-uydu7l Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-df7zq, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/calico-node-vf2mb, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-windows-85d4d, container calico-node-felix
... skipping 17 lines ...
STEP: Fetching activity logs took 1.288532281s
STEP: Dumping all the Cluster API resources in the "md-rollout-wivlqj" namespace
STEP: Deleting cluster md-rollout-wivlqj/md-rollout-uydu7l
STEP: Deleting cluster md-rollout-uydu7l
INFO: Waiting for the Cluster md-rollout-wivlqj/md-rollout-uydu7l to be deleted
STEP: Waiting for cluster md-rollout-uydu7l to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-nx42q, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-zp8pf, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-dktkn, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-md-rollout-uydu7l-control-plane-qj5xj, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-md-rollout-uydu7l-control-plane-qj5xj, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-2lq55, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-5mxm6, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-qh2bx, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-zf2wt, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-79qsz, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-2tcd8, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-df7zq, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-md-rollout-uydu7l-control-plane-qj5xj, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-85d4d, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-5mxm6, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-md-rollout-uydu7l-control-plane-qj5xj, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-dktkn, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-85d4d, container calico-node-startup: http2: client connection lost
STEP: Deleting namespace used for hosting the "md-rollout" test spec
INFO: Deleting namespace md-rollout-wivlqj
STEP: Redacting sensitive information from logs


• [SLOW TEST:1976.120 seconds]
... skipping 7 lines ...
Running the Cluster API E2E tests Running the self-hosted spec 
  Should pivot the bootstrap cluster to a self-hosted cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:107

STEP: Creating namespace "self-hosted" for hosting the cluster
Nov 17 22:08:59.958: INFO: starting to create namespace for hosting the "self-hosted" test spec
2021/11/17 22:08:59 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-snff1f" using the "management" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-snff1f --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 1 --worker-machine-count 1 --flavor management
... skipping 73 lines ...
STEP: Fetching activity logs took 549.651182ms
STEP: Dumping all the Cluster API resources in the "self-hosted" namespace
STEP: Deleting all clusters in the self-hosted namespace
STEP: Deleting cluster self-hosted-snff1f
INFO: Waiting for the Cluster self-hosted/self-hosted-snff1f to be deleted
STEP: Waiting for cluster self-hosted-snff1f to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-self-hosted-snff1f-control-plane-xmh8t, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-6kncg, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-self-hosted-snff1f-control-plane-xmh8t, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-rfmpg, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-wxvdx, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-self-hosted-snff1f-control-plane-xmh8t, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-c5ch4, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-rwhnt, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-self-hosted-snff1f-control-plane-xmh8t, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "self-hosted" test spec
INFO: Deleting namespace self-hosted
STEP: Checking if any resources are left over in Azure for spec "self-hosted"
STEP: Redacting sensitive information from logs
STEP: Redacting sensitive information from logs

... skipping 60 lines ...
STEP: Fetching activity logs took 579.062826ms
STEP: Dumping all the Cluster API resources in the "kcp-adoption-2xpkww" namespace
STEP: Deleting cluster kcp-adoption-2xpkww/kcp-adoption-oyv8a9
STEP: Deleting cluster kcp-adoption-oyv8a9
INFO: Waiting for the Cluster kcp-adoption-2xpkww/kcp-adoption-oyv8a9 to be deleted
STEP: Waiting for cluster kcp-adoption-oyv8a9 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-adoption-oyv8a9-control-plane-0, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-6fgrb, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-pnwfh, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-nt85h, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-adoption-oyv8a9-control-plane-0, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-adoption-oyv8a9-control-plane-0, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-58sb9, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-6khd6, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-adoption-oyv8a9-control-plane-0, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-adoption" test spec
INFO: Deleting namespace kcp-adoption-2xpkww
STEP: Redacting sensitive information from logs


• [SLOW TEST:582.508 seconds]
... skipping 68 lines ...
STEP: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-f4olt5-control-plane-6df2r, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-sg9gq, container kube-proxy
STEP: Creating log watcher for controller kube-system/etcd-mhc-remediation-f4olt5-control-plane-6df2r, container etcd
STEP: Creating log watcher for controller kube-system/kube-apiserver-mhc-remediation-f4olt5-control-plane-6df2r, container kube-apiserver
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-8tqht, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-rk8cj, container calico-node
STEP: Error starting logs stream for pod kube-system/calico-node-5rmb8, container calico-node: container "calico-node" in pod "calico-node-5rmb8" is waiting to start: PodInitializing
STEP: Fetching activity logs took 848.502826ms
STEP: Dumping all the Cluster API resources in the "mhc-remediation-06n8an" namespace
STEP: Deleting cluster mhc-remediation-06n8an/mhc-remediation-f4olt5
STEP: Deleting cluster mhc-remediation-f4olt5
INFO: Waiting for the Cluster mhc-remediation-06n8an/mhc-remediation-f4olt5 to be deleted
STEP: Waiting for cluster mhc-remediation-f4olt5 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-k2sp8, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
INFO: Deleting namespace mhc-remediation-06n8an
STEP: Redacting sensitive information from logs


• [SLOW TEST:881.382 seconds]
... skipping 96 lines ...
STEP: Fetching activity logs took 1.036241312s
STEP: Dumping all the Cluster API resources in the "mhc-remediation-nelff6" namespace
STEP: Deleting cluster mhc-remediation-nelff6/mhc-remediation-mtmav9
STEP: Deleting cluster mhc-remediation-mtmav9
INFO: Waiting for the Cluster mhc-remediation-nelff6/mhc-remediation-mtmav9 to be deleted
STEP: Waiting for cluster mhc-remediation-mtmav9 to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-qn55z, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-fj7gq, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-289d5, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-6rg2f, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-297ds, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-mtmav9-control-plane-wwmgt, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-mtmav9-control-plane-k5mz2, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-mtmav9-control-plane-wnsp2, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-mtmav9-control-plane-wwmgt, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-mtmav9-control-plane-k5mz2, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-mtmav9-control-plane-wnsp2, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-mtmav9-control-plane-wwmgt, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-mj2gr, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-mtmav9-control-plane-wnsp2, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-mtmav9-control-plane-k5mz2, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-dlsqf, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-mtmav9-control-plane-k5mz2, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-mtmav9-control-plane-wnsp2, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-mtmav9-control-plane-wwmgt, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
INFO: Deleting namespace mhc-remediation-nelff6
STEP: Redacting sensitive information from logs


• [SLOW TEST:1283.712 seconds]
... skipping 61 lines ...
Nov 17 22:47:09.697: INFO: INFO: Collecting boot logs for AzureMachine md-scale-vjhl59-md-0-bd78r

Nov 17 22:47:10.496: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster md-scale-vjhl59 in namespace md-scale-yw47kx

Nov 17 22:48:26.770: INFO: INFO: Collecting boot logs for AzureMachine md-scale-vjhl59-md-win-twkfm

Failed to get logs for machine md-scale-vjhl59-md-win-c98f9bc65-rhw9b, cluster md-scale-yw47kx/md-scale-vjhl59: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov 17 22:48:27.232: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster md-scale-vjhl59 in namespace md-scale-yw47kx

Nov 17 22:49:01.861: INFO: INFO: Collecting boot logs for AzureMachine md-scale-vjhl59-md-win-tk48d

Failed to get logs for machine md-scale-vjhl59-md-win-c98f9bc65-t4ll7, cluster md-scale-yw47kx/md-scale-vjhl59: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster md-scale-yw47kx/md-scale-vjhl59 kube-system pod logs
STEP: Fetching kube-system pod logs took 1.134731706s
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-ttk94, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/calico-node-windows-7pxdj, container calico-node-felix
STEP: Creating log watcher for controller kube-system/kube-controller-manager-md-scale-vjhl59-control-plane-49stn, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/etcd-md-scale-vjhl59-control-plane-49stn, container etcd
... skipping 14 lines ...
STEP: Fetching activity logs took 692.70896ms
STEP: Dumping all the Cluster API resources in the "md-scale-yw47kx" namespace
STEP: Deleting cluster md-scale-yw47kx/md-scale-vjhl59
STEP: Deleting cluster md-scale-vjhl59
INFO: Waiting for the Cluster md-scale-yw47kx/md-scale-vjhl59 to be deleted
STEP: Waiting for cluster md-scale-vjhl59 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-md-scale-vjhl59-control-plane-49stn, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-md-scale-vjhl59-control-plane-49stn, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-4m82r, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-ttk94, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-dxcrm, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-nbppn, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-md-scale-vjhl59-control-plane-49stn, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-7pxdj, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-md-scale-vjhl59-control-plane-49stn, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-q4bpd, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-4n7v5, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-7pxdj, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-ksrbl, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-5br2t, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-dxcrm, container calico-node-startup: http2: client connection lost
STEP: Deleting namespace used for hosting the "md-scale" test spec
INFO: Deleting namespace md-scale-yw47kx
STEP: Redacting sensitive information from logs


• [SLOW TEST:1876.401 seconds]
... skipping 58 lines ...
STEP: Dumping logs from the "node-drain-2mcrcm" workload cluster
STEP: Dumping workload cluster node-drain-321co4/node-drain-2mcrcm logs
Nov 17 23:04:28.264: INFO: INFO: Collecting logs for node node-drain-2mcrcm-control-plane-crxjd in cluster node-drain-2mcrcm in namespace node-drain-321co4

Nov 17 23:06:39.403: INFO: INFO: Collecting boot logs for AzureMachine node-drain-2mcrcm-control-plane-crxjd

Failed to get logs for machine node-drain-2mcrcm-control-plane-w96wt, cluster node-drain-321co4/node-drain-2mcrcm: dialing public load balancer at node-drain-2mcrcm-c0aa8dff.westeurope.cloudapp.azure.com: dial tcp 51.124.77.115:22: connect: connection timed out
STEP: Dumping workload cluster node-drain-321co4/node-drain-2mcrcm kube-system pod logs
STEP: Fetching kube-system pod logs took 1.008974772s
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-b7t87, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-proxy-xst7l, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-28pv8, container coredns
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-sbq5m, container coredns
... skipping 6 lines ...
STEP: Fetching activity logs took 1.018440615s
STEP: Dumping all the Cluster API resources in the "node-drain-321co4" namespace
STEP: Deleting cluster node-drain-321co4/node-drain-2mcrcm
STEP: Deleting cluster node-drain-2mcrcm
INFO: Waiting for the Cluster node-drain-321co4/node-drain-2mcrcm to be deleted
STEP: Waiting for cluster node-drain-2mcrcm to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-xst7l, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-sbq5m, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-node-drain-2mcrcm-control-plane-crxjd, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-node-drain-2mcrcm-control-plane-crxjd, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-node-drain-2mcrcm-control-plane-crxjd, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-b7t87, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-98mfb, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-28pv8, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-node-drain-2mcrcm-control-plane-crxjd, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "node-drain" test spec
INFO: Deleting namespace node-drain-321co4
STEP: Redacting sensitive information from logs


• [SLOW TEST:1769.500 seconds]
... skipping 60 lines ...
Nov 17 22:58:44.670: INFO: INFO: Collecting boot logs for AzureMachine machine-pool-3izvsb-control-plane-vbcqq

Nov 17 22:58:46.105: INFO: INFO: Collecting logs for node win-p-win000002 in cluster machine-pool-3izvsb in namespace machine-pool-uk0rv8

Nov 17 22:59:08.123: INFO: INFO: Collecting boot logs for VMSS instance 2 of scale set machine-pool-3izvsb-mp-0

Failed to get logs for machine pool machine-pool-3izvsb-mp-0, cluster machine-pool-uk0rv8/machine-pool-3izvsb: [running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1]
Nov 17 22:59:08.716: INFO: INFO: Collecting logs for node win-p-win000002 in cluster machine-pool-3izvsb in namespace machine-pool-uk0rv8

Nov 17 22:59:54.989: INFO: INFO: Collecting boot logs for VMSS instance 2 of scale set win-p-win

Failed to get logs for machine pool machine-pool-3izvsb-mp-win, cluster machine-pool-uk0rv8/machine-pool-3izvsb: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster machine-pool-uk0rv8/machine-pool-3izvsb kube-system pod logs
STEP: Fetching kube-system pod logs took 1.009103501s
STEP: Dumping workload cluster machine-pool-uk0rv8/machine-pool-3izvsb Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-8vxmx, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/calico-node-72tr7, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-windows-j7wg6, container calico-node-startup
... skipping 11 lines ...
STEP: Fetching activity logs took 554.594838ms
STEP: Dumping all the Cluster API resources in the "machine-pool-uk0rv8" namespace
STEP: Deleting cluster machine-pool-uk0rv8/machine-pool-3izvsb
STEP: Deleting cluster machine-pool-3izvsb
INFO: Waiting for the Cluster machine-pool-uk0rv8/machine-pool-3izvsb to be deleted
STEP: Waiting for cluster machine-pool-3izvsb to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-72tr7, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-tx6k4, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-j7wg6, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-hp2dv, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-machine-pool-3izvsb-control-plane-vbcqq, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-machine-pool-3izvsb-control-plane-vbcqq, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-machine-pool-3izvsb-control-plane-vbcqq, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5vg9m, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-machine-pool-3izvsb-control-plane-vbcqq, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-8vxmx, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-j7wg6, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-qv9jz, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "machine-pool" test spec
INFO: Deleting namespace machine-pool-uk0rv8
STEP: Redacting sensitive information from logs


• [SLOW TEST:2314.876 seconds]
... skipping 7 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Running the Cluster API E2E tests Running the KCP upgrade spec in a HA cluster [It] Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.1/framework/controlplane_helpers.go:322

Ran 12 of 24 Specs in 6263.015 seconds
FAIL! -- 11 Passed | 1 Failed | 0 Pending | 12 Skipped


Ginkgo ran 1 suite in 1h45m35.009802325s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc
make[1]: *** [Makefile:176: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:184: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...