Result: FAILURE
Tests: 1 failed / 8 succeeded
Started: 2021-11-17 18:33
Elapsed: 1h59m
Revision: release-0.5

Test Failures


capz-e2e Workload cluster creation Creating an AKS cluster with a single control plane node and 1 node (30m22s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\san\sAKS\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454
Timed out after 1200.000s.
System machine pools not ready
Expected
    <bool>: false
to equal
    <bool>: true
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/aks.go:216
				
stdout/stderr from junit.e2e_suite.1.xml
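
For readers unfamiliar with the failure format: this is the standard output of a Gomega Eventually assertion that polls a readiness condition until a deadline expires. A minimal Go sketch of the pattern, assuming a hypothetical machinePoolsReady helper (this is not the actual aks.go code, only an illustration of how such a timeout message is produced):

package e2e_test

import (
	"time"

	. "github.com/onsi/gomega"
)

// waitForSystemMachinePools polls the (hypothetical) readiness check every 10s
// and gives up after 20 minutes (1200s), matching the timeout seen above.
func waitForSystemMachinePools(machinePoolsReady func() bool) {
	Eventually(machinePoolsReady, 20*time.Minute, 10*time.Second).
		Should(Equal(true), "System machine pools not ready")
	// On timeout Gomega prints:
	//   Timed out after 1200.000s.
	//   System machine pools not ready
	//   Expected <bool>: false to equal <bool>: true
}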



8 Passed Tests

13 Skipped Tests

Error lines from build-log.txt

... skipping 431 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:269

INFO: "With ipv6 worker node" started at Wed, 17 Nov 2021 18:40:13 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-9fv5z0" for hosting the cluster
Nov 17 18:40:13.717: INFO: starting to create namespace for hosting the "capz-e2e-9fv5z0" test spec
2021/11/17 18:40:13 failed trying to get namespace (capz-e2e-9fv5z0):namespaces "capz-e2e-9fv5z0" not found
INFO: Creating namespace capz-e2e-9fv5z0
INFO: Creating event watcher for namespace "capz-e2e-9fv5z0"
Nov 17 18:40:13.785: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-9fv5z0-ipv6
INFO: Creating the workload cluster with name "capz-e2e-9fv5z0-ipv6" using the "ipv6" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
STEP: Fetching activity logs took 525.436363ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-9fv5z0" namespace
STEP: Deleting all clusters in the capz-e2e-9fv5z0 namespace
STEP: Deleting cluster capz-e2e-9fv5z0-ipv6
INFO: Waiting for the Cluster capz-e2e-9fv5z0/capz-e2e-9fv5z0-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-9fv5z0-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-cn72p, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-sqjqh, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-xlsw9, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-rq64z, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-9fv5z0-ipv6-control-plane-fwcj6, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-9fv5z0-ipv6-control-plane-plplt, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-d6xqr, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-9fv5z0-ipv6-control-plane-xsv7n, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-9fv5z0-ipv6-control-plane-fwcj6, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-9fv5z0-ipv6-control-plane-xsv7n, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-9fv5z0-ipv6-control-plane-xsv7n, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-4t76j, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-9fv5z0-ipv6-control-plane-plplt, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-q28zs, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-9fv5z0-ipv6-control-plane-plplt, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-9fv5z0-ipv6-control-plane-fwcj6, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-p2mkx, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-9fv5z0-ipv6-control-plane-fwcj6, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-6tdhf, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-9fv5z0-ipv6-control-plane-xsv7n, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-xpl7x, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-9fv5z0-ipv6-control-plane-plplt, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-csjjd, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-9fv5z0
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 17m13s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a single control plane node and an AzureMachinePool with 2 nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:315

INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" started at Wed, 17 Nov 2021 18:57:26 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-g5uq8b" for hosting the cluster
Nov 17 18:57:26.352: INFO: starting to create namespace for hosting the "capz-e2e-g5uq8b" test spec
2021/11/17 18:57:26 failed trying to get namespace (capz-e2e-g5uq8b):namespaces "capz-e2e-g5uq8b" not found
INFO: Creating namespace capz-e2e-g5uq8b
INFO: Creating event watcher for namespace "capz-e2e-g5uq8b"
Nov 17 18:57:26.382: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-g5uq8b-vmss
INFO: Creating the workload cluster with name "capz-e2e-g5uq8b-vmss" using the "machine-pool" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 106 lines ...
STEP: Fetching activity logs took 752.406356ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-g5uq8b" namespace
STEP: Deleting all clusters in the capz-e2e-g5uq8b namespace
STEP: Deleting cluster capz-e2e-g5uq8b-vmss
INFO: Waiting for the Cluster capz-e2e-g5uq8b/capz-e2e-g5uq8b-vmss to be deleted
STEP: Waiting for cluster capz-e2e-g5uq8b-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-26md5, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-krgh2, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-fvl4t, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-s8p4c, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-x9kcp, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-g5uq8b-vmss-control-plane-vfkdr, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-g5uq8b-vmss-control-plane-vfkdr, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-ddglj, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-g5uq8b-vmss-control-plane-vfkdr, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-l5rpt, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-8tq9w, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-g5uq8b-vmss-control-plane-vfkdr, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-2ktn6, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-g5uq8b
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" ran for 18m43s on Ginkgo node 2 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:203

INFO: "With 3 control-plane nodes and 2 worker nodes" started at Wed, 17 Nov 2021 18:40:13 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-9c5ie8" for hosting the cluster
Nov 17 18:40:13.716: INFO: starting to create namespace for hosting the "capz-e2e-9c5ie8" test spec
2021/11/17 18:40:13 failed trying to get namespace (capz-e2e-9c5ie8):namespaces "capz-e2e-9c5ie8" not found
INFO: Creating namespace capz-e2e-9c5ie8
INFO: Creating event watcher for namespace "capz-e2e-9c5ie8"
Nov 17 18:40:13.804: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-9c5ie8-ha
INFO: Creating the workload cluster with name "capz-e2e-9c5ie8-ha" using the "(default)" template (Kubernetes v1.22.1, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 67 lines ...
Nov 17 18:50:33.193: INFO: starting to delete external LB service webqx3wwq-elb
Nov 17 18:50:33.270: INFO: starting to delete deployment webqx3wwq
Nov 17 18:50:33.317: INFO: starting to delete job curl-to-elb-job7411m3o88zc
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Nov 17 18:50:33.399: INFO: starting to create dev deployment namespace
2021/11/17 18:50:33 failed trying to get namespace (development):namespaces "development" not found
2021/11/17 18:50:33 namespace development does not exist, creating...
STEP: Creating production namespace
Nov 17 18:50:33.498: INFO: starting to create prod deployment namespace
2021/11/17 18:50:33 failed trying to get namespace (production):namespaces "production" not found
2021/11/17 18:50:33 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
Nov 17 18:50:33.586: INFO: starting to create frontend-prod deployments
Nov 17 18:50:33.630: INFO: starting to create frontend-dev deployments
Nov 17 18:50:33.698: INFO: starting to create backend deployments
Nov 17 18:50:33.811: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Nov 17 18:50:57.115: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.217.4 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 17 18:53:07.678: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
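
The deny-ingress step above corresponds to creating a NetworkPolicy that selects the backend pods and declares an Ingress policy type with no allow rules. A minimal client-go sketch of an equivalent policy (illustrative only; the helper name is hypothetical and the labels and namespace are taken from the log lines above):

package e2e_test

import (
	"context"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// applyBackendDenyIngress creates development/backend-deny-ingress: it selects
// app=webapp,role=backend pods and lists Ingress as a policy type with no
// allow rules, which denies all ingress traffic to those pods.
func applyBackendDenyIngress(ctx context.Context, cs kubernetes.Interface) error {
	policy := &networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "backend-deny-ingress", Namespace: "development"},
		Spec: networkingv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "webapp", "role": "backend"},
			},
			PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
		},
	}
	_, err := cs.NetworkingV1().NetworkPolicies("development").Create(ctx, policy, metav1.CreateOptions{})
	return err
}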
STEP: Applying a network policy to deny egress access in development namespace
Nov 17 18:53:07.915: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.217.4 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.217.4 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 17 18:57:29.718: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Nov 17 18:57:29.920: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.75.195 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 17 18:59:40.788: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
Nov 17 18:59:40.968: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.217.2 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.75.195 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 17 19:04:02.937: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Nov 17 19:04:03.165: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.217.4 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 17 19:06:14.103: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
Nov 17 19:06:14.324: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.217.4 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: Dumping logs from the "capz-e2e-9c5ie8-ha" workload cluster
STEP: Dumping workload cluster capz-e2e-9c5ie8/capz-e2e-9c5ie8-ha logs
Nov 17 19:08:25.476: INFO: INFO: Collecting logs for node capz-e2e-9c5ie8-ha-control-plane-z4lz5 in cluster capz-e2e-9c5ie8-ha in namespace capz-e2e-9c5ie8

Nov 17 19:08:35.738: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-9c5ie8-ha-control-plane-z4lz5
... skipping 39 lines ...
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-krgjc, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-9c5ie8-ha-control-plane-z4lz5, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-9c5ie8-ha-control-plane-z4lz5, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-9c5ie8-ha-control-plane-9q4fk, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-6tsf8, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-8wh58, container kube-proxy
STEP: Got error while iterating over activity logs for resource group capz-e2e-9c5ie8-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000860692s
STEP: Dumping all the Cluster API resources in the "capz-e2e-9c5ie8" namespace
STEP: Deleting all clusters in the capz-e2e-9c5ie8 namespace
STEP: Deleting cluster capz-e2e-9c5ie8-ha
INFO: Waiting for the Cluster capz-e2e-9c5ie8/capz-e2e-9c5ie8-ha to be deleted
STEP: Waiting for cluster capz-e2e-9c5ie8-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-9c5ie8-ha-control-plane-9q4fk, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-8wh58, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-8f6g2, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-gj9bv, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-9c5ie8-ha-control-plane-z4lz5, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-9c5ie8-ha-control-plane-z4lz5, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-9c5ie8-ha-control-plane-9q4fk, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-6tsf8, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-b5w8j, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-4v6kq, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-9c5ie8-ha-control-plane-z4lz5, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-9c5ie8-ha-control-plane-z4lz5, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-9c5ie8-ha-control-plane-9q4fk, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-9c5ie8-ha-control-plane-9q4fk, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-9c5ie8
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 worker nodes" ran for 41m39s on Ginkgo node 3 of 3

... skipping 8 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:141

INFO: "Creates a public management cluster in the same vnet" started at Wed, 17 Nov 2021 18:40:13 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-rrcg49" for hosting the cluster
Nov 17 18:40:13.698: INFO: starting to create namespace for hosting the "capz-e2e-rrcg49" test spec
2021/11/17 18:40:13 failed trying to get namespace (capz-e2e-rrcg49):namespaces "capz-e2e-rrcg49" not found
INFO: Creating namespace capz-e2e-rrcg49
INFO: Creating event watcher for namespace "capz-e2e-rrcg49"
Nov 17 18:40:13.757: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-rrcg49-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-jtbcl, container calico-node
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-rrcg49-public-custom-vnet-control-plane-2vcdc, container kube-scheduler
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-rrcg49-public-custom-vnet-control-plane-2vcdc, container etcd
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-fmqlv, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/calico-node-pmbxb, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-2bpc8, container coredns
STEP: Got error while iterating over activity logs for resource group capz-e2e-rrcg49-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000616506s
STEP: Dumping all the Cluster API resources in the "capz-e2e-rrcg49" namespace
STEP: Deleting all clusters in the capz-e2e-rrcg49 namespace
STEP: Deleting cluster capz-e2e-rrcg49-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-rrcg49/capz-e2e-rrcg49-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-rrcg49-public-custom-vnet to be deleted
W1117 19:27:51.053457   24117 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I1117 19:28:22.127147   24117 trace.go:205] Trace[879392369]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (17-Nov-2021 19:27:52.126) (total time: 30000ms):
Trace[879392369]: [30.000578841s] [30.000578841s] END
E1117 19:28:22.127193   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp 20.75.125.98:6443: i/o timeout
I1117 19:28:54.406845   24117 trace.go:205] Trace[1896607582]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (17-Nov-2021 19:28:24.405) (total time: 30001ms):
Trace[1896607582]: [30.001216881s] [30.001216881s] END
E1117 19:28:54.406894   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp 20.75.125.98:6443: i/o timeout
I1117 19:29:28.658385   24117 trace.go:205] Trace[1072826204]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (17-Nov-2021 19:28:58.657) (total time: 30000ms):
Trace[1072826204]: [30.000856159s] [30.000856159s] END
E1117 19:29:28.658445   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp 20.75.125.98:6443: i/o timeout
I1117 19:30:07.353703   24117 trace.go:205] Trace[1615727579]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (17-Nov-2021 19:29:37.352) (total time: 30001ms):
Trace[1615727579]: [30.001520786s] [30.001520786s] END
E1117 19:30:07.353855   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp 20.75.125.98:6443: i/o timeout
I1117 19:30:55.095929   24117 trace.go:205] Trace[302473836]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (17-Nov-2021 19:30:25.094) (total time: 30000ms):
Trace[302473836]: [30.000981473s] [30.000981473s] END
E1117 19:30:55.095988   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp 20.75.125.98:6443: i/o timeout
E1117 19:31:39.600515   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-rrcg49
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Nov 17 19:31:58.237: INFO: deleting an existing virtual network "custom-vnet"
Nov 17 19:32:08.859: INFO: deleting an existing route table "node-routetable"
Nov 17 19:32:19.188: INFO: deleting an existing network security group "node-nsg"
E1117 19:32:23.802937   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 17 19:32:29.853: INFO: deleting an existing network security group "control-plane-nsg"
Nov 17 19:32:40.213: INFO: verifying the existing resource group "capz-e2e-rrcg49-public-custom-vnet" is empty
Nov 17 19:32:40.267: INFO: deleting the existing resource group "capz-e2e-rrcg49-public-custom-vnet"
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1117 19:33:00.776467   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 19:33:34.865602   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 19:34:24.178737   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 19:35:05.254072   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 55m31s on Ginkgo node 1 of 3


• [SLOW TEST:3331.145 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:377

INFO: "with a single control plane node and 1 node" started at Wed, 17 Nov 2021 19:16:09 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-8lh5g4" for hosting the cluster
Nov 17 19:16:09.629: INFO: starting to create namespace for hosting the "capz-e2e-8lh5g4" test spec
2021/11/17 19:16:09 failed trying to get namespace (capz-e2e-8lh5g4):namespaces "capz-e2e-8lh5g4" not found
INFO: Creating namespace capz-e2e-8lh5g4
INFO: Creating event watcher for namespace "capz-e2e-8lh5g4"
Nov 17 19:16:09.667: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-8lh5g4-gpu
INFO: Creating the workload cluster with name "capz-e2e-8lh5g4-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 58 lines ...
STEP: Fetching activity logs took 508.889359ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-8lh5g4" namespace
STEP: Deleting all clusters in the capz-e2e-8lh5g4 namespace
STEP: Deleting cluster capz-e2e-8lh5g4-gpu
INFO: Waiting for the Cluster capz-e2e-8lh5g4/capz-e2e-8lh5g4-gpu to be deleted
STEP: Waiting for cluster capz-e2e-8lh5g4-gpu to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-clm7s, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-kkcv8, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-8lh5g4-gpu-control-plane-dm7bn, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-8lh5g4-gpu-control-plane-dm7bn, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-n2bwf, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-lr5nq, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-dzlgb, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-8lh5g4-gpu-control-plane-dm7bn, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-8lh5g4-gpu-control-plane-dm7bn, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-8lh5g4
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and 1 node" ran for 20m5s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a 1 control plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:419

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Wed, 17 Nov 2021 19:21:52 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-mgxju2" for hosting the cluster
Nov 17 19:21:52.537: INFO: starting to create namespace for hosting the "capz-e2e-mgxju2" test spec
2021/11/17 19:21:52 failed trying to get namespace (capz-e2e-mgxju2):namespaces "capz-e2e-mgxju2" not found
INFO: Creating namespace capz-e2e-mgxju2
INFO: Creating event watcher for namespace "capz-e2e-mgxju2"
Nov 17 19:21:52.571: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-mgxju2-oot
INFO: Creating the workload cluster with name "capz-e2e-mgxju2-oot" using the "external-cloud-provider" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 53 lines ...
STEP: waiting for job default/curl-to-elb-jobwbeshrxilp3 to be complete
Nov 17 19:34:27.089: INFO: waiting for job default/curl-to-elb-jobwbeshrxilp3 to be complete
Nov 17 19:34:37.167: INFO: job default/curl-to-elb-jobwbeshrxilp3 is complete, took 10.078343748s
STEP: connecting directly to the external LB service
Nov 17 19:34:37.168: INFO: starting attempts to connect directly to the external LB service
2021/11/17 19:34:37 [DEBUG] GET http://40.70.225.92
2021/11/17 19:35:07 [ERR] GET http://40.70.225.92 request failed: Get "http://40.70.225.92": dial tcp 40.70.225.92:80: i/o timeout
2021/11/17 19:35:07 [DEBUG] GET http://40.70.225.92: retrying in 1s (4 left)
Nov 17 19:35:08.238: INFO: successfully connected to the external LB service
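
The [DEBUG]/[ERR] lines above resemble the default log format of hashicorp/go-retryablehttp, i.e. an HTTP GET retried with backoff until the load balancer answers. A minimal sketch of that client-side retry pattern (an assumption about the tooling used by the test; the IP is copied from the log):

package e2e_test

import (
	"fmt"

	retryablehttp "github.com/hashicorp/go-retryablehttp"
)

// connectToELB keeps retrying the external LB address until it responds or the
// retry budget is exhausted, emitting [DEBUG]/[ERR] lines like those above.
func connectToELB() error {
	c := retryablehttp.NewClient()
	c.RetryMax = 5 // first attempt plus up to 5 retries
	resp, err := c.Get("http://40.70.225.92")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	fmt.Println("connected to the external LB service:", resp.Status)
	return nil
}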
STEP: deleting the test resources
Nov 17 19:35:08.238: INFO: starting to delete external LB service webe9fe5o-elb
Nov 17 19:35:08.290: INFO: starting to delete deployment webe9fe5o
Nov 17 19:35:08.327: INFO: starting to delete job curl-to-elb-jobwbeshrxilp3
... skipping 34 lines ...
STEP: Fetching activity logs took 585.643342ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-mgxju2" namespace
STEP: Deleting all clusters in the capz-e2e-mgxju2 namespace
STEP: Deleting cluster capz-e2e-mgxju2-oot
INFO: Waiting for the Cluster capz-e2e-mgxju2/capz-e2e-mgxju2-oot to be deleted
STEP: Waiting for cluster capz-e2e-mgxju2-oot to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-mgxju2-oot-control-plane-rlpdw, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-6gkmf, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-m9pkm, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-mgxju2-oot-control-plane-rlpdw, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-flnrp, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-mgxju2-oot-control-plane-rlpdw, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-bgdhc, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-r47s2, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-r2hkv, container calico-kube-controllers: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-mgxju2
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 30m35s on Ginkgo node 3 of 3

... skipping 10 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454

INFO: "with a single control plane node and 1 node" started at Wed, 17 Nov 2021 19:35:44 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-whg5gs" for hosting the cluster
Nov 17 19:35:44.846: INFO: starting to create namespace for hosting the "capz-e2e-whg5gs" test spec
2021/11/17 19:35:44 failed trying to get namespace (capz-e2e-whg5gs):namespaces "capz-e2e-whg5gs" not found
INFO: Creating namespace capz-e2e-whg5gs
INFO: Creating event watcher for namespace "capz-e2e-whg5gs"
Nov 17 19:35:44.881: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-whg5gs-aks
INFO: Creating the workload cluster with name "capz-e2e-whg5gs-aks" using the "aks-multi-tenancy" template (Kubernetes v1.19.13, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 7 lines ...
machinepool.cluster.x-k8s.io/agentpool1 created
azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1117 19:36:03.032404   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 19:36:54.398730   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 19:37:42.735933   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 19:38:29.012282   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
Nov 17 19:39:18.000: INFO: Waiting for the first control plane machine managed by capz-e2e-whg5gs/capz-e2e-whg5gs-aks to be provisioned
STEP: Waiting for atleast one control plane node to exist
E1117 19:39:23.085558   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 19:40:18.623012   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 19:41:09.262377   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 19:41:55.197675   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 19:42:50.076934   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 19:43:42.222890   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 19:44:32.354038   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 19:45:23.884034   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 19:46:13.222798   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 19:46:51.651743   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 19:47:42.197633   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 19:48:15.271060   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 19:49:08.057993   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 19:49:56.719631   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 19:50:27.335177   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 19:51:07.886263   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 19:51:48.158731   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 19:52:36.761921   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 19:53:30.445852   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 19:54:27.680476   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 19:55:06.851362   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 19:55:43.115709   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 19:56:16.957727   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 19:56:47.672569   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 19:57:32.863808   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 19:58:10.360833   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 19:58:52.383119   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Dumping logs from the "capz-e2e-whg5gs-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-whg5gs/capz-e2e-whg5gs-aks logs
STEP: Dumping workload cluster capz-e2e-whg5gs/capz-e2e-whg5gs-aks kube-system pod logs
STEP: Fetching kube-system pod logs took 425.631013ms
STEP: Dumping workload cluster capz-e2e-whg5gs/capz-e2e-whg5gs-aks Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-9bdmz, container calico-node
... skipping 10 lines ...
STEP: Fetching activity logs took 1.375800434s
STEP: Dumping all the Cluster API resources in the "capz-e2e-whg5gs" namespace
STEP: Deleting all clusters in the capz-e2e-whg5gs namespace
STEP: Deleting cluster capz-e2e-whg5gs-aks
INFO: Waiting for the Cluster capz-e2e-whg5gs/capz-e2e-whg5gs-aks to be deleted
STEP: Waiting for cluster capz-e2e-whg5gs-aks to be deleted
E1117 19:59:40.676452   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 20:00:23.150288   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 20:01:12.946998   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 20:02:05.475460   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 20:02:46.401914   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 20:03:19.259731   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 20:04:18.429882   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-whg5gs
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1117 20:04:56.072091   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 20:05:49.476683   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "with a single control plane node and 1 node" ran for 30m23s on Ginkgo node 1 of 3


• Failure [1822.928 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 59 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Wed, 17 Nov 2021 19:36:14 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-onuqx2" for hosting the cluster
Nov 17 19:36:14.421: INFO: starting to create namespace for hosting the "capz-e2e-onuqx2" test spec
2021/11/17 19:36:14 failed trying to get namespace (capz-e2e-onuqx2):namespaces "capz-e2e-onuqx2" not found
INFO: Creating namespace capz-e2e-onuqx2
INFO: Creating event watcher for namespace "capz-e2e-onuqx2"
Nov 17 19:36:16.180: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-onuqx2-win-ha
INFO: Creating the workload cluster with name "capz-e2e-onuqx2-win-ha" using the "windows" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 151 lines ...
STEP: Fetching activity logs took 1.130242032s
STEP: Dumping all the Cluster API resources in the "capz-e2e-onuqx2" namespace
STEP: Deleting all clusters in the capz-e2e-onuqx2 namespace
STEP: Deleting cluster capz-e2e-onuqx2-win-ha
INFO: Waiting for the Cluster capz-e2e-onuqx2/capz-e2e-onuqx2-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-onuqx2-win-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-onuqx2-win-ha-control-plane-vdzvp, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-onuqx2-win-ha-control-plane-vxllr, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-onuqx2-win-ha-control-plane-vdzvp, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-5nk2w, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-k4sqm, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-onuqx2-win-ha-control-plane-vxllr, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-4pr24, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-nxs7k, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-onuqx2-win-ha-control-plane-vdzvp, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-q8x46, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-69j5n, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-onuqx2-win-ha-control-plane-vxllr, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-onuqx2-win-ha-control-plane-vxllr, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-onuqx2-win-ha-control-plane-blrv6, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5227v, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-t8zzf, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-onuqx2-win-ha-control-plane-blrv6, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-onuqx2-win-ha-control-plane-blrv6, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-onuqx2-win-ha-control-plane-blrv6, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-onuqx2-win-ha-control-plane-vdzvp, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-onuqx2
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 34m13s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a single control plane node and a Linux AzureMachinePool with 1 node and a Windows AzureMachinePool with 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543

INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Wed, 17 Nov 2021 19:52:27 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-mccng5" for hosting the cluster
Nov 17 19:52:27.433: INFO: starting to create namespace for hosting the "capz-e2e-mccng5" test spec
2021/11/17 19:52:27 failed trying to get namespace (capz-e2e-mccng5):namespaces "capz-e2e-mccng5" not found
INFO: Creating namespace capz-e2e-mccng5
INFO: Creating event watcher for namespace "capz-e2e-mccng5"
Nov 17 19:52:27.469: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-mccng5-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-mccng5-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 53 lines ...
STEP: waiting for job default/curl-to-elb-jobsottic65wfe to be complete
Nov 17 20:04:53.017: INFO: waiting for job default/curl-to-elb-jobsottic65wfe to be complete
Nov 17 20:05:03.096: INFO: job default/curl-to-elb-jobsottic65wfe is complete, took 10.078967534s
STEP: connecting directly to the external LB service
Nov 17 20:05:03.096: INFO: starting attempts to connect directly to the external LB service
2021/11/17 20:05:03 [DEBUG] GET http://20.62.24.51
2021/11/17 20:05:33 [ERR] GET http://20.62.24.51 request failed: Get "http://20.62.24.51": dial tcp 20.62.24.51:80: i/o timeout
2021/11/17 20:05:33 [DEBUG] GET http://20.62.24.51: retrying in 1s (4 left)
Nov 17 20:05:34.164: INFO: successfully connected to the external LB service
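The GET/ERR/"retrying in 1s (4 left)" lines above follow the pattern of a retrying HTTP client (that log format matches the default logger of hashicorp/go-retryablehttp). A minimal Go sketch of this kind of connectivity probe is shown below; the retry settings are assumed values chosen to mirror the 1s backoff visible in the log, and this is illustrative rather than the suite's actual helper:

    package main

    import (
        "fmt"
        "time"

        retryablehttp "github.com/hashicorp/go-retryablehttp"
    )

    func main() {
        // Retrying client; RetryMax and the wait bounds are assumed, picked to
        // resemble the small retry budget and 1s backoff seen in the log above.
        client := retryablehttp.NewClient()
        client.RetryMax = 5
        client.RetryWaitMin = 1 * time.Second
        client.RetryWaitMax = 30 * time.Second
        client.HTTPClient.Timeout = 30 * time.Second

        // Probe the external LB address from the log until a request succeeds
        // or the retries are exhausted.
        resp, err := client.Get("http://20.62.24.51")
        if err != nil {
            fmt.Println("could not reach the external LB:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("connected to the external LB, status:", resp.Status)
    }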
STEP: deleting the test resources
Nov 17 20:05:34.164: INFO: starting to delete external LB service web8lvbvz-elb
Nov 17 20:05:34.228: INFO: starting to delete deployment web8lvbvz
Nov 17 20:05:34.270: INFO: starting to delete job curl-to-elb-jobsottic65wfe
... skipping 40 lines ...
Nov 17 20:12:26.667: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-mccng5-win-vmss-control-plane-j4tm2

Nov 17 20:12:27.534: INFO: INFO: Collecting logs for node win-p-win000000 in cluster capz-e2e-mccng5-win-vmss in namespace capz-e2e-mccng5

Nov 17 20:12:44.441: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-e2e-mccng5-win-vmss-mp-0

Failed to get logs for machine pool capz-e2e-mccng5-win-vmss-mp-0, cluster capz-e2e-mccng5/capz-e2e-mccng5-win-vmss: [running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1]
Nov 17 20:12:44.751: INFO: INFO: Collecting logs for node win-p-win000000 in cluster capz-e2e-mccng5-win-vmss in namespace capz-e2e-mccng5

Nov 17 20:13:20.980: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set win-p-win

STEP: Dumping workload cluster capz-e2e-mccng5/capz-e2e-mccng5-win-vmss kube-system pod logs
STEP: Fetching kube-system pod logs took 390.486714ms
... skipping 13 lines ...
STEP: Fetching activity logs took 1.10384811s
STEP: Dumping all the Cluster API resources in the "capz-e2e-mccng5" namespace
STEP: Deleting all clusters in the capz-e2e-mccng5 namespace
STEP: Deleting cluster capz-e2e-mccng5-win-vmss
INFO: Waiting for the Cluster capz-e2e-mccng5/capz-e2e-mccng5-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-mccng5-win-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-7tjwb, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-psktl, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-mccng5-win-vmss-control-plane-j4tm2, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-mccng5-win-vmss-control-plane-j4tm2, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-mccng5-win-vmss-control-plane-j4tm2, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-jj2wq, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-92rpb, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-pr2t7, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-mccng5-win-vmss-control-plane-j4tm2, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-4dlzj, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-mccng5
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 38m0s on Ginkgo node 3 of 3

... skipping 3 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating a Windows enabled VMSS cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:542
    with a single control plane node and a Linux AzureMachinePool with 1 node and a Windows AzureMachinePool with 1 node
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543
------------------------------
E1117 20:06:41.798636   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 20:07:20.457154   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 20:08:11.407883   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 20:08:46.229251   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 20:09:43.611864   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 20:10:39.613467   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 20:11:29.343608   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 20:12:23.717178   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 20:12:55.704679   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 20:13:52.678615   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 20:14:47.942555   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 20:15:25.922296   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 20:16:15.795682   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 20:17:13.759831   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 20:18:12.927846   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 20:18:55.403662   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 20:19:46.345406   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 20:20:41.382278   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 20:21:31.867535   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 20:22:04.549111   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 20:22:54.789692   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 20:23:49.580219   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 20:24:47.919519   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 20:25:47.683556   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 20:26:21.805724   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 20:27:04.546228   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 20:27:35.308524   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 20:28:29.203557   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 20:29:06.012134   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 20:29:39.221012   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1117 20:30:13.864390   24117 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rrcg49/events?resourceVersion=8721": dial tcp: lookup capz-e2e-rrcg49-public-custom-vnet-659c0ec5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Workload cluster creation Creating an AKS cluster [It] with a single control plane node and 1 node 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/aks.go:216
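The failing assertion sits at aks.go:216; in Ginkgo/Gomega suites a readiness wait of this kind is typically expressed as an Eventually poll. The sketch below only illustrates that shape under stated assumptions: the helper name, timeout, and poll interval are hypothetical and are not the actual code at that line.

    package e2e

    import (
        "time"

        . "github.com/onsi/gomega"
    )

    // machinePoolsReady is a hypothetical stand-in for whatever check the
    // suite runs against the AKS system machine pools.
    func machinePoolsReady() bool {
        return false
    }

    // waitForSystemMachinePools polls the readiness check until it reports
    // true or the (assumed) timeout elapses; in a real Ginkgo suite the
    // Gomega fail handler registered at suite setup turns a timeout into a
    // spec failure like the one summarized above.
    func waitForSystemMachinePools() {
        Eventually(machinePoolsReady, 30*time.Minute, 15*time.Second).Should(BeTrue())
    }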

Ran 9 of 22 Specs in 6731.141 seconds
FAIL! -- 8 Passed | 1 Failed | 0 Pending | 13 Skipped


Ginkgo ran 1 suite in 1h53m41.805134595s
Test Suite Failed
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...