Result: FAILURE
Tests: 1 failed / 8 succeeded
Started: 2021-11-19 18:34
Elapsed: 1h59m
Revision: release-0.5

Test Failures


capz-e2e Workload cluster creation Creating an AKS cluster with a single control plane node and 1 node 55m59s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\san\sAKS\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454
Timed out after 1200.000s.
System machine pools not ready
Expected
    <bool>: false
to equal
    <bool>: true
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/aks.go:216
stdout/stderr captured in junit.e2e_suite.1.xml



8 passed tests / 13 skipped tests (details collapsed)

Error lines from build-log.txt

... skipping 431 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:269

INFO: "With ipv6 worker node" started at Fri, 19 Nov 2021 18:41:12 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-dteje8" for hosting the cluster
Nov 19 18:41:12.906: INFO: starting to create namespace for hosting the "capz-e2e-dteje8" test spec
2021/11/19 18:41:12 failed trying to get namespace (capz-e2e-dteje8):namespaces "capz-e2e-dteje8" not found
INFO: Creating namespace capz-e2e-dteje8
INFO: Creating event watcher for namespace "capz-e2e-dteje8"
Nov 19 18:41:12.974: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-dteje8-ipv6
INFO: Creating the workload cluster with name "capz-e2e-dteje8-ipv6" using the "ipv6" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
STEP: Fetching activity logs took 1.617905377s
STEP: Dumping all the Cluster API resources in the "capz-e2e-dteje8" namespace
STEP: Deleting all clusters in the capz-e2e-dteje8 namespace
STEP: Deleting cluster capz-e2e-dteje8-ipv6
INFO: Waiting for the Cluster capz-e2e-dteje8/capz-e2e-dteje8-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-dteje8-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-2jx8g, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-dteje8-ipv6-control-plane-tww66, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-dteje8-ipv6-control-plane-qjnwh, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-6pwqj, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-dteje8-ipv6-control-plane-qjnwh, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-mknht, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-dteje8-ipv6-control-plane-tww66, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-jnbvj, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-dteje8-ipv6-control-plane-pdh4q, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-dteje8-ipv6-control-plane-pdh4q, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-lhzhm, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-gbhpt, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-dteje8-ipv6-control-plane-tww66, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-6pjpf, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-vx6hd, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-n58kf, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-s87vq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-dteje8-ipv6-control-plane-qjnwh, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-dteje8-ipv6-control-plane-tww66, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-dteje8-ipv6-control-plane-pdh4q, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-dteje8-ipv6-control-plane-pdh4q, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-cfcnz, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-dteje8-ipv6-control-plane-qjnwh, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-dteje8
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 16m15s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a single control plane node and an AzureMachinePool with 2 nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:315

INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" started at Fri, 19 Nov 2021 18:57:28 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-sepz0m" for hosting the cluster
Nov 19 18:57:28.381: INFO: starting to create namespace for hosting the "capz-e2e-sepz0m" test spec
2021/11/19 18:57:28 failed trying to get namespace (capz-e2e-sepz0m):namespaces "capz-e2e-sepz0m" not found
INFO: Creating namespace capz-e2e-sepz0m
INFO: Creating event watcher for namespace "capz-e2e-sepz0m"
Nov 19 18:57:28.423: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-sepz0m-vmss
INFO: Creating the workload cluster with name "capz-e2e-sepz0m-vmss" using the "machine-pool" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 106 lines ...
STEP: Fetching activity logs took 531.309963ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-sepz0m" namespace
STEP: Deleting all clusters in the capz-e2e-sepz0m namespace
STEP: Deleting cluster capz-e2e-sepz0m-vmss
INFO: Waiting for the Cluster capz-e2e-sepz0m/capz-e2e-sepz0m-vmss to be deleted
STEP: Waiting for cluster capz-e2e-sepz0m-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-w84sn, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-j52nc, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-6xt9t, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-7rssj, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-sepz0m
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" ran for 21m47s on Ginkgo node 2 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:203

INFO: "With 3 control-plane nodes and 2 worker nodes" started at Fri, 19 Nov 2021 18:41:12 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-yepb1z" for hosting the cluster
Nov 19 18:41:12.905: INFO: starting to create namespace for hosting the "capz-e2e-yepb1z" test spec
2021/11/19 18:41:12 failed trying to get namespace (capz-e2e-yepb1z):namespaces "capz-e2e-yepb1z" not found
INFO: Creating namespace capz-e2e-yepb1z
INFO: Creating event watcher for namespace "capz-e2e-yepb1z"
Nov 19 18:41:12.972: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-yepb1z-ha
INFO: Creating the workload cluster with name "capz-e2e-yepb1z-ha" using the "(default)" template (Kubernetes v1.22.1, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 59 lines ...
STEP: waiting for job default/curl-to-elb-job2t9yhfojkv1 to be complete
Nov 19 18:50:40.597: INFO: waiting for job default/curl-to-elb-job2t9yhfojkv1 to be complete
Nov 19 18:50:50.683: INFO: job default/curl-to-elb-job2t9yhfojkv1 is complete, took 10.085514672s
STEP: connecting directly to the external LB service
Nov 19 18:50:50.683: INFO: starting attempts to connect directly to the external LB service
2021/11/19 18:50:50 [DEBUG] GET http://52.151.200.108
2021/11/19 18:51:20 [ERR] GET http://52.151.200.108 request failed: Get "http://52.151.200.108": dial tcp 52.151.200.108:80: i/o timeout
2021/11/19 18:51:20 [DEBUG] GET http://52.151.200.108: retrying in 1s (4 left)
Nov 19 18:51:21.743: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 19 18:51:21.743: INFO: starting to delete external LB service webua3rdt-elb
Nov 19 18:51:21.807: INFO: starting to delete deployment webua3rdt
Nov 19 18:51:21.843: INFO: starting to delete job curl-to-elb-job2t9yhfojkv1
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Nov 19 18:51:21.923: INFO: starting to create dev deployment namespace
2021/11/19 18:51:21 failed trying to get namespace (development):namespaces "development" not found
2021/11/19 18:51:21 namespace development does not exist, creating...
STEP: Creating production namespace
Nov 19 18:51:21.999: INFO: starting to create prod deployment namespace
2021/11/19 18:51:22 failed trying to get namespace (production):namespaces "production" not found
2021/11/19 18:51:22 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
Nov 19 18:51:22.071: INFO: starting to create frontend-prod deployments
Nov 19 18:51:22.109: INFO: starting to create frontend-dev deployments
Nov 19 18:51:22.162: INFO: starting to create backend deployments
Nov 19 18:51:22.214: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Nov 19 18:51:45.315: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.219.197 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 19 18:53:54.831: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
STEP: Applying a network policy to deny egress access in development namespace
Nov 19 18:53:55.518: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.219.197 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.219.197 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 19 18:58:16.897: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Nov 19 18:58:17.107: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.219.196 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 19 19:00:28.044: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
Nov 19 19:00:28.247: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.219.195 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.219.196 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 19 19:04:50.190: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Nov 19 19:04:50.341: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.219.197 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 19 19:07:01.260: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
Nov 19 19:07:01.445: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.219.197 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: Dumping logs from the "capz-e2e-yepb1z-ha" workload cluster
STEP: Dumping workload cluster capz-e2e-yepb1z/capz-e2e-yepb1z-ha logs
Nov 19 19:09:12.750: INFO: INFO: Collecting logs for node capz-e2e-yepb1z-ha-control-plane-5khkn in cluster capz-e2e-yepb1z-ha in namespace capz-e2e-yepb1z

Nov 19 19:09:24.440: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-yepb1z-ha-control-plane-5khkn
... skipping 39 lines ...
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-2dstn, container coredns
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-yepb1z-ha-control-plane-b6km9, container etcd
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-yepb1z-ha-control-plane-5khkn, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-yepb1z-ha-control-plane-b6km9, container kube-scheduler
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-fqrmz, container coredns
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-yepb1z-ha-control-plane-5khkn, container kube-apiserver
STEP: Got error while iterating over activity logs for resource group capz-e2e-yepb1z-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001135101s
STEP: Dumping all the Cluster API resources in the "capz-e2e-yepb1z" namespace
STEP: Deleting all clusters in the capz-e2e-yepb1z namespace
STEP: Deleting cluster capz-e2e-yepb1z-ha
INFO: Waiting for the Cluster capz-e2e-yepb1z/capz-e2e-yepb1z-ha to be deleted
STEP: Waiting for cluster capz-e2e-yepb1z-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-w6kz2, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-yepb1z-ha-control-plane-tftct, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-yepb1z-ha-control-plane-b6km9, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-yepb1z-ha-control-plane-b6km9, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-jnznp, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-s7cq6, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-yepb1z-ha-control-plane-b6km9, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-yepb1z-ha-control-plane-b6km9, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-yepb1z-ha-control-plane-5khkn, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-yepb1z-ha-control-plane-tftct, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vhs2z, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-wn2zn, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-yepb1z-ha-control-plane-tftct, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-yepb1z-ha-control-plane-5khkn, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-2dstn, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-f697p, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-qbphn, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-gjsdb, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-fqrmz, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-z7tqj, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-cs6zf, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-yepb1z-ha-control-plane-tftct, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-yepb1z-ha-control-plane-5khkn, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-rrqq6, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-yepb1z-ha-control-plane-5khkn, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-yepb1z
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 worker nodes" ran for 38m37s on Ginkgo node 3 of 3

... skipping 8 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:141

INFO: "Creates a public management cluster in the same vnet" started at Fri, 19 Nov 2021 18:41:12 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-5o3550" for hosting the cluster
Nov 19 18:41:12.901: INFO: starting to create namespace for hosting the "capz-e2e-5o3550" test spec
2021/11/19 18:41:12 failed trying to get namespace (capz-e2e-5o3550):namespaces "capz-e2e-5o3550" not found
INFO: Creating namespace capz-e2e-5o3550
INFO: Creating event watcher for namespace "capz-e2e-5o3550"
Nov 19 18:41:12.948: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-5o3550-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-wxpcf, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-proxy-gqxc7, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-jh8wq, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-nf557, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-wswlk, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-5o3550-public-custom-vnet-control-plane-975xb, container kube-scheduler
STEP: Got error while iterating over activity logs for resource group capz-e2e-5o3550-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000960091s
STEP: Dumping all the Cluster API resources in the "capz-e2e-5o3550" namespace
STEP: Deleting all clusters in the capz-e2e-5o3550 namespace
STEP: Deleting cluster capz-e2e-5o3550-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-5o3550/capz-e2e-5o3550-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-5o3550-public-custom-vnet to be deleted
W1119 19:28:04.488692   24184 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I1119 19:28:35.921618   24184 trace.go:205] Trace[121081535]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (19-Nov-2021 19:28:05.920) (total time: 30000ms):
Trace[121081535]: [30.000591026s] [30.000591026s] END
E1119 19:28:35.921685   24184 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5o3550-public-custom-vnet-3a4e0e7d.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5o3550/events?resourceVersion=8423": dial tcp 20.83.143.124:6443: i/o timeout
I1119 19:29:08.059991   24184 trace.go:205] Trace[416991315]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (19-Nov-2021 19:28:38.058) (total time: 30001ms):
Trace[416991315]: [30.00133031s] [30.00133031s] END
E1119 19:29:08.060058   24184 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5o3550-public-custom-vnet-3a4e0e7d.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5o3550/events?resourceVersion=8423": dial tcp 20.83.143.124:6443: i/o timeout
I1119 19:29:42.745405   24184 trace.go:205] Trace[265106114]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (19-Nov-2021 19:29:12.744) (total time: 30001ms):
Trace[265106114]: [30.001032529s] [30.001032529s] END
E1119 19:29:42.745462   24184 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5o3550-public-custom-vnet-3a4e0e7d.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5o3550/events?resourceVersion=8423": dial tcp 20.83.143.124:6443: i/o timeout
I1119 19:30:21.012530   24184 trace.go:205] Trace[500952130]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (19-Nov-2021 19:29:51.010) (total time: 30001ms):
Trace[500952130]: [30.001524374s] [30.001524374s] END
E1119 19:30:21.012607   24184 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5o3550-public-custom-vnet-3a4e0e7d.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5o3550/events?resourceVersion=8423": dial tcp 20.83.143.124:6443: i/o timeout
I1119 19:31:06.544904   24184 trace.go:205] Trace[2135147662]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (19-Nov-2021 19:30:36.543) (total time: 30001ms):
Trace[2135147662]: [30.001650874s] [30.001650874s] END
E1119 19:31:06.544968   24184 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5o3550-public-custom-vnet-3a4e0e7d.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5o3550/events?resourceVersion=8423": dial tcp 20.83.143.124:6443: i/o timeout
I1119 19:32:13.677877   24184 trace.go:205] Trace[488373013]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (19-Nov-2021 19:31:43.676) (total time: 30001ms):
Trace[488373013]: [30.001134891s] [30.001134891s] END
E1119 19:32:13.677935   24184 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5o3550-public-custom-vnet-3a4e0e7d.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5o3550/events?resourceVersion=8423": dial tcp 20.83.143.124:6443: i/o timeout
E1119 19:33:07.972721   24184 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5o3550-public-custom-vnet-3a4e0e7d.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5o3550/events?resourceVersion=8423": dial tcp: lookup capz-e2e-5o3550-public-custom-vnet-3a4e0e7d.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-5o3550
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Nov 19 19:33:28.480: INFO: deleting an existing virtual network "custom-vnet"
Nov 19 19:33:38.967: INFO: deleting an existing route table "node-routetable"
E1119 19:33:43.210704   24184 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5o3550-public-custom-vnet-3a4e0e7d.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5o3550/events?resourceVersion=8423": dial tcp: lookup capz-e2e-5o3550-public-custom-vnet-3a4e0e7d.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 19 19:33:49.223: INFO: deleting an existing network security group "node-nsg"
Nov 19 19:33:59.553: INFO: deleting an existing network security group "control-plane-nsg"
Nov 19 19:34:09.870: INFO: verifying the existing resource group "capz-e2e-5o3550-public-custom-vnet" is empty
Nov 19 19:34:11.053: INFO: deleting the existing resource group "capz-e2e-5o3550-public-custom-vnet"
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1119 19:34:35.075901   24184 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5o3550-public-custom-vnet-3a4e0e7d.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5o3550/events?resourceVersion=8423": dial tcp: lookup capz-e2e-5o3550-public-custom-vnet-3a4e0e7d.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1119 19:35:33.143320   24184 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5o3550-public-custom-vnet-3a4e0e7d.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5o3550/events?resourceVersion=8423": dial tcp: lookup capz-e2e-5o3550-public-custom-vnet-3a4e0e7d.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 54m35s on Ginkgo node 1 of 3


• [SLOW TEST:3274.804 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a 1 control plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:419

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Fri, 19 Nov 2021 19:19:50 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-c7g2ln" for hosting the cluster
Nov 19 19:19:50.132: INFO: starting to create namespace for hosting the "capz-e2e-c7g2ln" test spec
2021/11/19 19:19:50 failed trying to get namespace (capz-e2e-c7g2ln):namespaces "capz-e2e-c7g2ln" not found
INFO: Creating namespace capz-e2e-c7g2ln
INFO: Creating event watcher for namespace "capz-e2e-c7g2ln"
Nov 19 19:19:50.165: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-c7g2ln-oot
INFO: Creating the workload cluster with name "capz-e2e-c7g2ln-oot" using the "external-cloud-provider" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 53 lines ...
STEP: waiting for job default/curl-to-elb-jobp82zdwg191u to be complete
Nov 19 19:31:53.566: INFO: waiting for job default/curl-to-elb-jobp82zdwg191u to be complete
Nov 19 19:32:03.629: INFO: job default/curl-to-elb-jobp82zdwg191u is complete, took 10.063141531s
STEP: connecting directly to the external LB service
Nov 19 19:32:03.630: INFO: starting attempts to connect directly to the external LB service
2021/11/19 19:32:03 [DEBUG] GET http://20.81.32.110
2021/11/19 19:32:33 [ERR] GET http://20.81.32.110 request failed: Get "http://20.81.32.110": dial tcp 20.81.32.110:80: i/o timeout
2021/11/19 19:32:33 [DEBUG] GET http://20.81.32.110: retrying in 1s (4 left)
Nov 19 19:32:50.172: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 19 19:32:50.172: INFO: starting to delete external LB service webyysvzv-elb
Nov 19 19:32:50.224: INFO: starting to delete deployment webyysvzv
Nov 19 19:32:50.253: INFO: starting to delete job curl-to-elb-jobp82zdwg191u
... skipping 34 lines ...
STEP: Fetching activity logs took 1.158682025s
STEP: Dumping all the Cluster API resources in the "capz-e2e-c7g2ln" namespace
STEP: Deleting all clusters in the capz-e2e-c7g2ln namespace
STEP: Deleting cluster capz-e2e-c7g2ln-oot
INFO: Waiting for the Cluster capz-e2e-c7g2ln/capz-e2e-c7g2ln-oot to be deleted
STEP: Waiting for cluster capz-e2e-c7g2ln-oot to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-bfrv2, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-nkptz, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-nlcnb, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-c7g2ln-oot-control-plane-ts4jt, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-controller-manager, container cloud-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-thqnf, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-pdqg4, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-kqbq9, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-drqb7, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-hvwxf, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-dnslb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-c7g2ln-oot-control-plane-ts4jt, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-r9f6b, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-tb7gp, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-c7g2ln-oot-control-plane-ts4jt, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-xtsvp, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-c7g2ln-oot-control-plane-ts4jt, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-c7g2ln
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 20m11s on Ginkgo node 3 of 3

... skipping 10 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:377

INFO: "with a single control plane node and 1 node" started at Fri, 19 Nov 2021 19:19:15 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-bet3v1" for hosting the cluster
Nov 19 19:19:15.196: INFO: starting to create namespace for hosting the "capz-e2e-bet3v1" test spec
2021/11/19 19:19:15 failed trying to get namespace (capz-e2e-bet3v1):namespaces "capz-e2e-bet3v1" not found
INFO: Creating namespace capz-e2e-bet3v1
INFO: Creating event watcher for namespace "capz-e2e-bet3v1"
Nov 19 19:19:15.232: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-bet3v1-gpu
INFO: Creating the workload cluster with name "capz-e2e-bet3v1-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 58 lines ...
STEP: Fetching activity logs took 492.25402ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-bet3v1" namespace
STEP: Deleting all clusters in the capz-e2e-bet3v1 namespace
STEP: Deleting cluster capz-e2e-bet3v1-gpu
INFO: Waiting for the Cluster capz-e2e-bet3v1/capz-e2e-bet3v1-gpu to be deleted
STEP: Waiting for cluster capz-e2e-bet3v1-gpu to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-l7kjs, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-fvsjt, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-bet3v1
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and 1 node" ran for 24m43s on Ginkgo node 2 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Fri, 19 Nov 2021 19:40:00 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-uto7dm" for hosting the cluster
Nov 19 19:40:00.690: INFO: starting to create namespace for hosting the "capz-e2e-uto7dm" test spec
2021/11/19 19:40:00 failed trying to get namespace (capz-e2e-uto7dm):namespaces "capz-e2e-uto7dm" not found
INFO: Creating namespace capz-e2e-uto7dm
INFO: Creating event watcher for namespace "capz-e2e-uto7dm"
Nov 19 19:40:00.733: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-uto7dm-win-ha
INFO: Creating the workload cluster with name "capz-e2e-uto7dm-win-ha" using the "windows" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 91 lines ...
STEP: waiting for job default/curl-to-elb-job1y8761udine to be complete
Nov 19 19:54:46.637: INFO: waiting for job default/curl-to-elb-job1y8761udine to be complete
Nov 19 19:54:56.707: INFO: job default/curl-to-elb-job1y8761udine is complete, took 10.069738351s
STEP: connecting directly to the external LB service
Nov 19 19:54:56.707: INFO: starting attempts to connect directly to the external LB service
2021/11/19 19:54:56 [DEBUG] GET http://20.81.110.72
2021/11/19 19:55:26 [ERR] GET http://20.81.110.72 request failed: Get "http://20.81.110.72": dial tcp 20.81.110.72:80: i/o timeout
2021/11/19 19:55:26 [DEBUG] GET http://20.81.110.72: retrying in 1s (4 left)
Nov 19 19:55:30.810: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 19 19:55:30.810: INFO: starting to delete external LB service web-windows0e7zpc-elb
Nov 19 19:55:30.889: INFO: starting to delete deployment web-windows0e7zpc
Nov 19 19:55:30.930: INFO: starting to delete job curl-to-elb-job1y8761udine
... skipping 43 lines ...
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-uto7dm-win-ha-control-plane-jfw99, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-uto7dm-win-ha-control-plane-bjrfg, container etcd
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-chnz7, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-uto7dm-win-ha-control-plane-jfw99, container kube-apiserver
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-uto7dm-win-ha-control-plane-jfw99, container etcd
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-vg6nx, container coredns
STEP: Got error while iterating over activity logs for resource group capz-e2e-uto7dm-win-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000537682s
STEP: Dumping all the Cluster API resources in the "capz-e2e-uto7dm" namespace
STEP: Deleting all clusters in the capz-e2e-uto7dm namespace
STEP: Deleting cluster capz-e2e-uto7dm-win-ha
INFO: Waiting for the Cluster capz-e2e-uto7dm/capz-e2e-uto7dm-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-uto7dm-win-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-zgm5f, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-pdz4q, container kube-flannel: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-uto7dm
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 34m55s on Ginkgo node 3 of 3

... skipping 12 lines ...
  with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543

INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Fri, 19 Nov 2021 19:43:57 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-mmfkwk" for hosting the cluster
Nov 19 19:43:57.961: INFO: starting to create namespace for hosting the "capz-e2e-mmfkwk" test spec
2021/11/19 19:43:57 failed trying to get namespace (capz-e2e-mmfkwk):namespaces "capz-e2e-mmfkwk" not found
INFO: Creating namespace capz-e2e-mmfkwk
INFO: Creating event watcher for namespace "capz-e2e-mmfkwk"
Nov 19 19:43:57.993: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-mmfkwk-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-mmfkwk-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 53 lines ...
STEP: waiting for job default/curl-to-elb-jobznw7snlvz8c to be complete
Nov 19 19:55:00.478: INFO: waiting for job default/curl-to-elb-jobznw7snlvz8c to be complete
Nov 19 19:55:10.541: INFO: job default/curl-to-elb-jobznw7snlvz8c is complete, took 10.063357536s
STEP: connecting directly to the external LB service
Nov 19 19:55:10.541: INFO: starting attempts to connect directly to the external LB service
2021/11/19 19:55:10 [DEBUG] GET http://20.81.110.12
2021/11/19 19:55:40 [ERR] GET http://20.81.110.12 request failed: Get "http://20.81.110.12": dial tcp 20.81.110.12:80: i/o timeout
2021/11/19 19:55:40 [DEBUG] GET http://20.81.110.12: retrying in 1s (4 left)
Nov 19 19:55:41.604: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 19 19:55:41.604: INFO: starting to delete external LB service web0u3ypi-elb
Nov 19 19:55:41.667: INFO: starting to delete deployment web0u3ypi
Nov 19 19:55:41.700: INFO: starting to delete job curl-to-elb-jobznw7snlvz8c
... skipping 25 lines ...
STEP: waiting for job default/curl-to-elb-job8m41dvh30j8 to be complete
Nov 19 20:04:24.096: INFO: waiting for job default/curl-to-elb-job8m41dvh30j8 to be complete
Nov 19 20:04:34.162: INFO: job default/curl-to-elb-job8m41dvh30j8 is complete, took 10.066675257s
STEP: connecting directly to the external LB service
Nov 19 20:04:34.163: INFO: starting attempts to connect directly to the external LB service
2021/11/19 20:04:34 [DEBUG] GET http://20.81.110.225
2021/11/19 20:05:04 [ERR] GET http://20.81.110.225 request failed: Get "http://20.81.110.225": dial tcp 20.81.110.225:80: i/o timeout
2021/11/19 20:05:04 [DEBUG] GET http://20.81.110.225: retrying in 1s (4 left)
Nov 19 20:05:05.222: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 19 20:05:05.222: INFO: starting to delete external LB service web-windowsfthu5e-elb
Nov 19 20:05:05.272: INFO: starting to delete deployment web-windowsfthu5e
Nov 19 20:05:05.305: INFO: starting to delete job curl-to-elb-job8m41dvh30j8
... skipping 29 lines ...
STEP: Fetching activity logs took 1.210578007s
STEP: Dumping all the Cluster API resources in the "capz-e2e-mmfkwk" namespace
STEP: Deleting all clusters in the capz-e2e-mmfkwk namespace
STEP: Deleting cluster capz-e2e-mmfkwk-win-vmss
INFO: Waiting for the Cluster capz-e2e-mmfkwk/capz-e2e-mmfkwk-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-mmfkwk-win-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-zwgzc, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-9zf7h, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-mmfkwk
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 35m26s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454

INFO: "with a single control plane node and 1 node" started at Fri, 19 Nov 2021 19:35:47 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-ms8duz" for hosting the cluster
Nov 19 19:35:47.709: INFO: starting to create namespace for hosting the "capz-e2e-ms8duz" test spec
2021/11/19 19:35:47 failed trying to get namespace (capz-e2e-ms8duz):namespaces "capz-e2e-ms8duz" not found
INFO: Creating namespace capz-e2e-ms8duz
INFO: Creating event watcher for namespace "capz-e2e-ms8duz"
Nov 19 19:35:47.756: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-ms8duz-aks
INFO: Creating the workload cluster with name "capz-e2e-ms8duz-aks" using the "aks-multi-tenancy" template (Kubernetes v1.19.13, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 7 lines ...
machinepool.cluster.x-k8s.io/agentpool1 created
azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1119 19:36:25.805239   24184 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5o3550-public-custom-vnet-3a4e0e7d.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5o3550/events?resourceVersion=8423": dial tcp: lookup capz-e2e-5o3550-public-custom-vnet-3a4e0e7d.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 5 lines ...
INFO: Waiting for control plane to be initialized
Nov 19 19:40:19.155: INFO: Waiting for the first control plane machine managed by capz-e2e-ms8duz/capz-e2e-ms8duz-aks to be provisioned
STEP: Waiting for atleast one control plane node to exist
E1119 19:40:38.215395   24184 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5o3550-public-custom-vnet-3a4e0e7d.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5o3550/events?resourceVersion=8423": dial tcp: lookup capz-e2e-5o3550-public-custom-vnet-3a4e0e7d.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 26 lines ...
STEP: Dumping logs from the "capz-e2e-ms8duz-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-ms8duz/capz-e2e-ms8duz-aks logs
STEP: Dumping workload cluster capz-e2e-ms8duz/capz-e2e-ms8duz-aks kube-system pod logs
STEP: Fetching kube-system pod logs took 415.431955ms
STEP: Dumping workload cluster capz-e2e-ms8duz/capz-e2e-ms8duz-aks Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-84d976c568-t69wf, container coredns
... skipping 10 lines ...
STEP: Fetching activity logs took 839.731976ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-ms8duz" namespace
STEP: Deleting all clusters in the capz-e2e-ms8duz namespace
STEP: Deleting cluster capz-e2e-ms8duz-aks
INFO: Waiting for the Cluster capz-e2e-ms8duz/capz-e2e-ms8duz-aks to be deleted
STEP: Waiting for cluster capz-e2e-ms8duz-aks to be deleted
E1119 20:00:25.657449   24184 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5o3550-public-custom-vnet-3a4e0e7d.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5o3550/events?resourceVersion=8423": dial tcp: lookup capz-e2e-5o3550-public-custom-vnet-3a4e0e7d.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 38 lines ...
E1119 20:30:01.060167   24184 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5o3550-public-custom-vnet-3a4e0e7d.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5o3550/events?resourceVersion=8423": dial tcp: lookup capz-e2e-5o3550-public-custom-vnet-3a4e0e7d.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Redacting sensitive information from logs
E1119 20:30:37.502328   24184 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5o3550-public-custom-vnet-3a4e0e7d.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5o3550/events?resourceVersion=8423": dial tcp: lookup capz-e2e-5o3550-public-custom-vnet-3a4e0e7d.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1119 20:31:25.872741   24184 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5o3550-public-custom-vnet-3a4e0e7d.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5o3550/events?resourceVersion=8423": dial tcp: lookup capz-e2e-5o3550-public-custom-vnet-3a4e0e7d.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host


• Failure [3359.994 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating an AKS cluster
... skipping 55 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Workload cluster creation Creating an AKS cluster [It] with a single control plane node and 1 node 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/aks.go:216

Ran 9 of 22 Specs in 6752.657 seconds
FAIL! -- 8 Passed | 1 Failed | 0 Pending | 13 Skipped


Ginkgo ran 1 suite in 1h53m54.815280556s
Test Suite Failed
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...