Result: FAILURE
Tests: 1 failed / 8 succeeded
Started: 2021-11-13 18:32
Elapsed: 1h46m
Revision: release-0.5

Test Failures


capz-e2e Workload cluster creation Creating an AKS cluster with a single control plane node and 1 node 35m34s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\san\sAKS\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454
Timed out after 1200.000s.
System machine pools not ready
Expected
    <bool>: false
to equal
    <bool>: true
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/aks.go:216
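The assertion at test/e2e/aks.go:216 is a Gomega readiness poll against the AKS system machine pools. The real helper is not reproduced in this log, so the sketch below is only an assumption about its shape: the names machinePoolsReady and waitForSystemMachinePools are hypothetical stand-ins. A poll of this form yields the "Timed out after 1200.000s" / "Expected <bool>: false to equal <bool>: true" output above when the pools never report ready within the 20-minute window.

// Sketch only: a hypothetical reconstruction of the failing readiness poll,
// not the actual code in test/e2e/aks.go. Assumes the Ginkgo fail handler is
// registered, as it is inside the e2e suite.
package e2e

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
)

// machinePoolsReady is a hypothetical stand-in for the real check against the
// AKS system machine pools on the management cluster.
func machinePoolsReady(ctx context.Context) bool {
	// ...list the machine pool objects and check their ready status...
	return false
}

// waitForSystemMachinePools polls every 10s for up to 20 minutes (1200s); on
// timeout Gomega prints the "Expected <bool>: false to equal <bool>: true"
// failure seen above, annotated with "System machine pools not ready".
func waitForSystemMachinePools(ctx context.Context) {
	Eventually(func() bool {
		return machinePoolsReady(ctx)
	}, 20*time.Minute, 10*time.Second).Should(Equal(true), "System machine pools not ready")
}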
				
Full stdout/stderr for this failure is in junit.e2e_suite.1.xml.



Passed tests: 8

Skipped tests: 13

Error lines from build-log.txt

... skipping 429 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:269

INFO: "With ipv6 worker node" started at Sat, 13 Nov 2021 18:39:15 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-c7rglp" for hosting the cluster
Nov 13 18:39:15.396: INFO: starting to create namespace for hosting the "capz-e2e-c7rglp" test spec
2021/11/13 18:39:15 failed trying to get namespace (capz-e2e-c7rglp):namespaces "capz-e2e-c7rglp" not found
INFO: Creating namespace capz-e2e-c7rglp
INFO: Creating event watcher for namespace "capz-e2e-c7rglp"
Nov 13 18:39:15.466: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-c7rglp-ipv6
INFO: Creating the workload cluster with name "capz-e2e-c7rglp-ipv6" using the "ipv6" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
STEP: Fetching activity logs took 615.11083ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-c7rglp" namespace
STEP: Deleting all clusters in the capz-e2e-c7rglp namespace
STEP: Deleting cluster capz-e2e-c7rglp-ipv6
INFO: Waiting for the Cluster capz-e2e-c7rglp/capz-e2e-c7rglp-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-c7rglp-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-7n2sm, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-mzg7m, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-c7rglp-ipv6-control-plane-7j4jg, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-p55rl, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-s7grk, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-c7rglp-ipv6-control-plane-klmql, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-c7rglp-ipv6-control-plane-klmql, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-c7rglp-ipv6-control-plane-v5dds, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-mvr9j, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-c7rglp-ipv6-control-plane-klmql, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-c7rglp-ipv6-control-plane-v5dds, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-c7rglp-ipv6-control-plane-v5dds, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-c7rglp-ipv6-control-plane-v5dds, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-fwgk5, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-c7rglp-ipv6-control-plane-7j4jg, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-5vwz7, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-c7rglp-ipv6-control-plane-7j4jg, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-c7rglp-ipv6-control-plane-7j4jg, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-c7rglp-ipv6-control-plane-klmql, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-qfmg6, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-sqkl5, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-dftrd, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-2k22c, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-c7rglp
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 19m43s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a single control plane node and an AzureMachinePool with 2 nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:315

INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" started at Sat, 13 Nov 2021 18:58:58 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-0zhbcg" for hosting the cluster
Nov 13 18:58:58.723: INFO: starting to create namespace for hosting the "capz-e2e-0zhbcg" test spec
2021/11/13 18:58:58 failed trying to get namespace (capz-e2e-0zhbcg):namespaces "capz-e2e-0zhbcg" not found
INFO: Creating namespace capz-e2e-0zhbcg
INFO: Creating event watcher for namespace "capz-e2e-0zhbcg"
Nov 13 18:58:58.761: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-0zhbcg-vmss
INFO: Creating the workload cluster with name "capz-e2e-0zhbcg-vmss" using the "machine-pool" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 106 lines ...
STEP: Fetching activity logs took 581.061555ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-0zhbcg" namespace
STEP: Deleting all clusters in the capz-e2e-0zhbcg namespace
STEP: Deleting cluster capz-e2e-0zhbcg-vmss
INFO: Waiting for the Cluster capz-e2e-0zhbcg/capz-e2e-0zhbcg-vmss to be deleted
STEP: Waiting for cluster capz-e2e-0zhbcg-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-hw567, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-h4hdv, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-ksnc9, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-0zhbcg-vmss-control-plane-hp27d, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-7zzrc, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-0zhbcg-vmss-control-plane-hp27d, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-0zhbcg-vmss-control-plane-hp27d, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-pccqp, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-0zhbcg-vmss-control-plane-hp27d, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-dpwvb, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-k7t6h, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-hxpwd, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-bkwcz, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-0zhbcg
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" ran for 18m22s on Ginkgo node 2 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:203

INFO: "With 3 control-plane nodes and 2 worker nodes" started at Sat, 13 Nov 2021 18:39:15 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-rh93sj" for hosting the cluster
Nov 13 18:39:15.393: INFO: starting to create namespace for hosting the "capz-e2e-rh93sj" test spec
2021/11/13 18:39:15 failed trying to get namespace (capz-e2e-rh93sj):namespaces "capz-e2e-rh93sj" not found
INFO: Creating namespace capz-e2e-rh93sj
INFO: Creating event watcher for namespace "capz-e2e-rh93sj"
Nov 13 18:39:15.464: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-rh93sj-ha
INFO: Creating the workload cluster with name "capz-e2e-rh93sj-ha" using the "(default)" template (Kubernetes v1.22.1, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 67 lines ...
Nov 13 18:49:11.150: INFO: starting to delete external LB service webd6wsq8-elb
Nov 13 18:49:11.242: INFO: starting to delete deployment webd6wsq8
Nov 13 18:49:11.283: INFO: starting to delete job curl-to-elb-jobescxy1fyulr
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Nov 13 18:49:11.363: INFO: starting to create dev deployment namespace
2021/11/13 18:49:11 failed trying to get namespace (development):namespaces "development" not found
2021/11/13 18:49:11 namespace development does not exist, creating...
STEP: Creating production namespace
Nov 13 18:49:11.456: INFO: starting to create prod deployment namespace
2021/11/13 18:49:11 failed trying to get namespace (production):namespaces "production" not found
2021/11/13 18:49:11 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
Nov 13 18:49:11.545: INFO: starting to create frontend-prod deployments
Nov 13 18:49:11.588: INFO: starting to create frontend-dev deployments
Nov 13 18:49:11.628: INFO: starting to create backend deployments
Nov 13 18:49:11.682: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Nov 13 18:49:34.737: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.109.68 port 80: Connection timed out
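For readers following the network-policy steps, the sketch below shows what a deny-ingress policy such as development/backend-deny-ingress plausibly looks like when expressed with client-go types. The manifest actually applied by the test is not included in this log, so every field value here is inferred from the step description (pods labelled app: webapp, role: backend in the development namespace) rather than taken from the real template.

package e2e

import (
	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// backendDenyIngressPolicy is a hypothetical reconstruction of the
// development/backend-deny-ingress policy: selecting the backend pods and
// declaring the Ingress policy type with no ingress rules denies all inbound
// traffic, which is why the curl to 192.168.109.68:80 above times out.
func backendDenyIngressPolicy() *networkingv1.NetworkPolicy {
	return &networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "backend-deny-ingress",
			Namespace: "development",
		},
		Spec: networkingv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "webapp", "role": "backend"},
			},
			PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
			// no Ingress rules listed: all ingress to the selected pods is denied
		},
	}
}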

STEP: Cleaning up after ourselves
Nov 13 18:51:46.053: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
STEP: Applying a network policy to deny egress access in development namespace
Nov 13 18:51:46.230: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.109.68 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.109.68 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 13 18:56:08.193: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Nov 13 18:56:08.373: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.109.69 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 13 18:58:19.269: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
Nov 13 18:58:19.445: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.109.66 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.109.69 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 13 19:02:41.409: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Nov 13 19:02:41.695: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.109.68 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 13 19:04:52.483: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
Nov 13 19:04:52.656: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.109.68 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: Dumping logs from the "capz-e2e-rh93sj-ha" workload cluster
STEP: Dumping workload cluster capz-e2e-rh93sj/capz-e2e-rh93sj-ha logs
Nov 13 19:07:03.944: INFO: INFO: Collecting logs for node capz-e2e-rh93sj-ha-control-plane-826vr in cluster capz-e2e-rh93sj-ha in namespace capz-e2e-rh93sj

Nov 13 19:07:18.815: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-rh93sj-ha-control-plane-826vr
... skipping 39 lines ...
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-rh93sj-ha-control-plane-26wps, container etcd
STEP: Creating log watcher for controller kube-system/calico-node-q78j6, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-m5spj, container calico-node
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-rh93sj-ha-control-plane-826vr, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-6zknw, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-2jdq8, container coredns
STEP: Got error while iterating over activity logs for resource group capz-e2e-rh93sj-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000939188s
STEP: Dumping all the Cluster API resources in the "capz-e2e-rh93sj" namespace
STEP: Deleting all clusters in the capz-e2e-rh93sj namespace
STEP: Deleting cluster capz-e2e-rh93sj-ha
INFO: Waiting for the Cluster capz-e2e-rh93sj/capz-e2e-rh93sj-ha to be deleted
STEP: Waiting for cluster capz-e2e-rh93sj-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-pbkm4, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-sfjbr, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-6922k, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-rh93sj-ha-control-plane-826vr, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-c78qc, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-rh93sj-ha-control-plane-826vr, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-rh93sj-ha-control-plane-826vr, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-2jdq8, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-4s8ln, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-rh93sj-ha-control-plane-826vr, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-q78j6, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-6zknw, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-kjnrc, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-rh93sj
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 worker nodes" ran for 43m36s on Ginkgo node 3 of 3

... skipping 8 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:141

INFO: "Creates a public management cluster in the same vnet" started at Sat, 13 Nov 2021 18:39:15 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-f7g2uf" for hosting the cluster
Nov 13 18:39:15.374: INFO: starting to create namespace for hosting the "capz-e2e-f7g2uf" test spec
2021/11/13 18:39:15 failed trying to get namespace (capz-e2e-f7g2uf):namespaces "capz-e2e-f7g2uf" not found
INFO: Creating namespace capz-e2e-f7g2uf
INFO: Creating event watcher for namespace "capz-e2e-f7g2uf"
Nov 13 18:39:15.414: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-f7g2uf-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-f7g2uf-public-custom-vnet-control-plane-gj99m, container etcd
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-mk9t8, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-xq8f6, container calico-node
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-7zw68, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-f7g2uf-public-custom-vnet-control-plane-gj99m, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-f7g2uf-public-custom-vnet-control-plane-gj99m, container kube-scheduler
STEP: Got error while iterating over activity logs for resource group capz-e2e-f7g2uf-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000655129s
STEP: Dumping all the Cluster API resources in the "capz-e2e-f7g2uf" namespace
STEP: Deleting all clusters in the capz-e2e-f7g2uf namespace
STEP: Deleting cluster capz-e2e-f7g2uf-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-f7g2uf/capz-e2e-f7g2uf-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-f7g2uf-public-custom-vnet to be deleted
W1113 19:23:43.135008   24096 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I1113 19:24:15.345027   24096 trace.go:205] Trace[1379436901]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (13-Nov-2021 19:23:45.343) (total time: 30001ms):
Trace[1379436901]: [30.001193765s] [30.001193765s] END
E1113 19:24:15.345085   24096 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f7g2uf/events?resourceVersion=9429": dial tcp 20.120.125.148:6443: i/o timeout
I1113 19:24:50.059637   24096 trace.go:205] Trace[671157570]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (13-Nov-2021 19:24:20.058) (total time: 30000ms):
Trace[671157570]: [30.00092034s] [30.00092034s] END
E1113 19:24:50.059718   24096 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f7g2uf/events?resourceVersion=9429": dial tcp 20.120.125.148:6443: i/o timeout
I1113 19:25:31.647544   24096 trace.go:205] Trace[350825123]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (13-Nov-2021 19:25:01.646) (total time: 30001ms):
Trace[350825123]: [30.001110134s] [30.001110134s] END
E1113 19:25:31.647609   24096 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f7g2uf/events?resourceVersion=9429": dial tcp 20.120.125.148:6443: i/o timeout
I1113 19:26:22.828000   24096 trace.go:205] Trace[993101156]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (13-Nov-2021 19:25:52.826) (total time: 30001ms):
Trace[993101156]: [30.001286498s] [30.001286498s] END
E1113 19:26:22.828067   24096 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f7g2uf/events?resourceVersion=9429": dial tcp 20.120.125.148:6443: i/o timeout
I1113 19:27:22.791686   24096 trace.go:205] Trace[1175765054]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (13-Nov-2021 19:26:52.790) (total time: 30001ms):
Trace[1175765054]: [30.001386983s] [30.001386983s] END
E1113 19:27:22.791751   24096 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f7g2uf/events?resourceVersion=9429": dial tcp 20.120.125.148:6443: i/o timeout
I1113 19:28:47.368772   24096 trace.go:205] Trace[699636561]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (13-Nov-2021 19:28:17.367) (total time: 30000ms):
Trace[699636561]: [30.0007828s] [30.0007828s] END
E1113 19:28:47.368858   24096 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f7g2uf/events?resourceVersion=9429": dial tcp 20.120.125.148:6443: i/o timeout
E1113 19:29:19.177053   24096 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f7g2uf/events?resourceVersion=9429": dial tcp: lookup capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-f7g2uf
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Nov 13 19:29:29.612: INFO: deleting an existing virtual network "custom-vnet"
Nov 13 19:29:40.245: INFO: deleting an existing route table "node-routetable"
Nov 13 19:29:50.582: INFO: deleting an existing network security group "node-nsg"
Nov 13 19:30:01.044: INFO: deleting an existing network security group "control-plane-nsg"
Nov 13 19:30:11.370: INFO: verifying the existing resource group "capz-e2e-f7g2uf-public-custom-vnet" is empty
Nov 13 19:30:11.658: INFO: failed GETing resource "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-e2e-f7g2uf-public-custom-vnet/providers/Microsoft.Network/privateDnsZones/capz-e2e-nsecpi-private.capz.io" with resources.Client#GetByID: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="NoRegisteredProviderFound" Message="No registered resource provider found for location 'global' and API version '2021-02-01' for type 'privateDnsZones'. The supported api-versions are '2018-09-01, 2020-01-01, 2020-06-01'. The supported locations are ', global'."
E1113 19:30:17.938330   24096 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f7g2uf/events?resourceVersion=9429": dial tcp: lookup capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 13 19:30:21.932: INFO: failed GETing resource "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-e2e-f7g2uf-public-custom-vnet/providers/Microsoft.Network/privateDnsZones/capz-e2e-nsecpi-private.capz.io" with resources.Client#GetByID: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="NoRegisteredProviderFound" Message="No registered resource provider found for location 'global' and API version '2021-02-01' for type 'privateDnsZones'. The supported api-versions are '2018-09-01, 2020-01-01, 2020-06-01'. The supported locations are ', global'."
Nov 13 19:30:32.165: INFO: failed GETing resource "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-e2e-f7g2uf-public-custom-vnet/providers/Microsoft.Network/privateDnsZones/capz-e2e-nsecpi-private.capz.io" with resources.Client#GetByID: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="NoRegisteredProviderFound" Message="No registered resource provider found for location 'global' and API version '2021-02-01' for type 'privateDnsZones'. The supported api-versions are '2018-09-01, 2020-01-01, 2020-06-01'. The supported locations are ', global'."
Nov 13 19:30:42.510: INFO: failed GETing resource "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-e2e-f7g2uf-public-custom-vnet/providers/Microsoft.Network/privateDnsZones/capz-e2e-nsecpi-private.capz.io" with resources.Client#GetByID: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="NoRegisteredProviderFound" Message="No registered resource provider found for location 'global' and API version '2021-02-01' for type 'privateDnsZones'. The supported api-versions are '2018-09-01, 2020-01-01, 2020-06-01'. The supported locations are ', global'."
Nov 13 19:30:52.741: INFO: failed GETing resource "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-e2e-f7g2uf-public-custom-vnet/providers/Microsoft.Network/privateDnsZones/capz-e2e-nsecpi-private.capz.io" with resources.Client#GetByID: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="NoRegisteredProviderFound" Message="No registered resource provider found for location 'global' and API version '2021-02-01' for type 'privateDnsZones'. The supported api-versions are '2018-09-01, 2020-01-01, 2020-06-01'. The supported locations are ', global'."
Nov 13 19:31:02.983: INFO: failed GETing resource "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-e2e-f7g2uf-public-custom-vnet/providers/Microsoft.Network/privateDnsZones/capz-e2e-nsecpi-private.capz.io" with resources.Client#GetByID: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="NoRegisteredProviderFound" Message="No registered resource provider found for location 'global' and API version '2021-02-01' for type 'privateDnsZones'. The supported api-versions are '2018-09-01, 2020-01-01, 2020-06-01'. The supported locations are ', global'."
E1113 19:31:04.328729   24096 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f7g2uf/events?resourceVersion=9429": dial tcp: lookup capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 13 19:31:13.222: INFO: failed GETing resource "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-e2e-f7g2uf-public-custom-vnet/providers/Microsoft.Network/privateDnsZones/capz-e2e-nsecpi-private.capz.io" with resources.Client#GetByID: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="NoRegisteredProviderFound" Message="No registered resource provider found for location 'global' and API version '2021-02-01' for type 'privateDnsZones'. The supported api-versions are '2018-09-01, 2020-01-01, 2020-06-01'. The supported locations are ', global'."
Nov 13 19:31:23.452: INFO: failed GETing resource "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-e2e-f7g2uf-public-custom-vnet/providers/Microsoft.Network/privateDnsZones/capz-e2e-nsecpi-private.capz.io" with resources.Client#GetByID: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="NoRegisteredProviderFound" Message="No registered resource provider found for location 'global' and API version '2021-02-01' for type 'privateDnsZones'. The supported api-versions are '2018-09-01, 2020-01-01, 2020-06-01'. The supported locations are ', global'."
Nov 13 19:31:33.712: INFO: failed GETing resource "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-e2e-f7g2uf-public-custom-vnet/providers/Microsoft.Network/privateDnsZones/capz-e2e-nsecpi-private.capz.io" with resources.Client#GetByID: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="NoRegisteredProviderFound" Message="No registered resource provider found for location 'global' and API version '2021-02-01' for type 'privateDnsZones'. The supported api-versions are '2018-09-01, 2020-01-01, 2020-06-01'. The supported locations are ', global'."
Nov 13 19:31:43.957: INFO: failed GETing resource "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-e2e-f7g2uf-public-custom-vnet/providers/Microsoft.Network/privateDnsZones/capz-e2e-nsecpi-private.capz.io" with resources.Client#GetByID: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="NoRegisteredProviderFound" Message="No registered resource provider found for location 'global' and API version '2021-02-01' for type 'privateDnsZones'. The supported api-versions are '2018-09-01, 2020-01-01, 2020-06-01'. The supported locations are ', global'."
Nov 13 19:31:54.190: INFO: failed GETing resource "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-e2e-f7g2uf-public-custom-vnet/providers/Microsoft.Network/privateDnsZones/capz-e2e-nsecpi-private.capz.io" with resources.Client#GetByID: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="NoRegisteredProviderFound" Message="No registered resource provider found for location 'global' and API version '2021-02-01' for type 'privateDnsZones'. The supported api-versions are '2018-09-01, 2020-01-01, 2020-06-01'. The supported locations are ', global'."
E1113 19:31:56.605011   24096 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f7g2uf/events?resourceVersion=9429": dial tcp: lookup capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 13 19:32:04.418: INFO: failed GETing resource "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-e2e-f7g2uf-public-custom-vnet/providers/Microsoft.Network/privateDnsZones/capz-e2e-nsecpi-private.capz.io" with resources.Client#GetByID: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="NoRegisteredProviderFound" Message="No registered resource provider found for location 'global' and API version '2021-02-01' for type 'privateDnsZones'. The supported api-versions are '2018-09-01, 2020-01-01, 2020-06-01'. The supported locations are ', global'."
Nov 13 19:32:14.652: INFO: failed GETing resource "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-e2e-f7g2uf-public-custom-vnet/providers/Microsoft.Network/privateDnsZones/capz-e2e-nsecpi-private.capz.io" with resources.Client#GetByID: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="NoRegisteredProviderFound" Message="No registered resource provider found for location 'global' and API version '2021-02-01' for type 'privateDnsZones'. The supported api-versions are '2018-09-01, 2020-01-01, 2020-06-01'. The supported locations are ', global'."
Nov 13 19:32:25.036: INFO: failed GETing resource "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-e2e-f7g2uf-public-custom-vnet/providers/Microsoft.Network/privateDnsZones/capz-e2e-nsecpi-private.capz.io" with resources.Client#GetByID: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="NoRegisteredProviderFound" Message="No registered resource provider found for location 'global' and API version '2021-02-01' for type 'privateDnsZones'. The supported api-versions are '2018-09-01, 2020-01-01, 2020-06-01'. The supported locations are ', global'."
Nov 13 19:32:35.271: INFO: failed GETing resource "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-e2e-f7g2uf-public-custom-vnet/providers/Microsoft.Network/privateDnsZones/capz-e2e-nsecpi-private.capz.io" with resources.Client#GetByID: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="NoRegisteredProviderFound" Message="No registered resource provider found for location 'global' and API version '2021-02-01' for type 'privateDnsZones'. The supported api-versions are '2018-09-01, 2020-01-01, 2020-06-01'. The supported locations are ', global'."
E1113 19:32:45.022403   24096 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f7g2uf/events?resourceVersion=9429": dial tcp: lookup capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 13 19:32:45.621: INFO: failed GETing resource "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-e2e-f7g2uf-public-custom-vnet/providers/Microsoft.Network/privateDnsZones/capz-e2e-nsecpi-private.capz.io" with resources.Client#GetByID: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="NoRegisteredProviderFound" Message="No registered resource provider found for location 'global' and API version '2021-02-01' for type 'privateDnsZones'. The supported api-versions are '2018-09-01, 2020-01-01, 2020-06-01'. The supported locations are ', global'."
Nov 13 19:32:55.880: INFO: failed GETing resource "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-e2e-f7g2uf-public-custom-vnet/providers/Microsoft.Network/privateDnsZones/capz-e2e-nsecpi-private.capz.io" with resources.Client#GetByID: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="NoRegisteredProviderFound" Message="No registered resource provider found for location 'global' and API version '2021-02-01' for type 'privateDnsZones'. The supported api-versions are '2018-09-01, 2020-01-01, 2020-06-01'. The supported locations are ', global'."
Nov 13 19:33:06.118: INFO: failed GETing resource "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-e2e-f7g2uf-public-custom-vnet/providers/Microsoft.Network/privateDnsZones/capz-e2e-nsecpi-private.capz.io" with resources.Client#GetByID: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="NoRegisteredProviderFound" Message="No registered resource provider found for location 'global' and API version '2021-02-01' for type 'privateDnsZones'. The supported api-versions are '2018-09-01, 2020-01-01, 2020-06-01'. The supported locations are ', global'."
Nov 13 19:33:16.466: INFO: failed GETing resource "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-e2e-f7g2uf-public-custom-vnet/providers/Microsoft.Network/privateDnsZones/capz-e2e-nsecpi-private.capz.io" with resources.Client#GetByID: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="NoRegisteredProviderFound" Message="No registered resource provider found for location 'global' and API version '2021-02-01' for type 'privateDnsZones'. The supported api-versions are '2018-09-01, 2020-01-01, 2020-06-01'. The supported locations are ', global'."
Nov 13 19:33:26.719: INFO: failed GETing resource "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-e2e-f7g2uf-public-custom-vnet/providers/Microsoft.Network/privateDnsZones/capz-e2e-nsecpi-private.capz.io" with resources.Client#GetByID: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="NoRegisteredProviderFound" Message="No registered resource provider found for location 'global' and API version '2021-02-01' for type 'privateDnsZones'. The supported api-versions are '2018-09-01, 2020-01-01, 2020-06-01'. The supported locations are ', global'."
Nov 13 19:33:36.959: INFO: failed GETing resource "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-e2e-f7g2uf-public-custom-vnet/providers/Microsoft.Network/privateDnsZones/capz-e2e-nsecpi-private.capz.io" with resources.Client#GetByID: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="NoRegisteredProviderFound" Message="No registered resource provider found for location 'global' and API version '2021-02-01' for type 'privateDnsZones'. The supported api-versions are '2018-09-01, 2020-01-01, 2020-06-01'. The supported locations are ', global'."
E1113 19:33:42.574758   24096 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f7g2uf/events?resourceVersion=9429": dial tcp: lookup capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 13 19:33:47.190: INFO: failed GETing resource "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-e2e-f7g2uf-public-custom-vnet/providers/Microsoft.Network/privateDnsZones/capz-e2e-nsecpi-private.capz.io" with resources.Client#GetByID: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="NoRegisteredProviderFound" Message="No registered resource provider found for location 'global' and API version '2021-02-01' for type 'privateDnsZones'. The supported api-versions are '2018-09-01, 2020-01-01, 2020-06-01'. The supported locations are ', global'."
Nov 13 19:33:57.544: INFO: failed GETing resource "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-e2e-f7g2uf-public-custom-vnet/providers/Microsoft.Network/privateDnsZones/capz-e2e-nsecpi-private.capz.io" with resources.Client#GetByID: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="NoRegisteredProviderFound" Message="No registered resource provider found for location 'global' and API version '2021-02-01' for type 'privateDnsZones'. The supported api-versions are '2018-09-01, 2020-01-01, 2020-06-01'. The supported locations are ', global'."
Nov 13 19:34:07.831: INFO: failed GETing resource "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-e2e-f7g2uf-public-custom-vnet/providers/Microsoft.Network/privateDnsZones/capz-e2e-nsecpi-private.capz.io" with resources.Client#GetByID: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="NoRegisteredProviderFound" Message="No registered resource provider found for location 'global' and API version '2021-02-01' for type 'privateDnsZones'. The supported api-versions are '2018-09-01, 2020-01-01, 2020-06-01'. The supported locations are ', global'."
Nov 13 19:34:18.443: INFO: failed GETing resource "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-e2e-f7g2uf-public-custom-vnet/providers/Microsoft.Network/privateDnsZones/capz-e2e-nsecpi-private.capz.io" with resources.Client#GetByID: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="NoRegisteredProviderFound" Message="No registered resource provider found for location 'global' and API version '2021-02-01' for type 'privateDnsZones'. The supported api-versions are '2018-09-01, 2020-01-01, 2020-06-01'. The supported locations are ', global'."
E1113 19:34:24.761755   24096 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f7g2uf/events?resourceVersion=9429": dial tcp: lookup capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 13 19:34:28.648: INFO: deleting the existing resource group "capz-e2e-f7g2uf-public-custom-vnet"
E1113 19:34:59.187022   24096 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f7g2uf/events?resourceVersion=9429": dial tcp: lookup capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1113 19:35:36.376246   24096 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f7g2uf/events?resourceVersion=9429": dial tcp: lookup capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1113 19:36:14.821186   24096 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f7g2uf/events?resourceVersion=9429": dial tcp: lookup capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 57m44s on Ginkgo node 1 of 3


• [SLOW TEST:3464.486 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:377

INFO: "with a single control plane node and 1 node" started at Sat, 13 Nov 2021 19:17:20 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-uz49et" for hosting the cluster
Nov 13 19:17:20.463: INFO: starting to create namespace for hosting the "capz-e2e-uz49et" test spec
2021/11/13 19:17:20 failed trying to get namespace (capz-e2e-uz49et):namespaces "capz-e2e-uz49et" not found
INFO: Creating namespace capz-e2e-uz49et
INFO: Creating event watcher for namespace "capz-e2e-uz49et"
Nov 13 19:17:20.491: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-uz49et-gpu
INFO: Creating the workload cluster with name "capz-e2e-uz49et-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 80 lines ...
  with a 1 control plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:419

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Sat, 13 Nov 2021 19:22:51 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-dwa3k7" for hosting the cluster
Nov 13 19:22:51.584: INFO: starting to create namespace for hosting the "capz-e2e-dwa3k7" test spec
2021/11/13 19:22:51 failed trying to get namespace (capz-e2e-dwa3k7):namespaces "capz-e2e-dwa3k7" not found
INFO: Creating namespace capz-e2e-dwa3k7
INFO: Creating event watcher for namespace "capz-e2e-dwa3k7"
Nov 13 19:22:51.618: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-dwa3k7-oot
INFO: Creating the workload cluster with name "capz-e2e-dwa3k7-oot" using the "external-cloud-provider" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 53 lines ...
STEP: waiting for job default/curl-to-elb-jobfopql2ib1u6 to be complete
Nov 13 19:31:34.298: INFO: waiting for job default/curl-to-elb-jobfopql2ib1u6 to be complete
Nov 13 19:31:44.363: INFO: job default/curl-to-elb-jobfopql2ib1u6 is complete, took 10.064452258s
STEP: connecting directly to the external LB service
Nov 13 19:31:44.363: INFO: starting attempts to connect directly to the external LB service
2021/11/13 19:31:44 [DEBUG] GET http://20.85.198.55
2021/11/13 19:32:14 [ERR] GET http://20.85.198.55 request failed: Get "http://20.85.198.55": dial tcp 20.85.198.55:80: i/o timeout
2021/11/13 19:32:14 [DEBUG] GET http://20.85.198.55: retrying in 1s (4 left)
Nov 13 19:32:15.423: INFO: successfully connected to the external LB service
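The GET with retries above comes from the test's own HTTP helper, which is not shown in this log; the "retrying in 1s (4 left)" lines suggest a fixed retry budget with a short pause between attempts. The snippet below is a hypothetical minimal equivalent using only net/http, not the helper the test actually uses.

package e2e

import (
	"fmt"
	"net/http"
	"time"
)

// getWithRetry is an illustrative stand-in for the retrying GET in the log:
// each attempt uses a 30s timeout (matching the i/o timeout above) and failed
// attempts are retried after a 1s pause until the budget is exhausted.
func getWithRetry(url string, attempts int) (*http.Response, error) {
	client := &http.Client{Timeout: 30 * time.Second}
	var lastErr error
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(url)
		if err == nil && resp.StatusCode < 500 {
			return resp, nil
		}
		if err != nil {
			lastErr = err
		} else {
			resp.Body.Close()
			lastErr = fmt.Errorf("GET %s: unexpected status %d", url, resp.StatusCode)
		}
		time.Sleep(1 * time.Second)
	}
	return nil, fmt.Errorf("GET %s failed after %d attempts: %w", url, attempts, lastErr)
}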
STEP: deleting the test resources
Nov 13 19:32:15.423: INFO: starting to delete external LB service webp68ug5-elb
Nov 13 19:32:15.473: INFO: starting to delete deployment webp68ug5
Nov 13 19:32:15.506: INFO: starting to delete job curl-to-elb-jobfopql2ib1u6
... skipping 34 lines ...
STEP: Fetching activity logs took 532.374019ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-dwa3k7" namespace
STEP: Deleting all clusters in the capz-e2e-dwa3k7 namespace
STEP: Deleting cluster capz-e2e-dwa3k7-oot
INFO: Waiting for the Cluster capz-e2e-dwa3k7/capz-e2e-dwa3k7-oot to be deleted
STEP: Waiting for cluster capz-e2e-dwa3k7-oot to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-4ndvb, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-qj6bb, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-9htwq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-fz895, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-r9vnb, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-4k97p, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-dwa3k7
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 23m54s on Ginkgo node 3 of 3

... skipping 10 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454

INFO: "with a single control plane node and 1 node" started at Sat, 13 Nov 2021 19:36:59 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-3i45xd" for hosting the cluster
Nov 13 19:36:59.866: INFO: starting to create namespace for hosting the "capz-e2e-3i45xd" test spec
2021/11/13 19:36:59 failed trying to get namespace (capz-e2e-3i45xd):namespaces "capz-e2e-3i45xd" not found
INFO: Creating namespace capz-e2e-3i45xd
INFO: Creating event watcher for namespace "capz-e2e-3i45xd"
Nov 13 19:36:59.902: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-3i45xd-aks
INFO: Creating the workload cluster with name "capz-e2e-3i45xd-aks" using the "aks-multi-tenancy" template (Kubernetes v1.19.13, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 7 lines ...
machinepool.cluster.x-k8s.io/agentpool1 created
azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1113 19:37:09.859038   24096 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f7g2uf/events?resourceVersion=9429": dial tcp: lookup capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1113 19:37:49.380518   24096 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f7g2uf/events?resourceVersion=9429": dial tcp: lookup capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1113 19:38:30.986477   24096 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f7g2uf/events?resourceVersion=9429": dial tcp: lookup capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1113 19:39:27.029219   24096 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f7g2uf/events?resourceVersion=9429": dial tcp: lookup capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1113 19:40:22.836802   24096 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f7g2uf/events?resourceVersion=9429": dial tcp: lookup capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1113 19:41:14.079207   24096 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f7g2uf/events?resourceVersion=9429": dial tcp: lookup capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
Nov 13 19:41:31.910: INFO: Waiting for the first control plane machine managed by capz-e2e-3i45xd/capz-e2e-3i45xd-aks to be provisioned
STEP: Waiting for at least one control plane node to exist
E1113 19:42:03.716645   24096 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f7g2uf/events?resourceVersion=9429": dial tcp: lookup capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 25 identical reflector errors (19:42:52 through 20:00:56) ...
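Note: the reflector errors above (and the similar runs later in this log) appear to come from the event watcher created earlier for the capz-e2e-f7g2uf namespace. Once that spec's cluster and its public DNS name were torn down, every periodic List/Watch retry against the stale API server endpoint fails with "no such host", and client-go keeps logging it. A minimal sketch of such an event watcher, assuming an illustrative kubeconfig path (this is not the e2e framework's actual code):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // "old-cluster.kubeconfig" is a placeholder; it points at an API server whose
        // DNS name may no longer resolve once the workload cluster has been deleted.
        cfg, err := clientcmd.BuildConfigFromFlags("", "old-cluster.kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Watch events in the test namespace. When the endpoint is gone, the initial
        // List (and every retry) fails with a DNS error like the reflector lines above.
        w, err := cs.CoreV1().Events("capz-e2e-f7g2uf").Watch(context.TODO(), metav1.ListOptions{})
        if err != nil {
            fmt.Println("watch failed:", err)
            return
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            fmt.Println("event:", ev.Type)
        }
    }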
STEP: Dumping logs from the "capz-e2e-3i45xd-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-3i45xd/capz-e2e-3i45xd-aks logs
STEP: Dumping workload cluster capz-e2e-3i45xd/capz-e2e-3i45xd-aks kube-system pod logs
STEP: Fetching kube-system pod logs took 397.124451ms
STEP: Creating log watcher for controller kube-system/coredns-84d976c568-9v6q5, container coredns
STEP: Creating log watcher for controller kube-system/metrics-server-569f6547dd-vz726, container metrics-server
... skipping 10 lines ...
STEP: Fetching activity logs took 986.11203ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-3i45xd" namespace
STEP: Deleting all clusters in the capz-e2e-3i45xd namespace
STEP: Deleting cluster capz-e2e-3i45xd-aks
INFO: Waiting for the Cluster capz-e2e-3i45xd/capz-e2e-3i45xd-aks to be deleted
STEP: Waiting for cluster capz-e2e-3i45xd-aks to be deleted
E1113 20:01:52.165224   24096 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f7g2uf/events?resourceVersion=9429": dial tcp: lookup capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 10 identical reflector errors (20:02:26 through 20:09:07) ...
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-3i45xd
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1113 20:09:59.090635   24096 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f7g2uf/events?resourceVersion=9429": dial tcp: lookup capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 3 identical reflector errors (20:10:33 through 20:12:15) ...
INFO: "with a single control plane node and 1 node" ran for 35m35s on Ginkgo node 1 of 3


• Failure [2134.777 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 59 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Sat, 13 Nov 2021 19:41:37 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-9bl10o" for hosting the cluster
Nov 13 19:41:37.442: INFO: starting to create namespace for hosting the "capz-e2e-9bl10o" test spec
2021/11/13 19:41:37 failed trying to get namespace (capz-e2e-9bl10o):namespaces "capz-e2e-9bl10o" not found
INFO: Creating namespace capz-e2e-9bl10o
INFO: Creating event watcher for namespace "capz-e2e-9bl10o"
Nov 13 19:41:37.481: INFO: Creating cluster identity secret "cluster-identity-secret"
INFO: Cluster name is capz-e2e-9bl10o-win-ha
INFO: Creating the workload cluster with name "capz-e2e-9bl10o-win-ha" using the "windows" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 55 lines ...
STEP: waiting for job default/curl-to-elb-job6s8bdpl40gf to be complete
Nov 13 19:51:30.345: INFO: waiting for job default/curl-to-elb-job6s8bdpl40gf to be complete
Nov 13 19:51:40.418: INFO: job default/curl-to-elb-job6s8bdpl40gf is complete, took 10.073095933s
STEP: connecting directly to the external LB service
Nov 13 19:51:40.418: INFO: starting attempts to connect directly to the external LB service
2021/11/13 19:51:40 [DEBUG] GET http://52.146.86.185
2021/11/13 19:52:10 [ERR] GET http://52.146.86.185 request failed: Get "http://52.146.86.185": dial tcp 52.146.86.185:80: i/o timeout
2021/11/13 19:52:10 [DEBUG] GET http://52.146.86.185: retrying in 1s (4 left)
Nov 13 19:52:11.481: INFO: successfully connected to the external LB service
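For reference, the check above is essentially an HTTP GET against the load balancer's public IP that tolerates a few timeouts before giving up. A rough approximation in plain net/http (the IP is the external LB address from this run, and the attempt count and timeout only mirror the log lines above; the test itself may use a retrying HTTP client with different settings):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // One 30s timeout per attempt, matching the i/o timeout seen above.
        client := &http.Client{Timeout: 30 * time.Second}
        for attempt := 1; attempt <= 5; attempt++ {
            resp, err := client.Get("http://52.146.86.185")
            if err == nil {
                resp.Body.Close()
                fmt.Printf("connected on attempt %d: %s\n", attempt, resp.Status)
                return
            }
            fmt.Printf("attempt %d failed: %v\n", attempt, err)
            time.Sleep(time.Second) // short backoff between attempts
        }
        fmt.Println("external LB never became reachable")
    }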
STEP: deleting the test resources
Nov 13 19:52:11.481: INFO: starting to delete external LB service webk28bno-elb
Nov 13 19:52:11.559: INFO: starting to delete deployment webk28bno
Nov 13 19:52:11.595: INFO: starting to delete job curl-to-elb-job6s8bdpl40gf
... skipping 85 lines ...
STEP: Fetching activity logs took 950.917575ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-9bl10o" namespace
STEP: Deleting all clusters in the capz-e2e-9bl10o namespace
STEP: Deleting cluster capz-e2e-9bl10o-win-ha
INFO: Waiting for the Cluster capz-e2e-9bl10o/capz-e2e-9bl10o-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-9bl10o-win-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-ggwwk, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-9bl10o-win-ha-control-plane-krmpr, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-9bl10o-win-ha-control-plane-krmpr, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-9bl10o-win-ha-control-plane-krmpr, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-ws7sv, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-68vks, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-8jwqn, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-5h54m, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-9bl10o-win-ha-control-plane-97bwg, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-9bl10o-win-ha-control-plane-krmpr, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-9bl10o-win-ha-control-plane-97bwg, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-9bl10o-win-ha-control-plane-97bwg, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-nmxlb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-ng7z6, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-9bl10o-win-ha-control-plane-97bwg, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-8slv7, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-9bl10o
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 31m9s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a single control plane node and a Linux AzureMachinePool with 1 node and a Windows AzureMachinePool with 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543

INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Sat, 13 Nov 2021 19:46:45 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-14czul" for hosting the cluster
Nov 13 19:46:45.420: INFO: starting to create namespace for hosting the "capz-e2e-14czul" test spec
2021/11/13 19:46:45 failed trying to get namespace (capz-e2e-14czul):namespaces "capz-e2e-14czul" not found
INFO: Creating namespace capz-e2e-14czul
INFO: Creating event watcher for namespace "capz-e2e-14czul"
Nov 13 19:46:45.466: INFO: Creating cluster identity secret "cluster-identity-secret"
INFO: Cluster name is capz-e2e-14czul-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-14czul-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 129 lines ...
STEP: Fetching activity logs took 1.210173801s
STEP: Dumping all the Cluster API resources in the "capz-e2e-14czul" namespace
STEP: Deleting all clusters in the capz-e2e-14czul namespace
STEP: Deleting cluster capz-e2e-14czul-win-vmss
INFO: Waiting for the Cluster capz-e2e-14czul/capz-e2e-14czul-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-14czul-win-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-dhqjf, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-9nbcm, container kube-flannel: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-14czul
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 30m45s on Ginkgo node 3 of 3

... skipping 3 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating a Windows enabled VMSS cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:542
    with a single control plane node and a Linux AzureMachinePool with 1 node and a Windows AzureMachinePool with 1 node
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543
------------------------------
E1113 20:13:07.745689   24096 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f7g2uf/events?resourceVersion=9429": dial tcp: lookup capz-e2e-f7g2uf-public-custom-vnet-68f49f05.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 6 identical reflector errors (20:13:43 through 20:17:12) ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Workload cluster creation Creating an AKS cluster [It] with a single control plane node and 1 node 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/aks.go:216
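When an AKS spec fails at aks.go:216 like this, a useful first step is to inspect the MachinePools the spec created in its namespace on the management cluster (capz-e2e-3i45xd here) while those resources still exist. A minimal sketch with the client-go dynamic client; the kubeconfig path is a placeholder and the cluster.x-k8s.io API version used here (v1alpha4) is an assumption for this release branch and may need adjusting:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/client-go/dynamic"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // "mgmt.kubeconfig" is a placeholder for the management cluster's kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", "mgmt.kubeconfig")
        if err != nil {
            panic(err)
        }
        dyn, err := dynamic.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // MachinePool resource from Cluster API; the version is an assumption.
        gvr := schema.GroupVersionResource{Group: "cluster.x-k8s.io", Version: "v1alpha4", Resource: "machinepools"}
        pools, err := dyn.Resource(gvr).Namespace("capz-e2e-3i45xd").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        // Print each pool's phase and ready replica count to see which one is stuck.
        for _, mp := range pools.Items {
            phase, _, _ := unstructured.NestedString(mp.Object, "status", "phase")
            ready, _, _ := unstructured.NestedInt64(mp.Object, "status", "readyReplicas")
            fmt.Printf("%s phase=%s readyReplicas=%d\n", mp.GetName(), phase, ready)
        }
    }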

Ran 9 of 22 Specs in 6022.303 seconds
FAIL! -- 8 Passed | 1 Failed | 0 Pending | 13 Skipped


Ginkgo ran 1 suite in 1h41m43.97004975s
Test Suite Failed
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...