Result: FAILURE
Tests: 1 failed / 8 succeeded
Started: 2021-11-21 18:34
Elapsed: 2h5m
Revision: release-0.5

Test Failures


capz-e2e Workload cluster creation Creating an AKS cluster with a single control plane node and 1 node (56m0s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\san\sAKS\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454
Timed out after 1200.001s.
System machine pools not ready
Expected
    <bool>: false
to equal
    <bool>: true
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/aks.go:216
				
Full stdout/stderr: junit.e2e_suite.1.xml
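For context on the failure output above: "Expected <bool>: false to equal <bool>: true" is the standard Gomega Equal matcher message, emitted when a boolean readiness check is still false once the Eventually timeout (here 1200s, i.e. 20 minutes) expires. Below is a minimal sketch of that assertion pattern, not the project's actual code: the allSystemMachinePoolsReady helper, the test name, and the 30-second polling interval are assumptions standing in for the real check at test/e2e/aks.go:216.

package e2e_test

import (
	"context"
	"testing"
	"time"

	. "github.com/onsi/gomega"
)

// allSystemMachinePoolsReady is a hypothetical stand-in for the readiness
// check the real test performs in test/e2e/aks.go; it is not the project's API.
func allSystemMachinePoolsReady(ctx context.Context) bool {
	// The real check inspects the AKS system machine pools; this placeholder
	// simply reports "not ready".
	return false
}

func TestSystemMachinePoolsReady(t *testing.T) {
	g := NewWithT(t)
	ctx := context.Background()

	// If the boolean stays false for the whole 20-minute (1200s) window,
	// Gomega reports "Timed out after 1200.xxxs", the description
	// "System machine pools not ready", and the Equal matcher output
	// "Expected <bool>: false to equal <bool>: true", matching the failure above.
	g.Eventually(func() bool {
		return allSystemMachinePoolsReady(ctx)
	}, 20*time.Minute, 30*time.Second).Should(Equal(true), "System machine pools not ready")
}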



Passed Tests: 8

Skipped Tests: 13

Error lines from build-log.txt

... skipping 440 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:269

INFO: "With ipv6 worker node" started at Sun, 21 Nov 2021 18:47:56 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-j3t4dn" for hosting the cluster
Nov 21 18:47:56.269: INFO: starting to create namespace for hosting the "capz-e2e-j3t4dn" test spec
2021/11/21 18:47:56 failed trying to get namespace (capz-e2e-j3t4dn):namespaces "capz-e2e-j3t4dn" not found
INFO: Creating namespace capz-e2e-j3t4dn
INFO: Creating event watcher for namespace "capz-e2e-j3t4dn"
Nov 21 18:47:56.420: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-j3t4dn-ipv6
INFO: Creating the workload cluster with name "capz-e2e-j3t4dn-ipv6" using the "ipv6" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
STEP: Fetching activity logs took 543.648962ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-j3t4dn" namespace
STEP: Deleting all clusters in the capz-e2e-j3t4dn namespace
STEP: Deleting cluster capz-e2e-j3t4dn-ipv6
INFO: Waiting for the Cluster capz-e2e-j3t4dn/capz-e2e-j3t4dn-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-j3t4dn-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-fwtzq, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-t6j9l, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-j3t4dn-ipv6-control-plane-2sxbs, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-j3t4dn-ipv6-control-plane-vfbjj, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vcmr9, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-x6c9d, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-j3t4dn-ipv6-control-plane-2sxbs, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-j3t4dn-ipv6-control-plane-2sxbs, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-j3t4dn-ipv6-control-plane-xq8js, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-r2qvn, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-l562l, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-zvmlg, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-j3t4dn-ipv6-control-plane-vfbjj, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-j3t4dn-ipv6-control-plane-xq8js, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-j3t4dn-ipv6-control-plane-vfbjj, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-j3t4dn-ipv6-control-plane-xq8js, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-qqdsd, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-xj9r8, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-j3t4dn-ipv6-control-plane-2sxbs, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-j3t4dn-ipv6-control-plane-vfbjj, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-j3t4dn-ipv6-control-plane-xq8js, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-jb28q, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-l4fjx, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-j3t4dn
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 20m29s on Ginkgo node 3 of 3

... skipping 10 lines ...
  with a single control plane node and an AzureMachinePool with 2 nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:315

INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" started at Sun, 21 Nov 2021 19:08:24 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-49i88k" for hosting the cluster
Nov 21 19:08:24.873: INFO: starting to create namespace for hosting the "capz-e2e-49i88k" test spec
2021/11/21 19:08:24 failed trying to get namespace (capz-e2e-49i88k):namespaces "capz-e2e-49i88k" not found
INFO: Creating namespace capz-e2e-49i88k
INFO: Creating event watcher for namespace "capz-e2e-49i88k"
Nov 21 19:08:24.915: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-49i88k-vmss
INFO: Creating the workload cluster with name "capz-e2e-49i88k-vmss" using the "machine-pool" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 106 lines ...
STEP: Fetching activity logs took 611.438596ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-49i88k" namespace
STEP: Deleting all clusters in the capz-e2e-49i88k namespace
STEP: Deleting cluster capz-e2e-49i88k-vmss
INFO: Waiting for the Cluster capz-e2e-49i88k/capz-e2e-49i88k-vmss to be deleted
STEP: Waiting for cluster capz-e2e-49i88k-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-h729m, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-hzssx, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-czjx5, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-fkjwx, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-49i88k
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" ran for 18m3s on Ginkgo node 3 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:203

INFO: "With 3 control-plane nodes and 2 worker nodes" started at Sun, 21 Nov 2021 18:47:56 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-obrjfy" for hosting the cluster
Nov 21 18:47:56.267: INFO: starting to create namespace for hosting the "capz-e2e-obrjfy" test spec
2021/11/21 18:47:56 failed trying to get namespace (capz-e2e-obrjfy):namespaces "capz-e2e-obrjfy" not found
INFO: Creating namespace capz-e2e-obrjfy
INFO: Creating event watcher for namespace "capz-e2e-obrjfy"
Nov 21 18:47:56.425: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-obrjfy-ha
INFO: Creating the workload cluster with name "capz-e2e-obrjfy-ha" using the "(default)" template (Kubernetes v1.22.1, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 59 lines ...
STEP: waiting for job default/curl-to-elb-jobgxgenabi2sz to be complete
Nov 21 18:58:32.924: INFO: waiting for job default/curl-to-elb-jobgxgenabi2sz to be complete
Nov 21 18:58:43.000: INFO: job default/curl-to-elb-jobgxgenabi2sz is complete, took 10.076967006s
STEP: connecting directly to the external LB service
Nov 21 18:58:43.000: INFO: starting attempts to connect directly to the external LB service
2021/11/21 18:58:43 [DEBUG] GET http://20.102.35.182
2021/11/21 18:59:13 [ERR] GET http://20.102.35.182 request failed: Get "http://20.102.35.182": dial tcp 20.102.35.182:80: i/o timeout
2021/11/21 18:59:13 [DEBUG] GET http://20.102.35.182: retrying in 1s (4 left)
Nov 21 18:59:29.385: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 21 18:59:29.385: INFO: starting to delete external LB service webxu14q9-elb
Nov 21 18:59:29.498: INFO: starting to delete deployment webxu14q9
Nov 21 18:59:29.548: INFO: starting to delete job curl-to-elb-jobgxgenabi2sz
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Nov 21 18:59:29.630: INFO: starting to create dev deployment namespace
2021/11/21 18:59:29 failed trying to get namespace (development):namespaces "development" not found
2021/11/21 18:59:29 namespace development does not exist, creating...
STEP: Creating production namespace
Nov 21 18:59:29.704: INFO: starting to create prod deployment namespace
2021/11/21 18:59:29 failed trying to get namespace (production):namespaces "production" not found
2021/11/21 18:59:29 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
Nov 21 18:59:29.778: INFO: starting to create frontend-prod deployments
Nov 21 18:59:29.817: INFO: starting to create frontend-dev deployments
Nov 21 18:59:29.855: INFO: starting to create backend deployments
Nov 21 18:59:29.902: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Nov 21 18:59:53.998: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.97.133 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 21 19:02:05.499: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
STEP: Applying a network policy to deny egress access in development namespace
Nov 21 19:02:05.675: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.97.133 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.97.133 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 21 19:06:27.645: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Nov 21 19:06:27.820: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.97.132 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 21 19:08:38.716: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
Nov 21 19:08:38.881: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.97.130 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.97.132 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 21 19:13:00.860: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Nov 21 19:13:01.060: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.97.133 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 21 19:15:11.940: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
Nov 21 19:15:12.094: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.97.133 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: Dumping logs from the "capz-e2e-obrjfy-ha" workload cluster
STEP: Dumping workload cluster capz-e2e-obrjfy/capz-e2e-obrjfy-ha logs
Nov 21 19:17:23.369: INFO: INFO: Collecting logs for node capz-e2e-obrjfy-ha-control-plane-t7b65 in cluster capz-e2e-obrjfy-ha in namespace capz-e2e-obrjfy

Nov 21 19:17:33.023: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-obrjfy-ha-control-plane-t7b65
... skipping 39 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-bzpvp, container calico-node
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-obrjfy-ha-control-plane-vl756, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-obrjfy-ha-control-plane-vl756, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-obrjfy-ha-control-plane-l2ld9, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-obrjfy-ha-control-plane-t7b65, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-node-dhpqc, container calico-node
STEP: Got error while iterating over activity logs for resource group capz-e2e-obrjfy-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.00071473s
STEP: Dumping all the Cluster API resources in the "capz-e2e-obrjfy" namespace
STEP: Deleting all clusters in the capz-e2e-obrjfy namespace
STEP: Deleting cluster capz-e2e-obrjfy-ha
INFO: Waiting for the Cluster capz-e2e-obrjfy/capz-e2e-obrjfy-ha to be deleted
STEP: Waiting for cluster capz-e2e-obrjfy-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-dlpj6, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-obrjfy-ha-control-plane-vl756, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-obrjfy-ha-control-plane-vl756, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-7cc8p, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-obrjfy-ha-control-plane-l2ld9, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-obrjfy-ha-control-plane-t7b65, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-obrjfy-ha-control-plane-vl756, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-obrjfy-ha-control-plane-t7b65, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-b6xg4, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-obrjfy-ha-control-plane-l2ld9, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-g7xdz, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-r8hnz, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-obrjfy-ha-control-plane-vl756, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-rspt8, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-obrjfy-ha-control-plane-l2ld9, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-h4vqh, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-cjvff, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-b89vd, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-wnggc, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-obrjfy-ha-control-plane-l2ld9, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-bzpvp, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-obrjfy
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 worker nodes" ran for 39m39s on Ginkgo node 2 of 3

... skipping 8 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:141

INFO: "Creates a public management cluster in the same vnet" started at Sun, 21 Nov 2021 18:47:56 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-kb90fe" for hosting the cluster
Nov 21 18:47:56.232: INFO: starting to create namespace for hosting the "capz-e2e-kb90fe" test spec
2021/11/21 18:47:56 failed trying to get namespace (capz-e2e-kb90fe):namespaces "capz-e2e-kb90fe" not found
INFO: Creating namespace capz-e2e-kb90fe
INFO: Creating event watcher for namespace "capz-e2e-kb90fe"
Nov 21 18:47:56.353: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-kb90fe-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Dumping workload cluster capz-e2e-kb90fe/capz-e2e-kb90fe-public-custom-vnet Azure activity log
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-kb90fe-public-custom-vnet-control-plane-s9mqg, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-k4zx8, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-ckwpx, container kube-proxy
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-kb90fe-public-custom-vnet-control-plane-s9mqg, container etcd
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-kb90fe-public-custom-vnet-control-plane-s9mqg, container kube-apiserver
STEP: Got error while iterating over activity logs for resource group capz-e2e-kb90fe-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.00125726s
STEP: Dumping all the Cluster API resources in the "capz-e2e-kb90fe" namespace
STEP: Deleting all clusters in the capz-e2e-kb90fe namespace
STEP: Deleting cluster capz-e2e-kb90fe-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-kb90fe/capz-e2e-kb90fe-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-kb90fe-public-custom-vnet to be deleted
W1121 19:35:40.561509   23995 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I1121 19:36:11.499411   23995 trace.go:205] Trace[1032904289]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (21-Nov-2021 19:35:41.498) (total time: 30000ms):
Trace[1032904289]: [30.000831264s] [30.000831264s] END
E1121 19:36:11.499488   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-kb90fe/events?resourceVersion=8380": dial tcp 20.102.33.152:6443: i/o timeout
I1121 19:36:43.308382   23995 trace.go:205] Trace[110688326]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (21-Nov-2021 19:36:13.306) (total time: 30001ms):
Trace[110688326]: [30.001565994s] [30.001565994s] END
E1121 19:36:43.308447   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-kb90fe/events?resourceVersion=8380": dial tcp 20.102.33.152:6443: i/o timeout
I1121 19:37:17.823821   23995 trace.go:205] Trace[336085646]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (21-Nov-2021 19:36:47.821) (total time: 30002ms):
Trace[336085646]: [30.002019826s] [30.002019826s] END
E1121 19:37:17.823884   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-kb90fe/events?resourceVersion=8380": dial tcp 20.102.33.152:6443: i/o timeout
I1121 19:37:55.799634   23995 trace.go:205] Trace[72017441]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (21-Nov-2021 19:37:25.798) (total time: 30001ms):
Trace[72017441]: [30.001126871s] [30.001126871s] END
E1121 19:37:55.799689   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-kb90fe/events?resourceVersion=8380": dial tcp 20.102.33.152:6443: i/o timeout
I1121 19:38:49.492653   23995 trace.go:205] Trace[743780032]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (21-Nov-2021 19:38:19.491) (total time: 30001ms):
Trace[743780032]: [30.00156469s] [30.00156469s] END
E1121 19:38:49.492714   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-kb90fe/events?resourceVersion=8380": dial tcp 20.102.33.152:6443: i/o timeout
I1121 19:39:50.995509   23995 trace.go:205] Trace[1345179896]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (21-Nov-2021 19:39:20.994) (total time: 30001ms):
Trace[1345179896]: [30.001371365s] [30.001371365s] END
E1121 19:39:50.995579   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-kb90fe/events?resourceVersion=8380": dial tcp 20.102.33.152:6443: i/o timeout
E1121 19:40:44.589954   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-kb90fe/events?resourceVersion=8380": dial tcp: lookup capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-kb90fe
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Nov 21 19:41:03.622: INFO: deleting an existing virtual network "custom-vnet"
Nov 21 19:41:14.156: INFO: deleting an existing route table "node-routetable"
Nov 21 19:41:24.541: INFO: deleting an existing network security group "node-nsg"
E1121 19:41:34.589766   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-kb90fe/events?resourceVersion=8380": dial tcp: lookup capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 21 19:41:34.968: INFO: deleting an existing network security group "control-plane-nsg"
Nov 21 19:41:45.287: INFO: verifying the existing resource group "capz-e2e-kb90fe-public-custom-vnet" is empty
Nov 21 19:41:45.525: INFO: deleting the existing resource group "capz-e2e-kb90fe-public-custom-vnet"
E1121 19:42:18.075693   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-kb90fe/events?resourceVersion=8380": dial tcp: lookup capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1121 19:43:02.437577   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-kb90fe/events?resourceVersion=8380": dial tcp: lookup capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 19:43:32.684032   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-kb90fe/events?resourceVersion=8380": dial tcp: lookup capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 56m10s on Ginkgo node 1 of 3


• [SLOW TEST:3370.346 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a 1 control plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:419

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Sun, 21 Nov 2021 19:27:35 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-tqki2j" for hosting the cluster
Nov 21 19:27:35.464: INFO: starting to create namespace for hosting the "capz-e2e-tqki2j" test spec
2021/11/21 19:27:35 failed trying to get namespace (capz-e2e-tqki2j):namespaces "capz-e2e-tqki2j" not found
INFO: Creating namespace capz-e2e-tqki2j
INFO: Creating event watcher for namespace "capz-e2e-tqki2j"
Nov 21 19:27:35.496: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-tqki2j-oot
INFO: Creating the workload cluster with name "capz-e2e-tqki2j-oot" using the "external-cloud-provider" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 98 lines ...
STEP: Fetching activity logs took 576.58591ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-tqki2j" namespace
STEP: Deleting all clusters in the capz-e2e-tqki2j namespace
STEP: Deleting cluster capz-e2e-tqki2j-oot
INFO: Waiting for the Cluster capz-e2e-tqki2j/capz-e2e-tqki2j-oot to be deleted
STEP: Waiting for cluster capz-e2e-tqki2j-oot to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-9ll9s, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-tqki2j-oot-control-plane-hz266, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-controller-manager, container cloud-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-tqki2j-oot-control-plane-hz266, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-tqki2j-oot-control-plane-hz266, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-6b6b8, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-t68xb, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-tqki2j-oot-control-plane-hz266, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-fs5cg, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-h4sd9, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-p7glw, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-pfnvd, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-bsxvq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-x6kc8, container cloud-node-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-tqki2j
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 23m10s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:377

INFO: "with a single control plane node and 1 node" started at Sun, 21 Nov 2021 19:26:27 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-a2il4d" for hosting the cluster
Nov 21 19:26:27.608: INFO: starting to create namespace for hosting the "capz-e2e-a2il4d" test spec
2021/11/21 19:26:27 failed trying to get namespace (capz-e2e-a2il4d):namespaces "capz-e2e-a2il4d" not found
INFO: Creating namespace capz-e2e-a2il4d
INFO: Creating event watcher for namespace "capz-e2e-a2il4d"
Nov 21 19:26:27.653: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-a2il4d-gpu
INFO: Creating the workload cluster with name "capz-e2e-a2il4d-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 58 lines ...
STEP: Fetching activity logs took 475.014555ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-a2il4d" namespace
STEP: Deleting all clusters in the capz-e2e-a2il4d namespace
STEP: Deleting cluster capz-e2e-a2il4d-gpu
INFO: Waiting for the Cluster capz-e2e-a2il4d/capz-e2e-a2il4d-gpu to be deleted
STEP: Waiting for cluster capz-e2e-a2il4d-gpu to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-cc88d, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-m8ccp, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-a2il4d
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and 1 node" ran for 27m11s on Ginkgo node 3 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Sun, 21 Nov 2021 19:50:45 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-5gwjh2" for hosting the cluster
Nov 21 19:50:45.953: INFO: starting to create namespace for hosting the "capz-e2e-5gwjh2" test spec
2021/11/21 19:50:45 failed trying to get namespace (capz-e2e-5gwjh2):namespaces "capz-e2e-5gwjh2" not found
INFO: Creating namespace capz-e2e-5gwjh2
INFO: Creating event watcher for namespace "capz-e2e-5gwjh2"
Nov 21 19:50:46.016: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-5gwjh2-win-ha
INFO: Creating the workload cluster with name "capz-e2e-5gwjh2-win-ha" using the "windows" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 151 lines ...
STEP: Fetching activity logs took 619.801596ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-5gwjh2" namespace
STEP: Deleting all clusters in the capz-e2e-5gwjh2 namespace
STEP: Deleting cluster capz-e2e-5gwjh2-win-ha
INFO: Waiting for the Cluster capz-e2e-5gwjh2/capz-e2e-5gwjh2-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-5gwjh2-win-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-5gwjh2-win-ha-control-plane-hkjdb, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-cvd7m, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-6j5hv, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-5gwjh2-win-ha-control-plane-hkjdb, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-9ttln, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-d79qq, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-z6d9s, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-rm2bs, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-5gwjh2-win-ha-control-plane-hkjdb, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-5gwjh2-win-ha-control-plane-hkjdb, container etcd: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-5gwjh2
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 35m22s on Ginkgo node 2 of 3

... skipping 12 lines ...
  with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543

INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Sun, 21 Nov 2021 19:53:38 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-x5gh1q" for hosting the cluster
Nov 21 19:53:38.727: INFO: starting to create namespace for hosting the "capz-e2e-x5gh1q" test spec
2021/11/21 19:53:38 failed trying to get namespace (capz-e2e-x5gh1q):namespaces "capz-e2e-x5gh1q" not found
INFO: Creating namespace capz-e2e-x5gh1q
INFO: Creating event watcher for namespace "capz-e2e-x5gh1q"
Nov 21 19:53:38.761: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-x5gh1q-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-x5gh1q-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 89 lines ...
STEP: waiting for job default/curl-to-elb-jobbwnwb2bzcw0 to be complete
Nov 21 20:12:59.581: INFO: waiting for job default/curl-to-elb-jobbwnwb2bzcw0 to be complete
Nov 21 20:13:09.641: INFO: job default/curl-to-elb-jobbwnwb2bzcw0 is complete, took 10.060730502s
STEP: connecting directly to the external LB service
Nov 21 20:13:09.641: INFO: starting attempts to connect directly to the external LB service
2021/11/21 20:13:09 [DEBUG] GET http://20.88.181.31
2021/11/21 20:13:39 [ERR] GET http://20.88.181.31 request failed: Get "http://20.88.181.31": dial tcp 20.88.181.31:80: i/o timeout
2021/11/21 20:13:39 [DEBUG] GET http://20.88.181.31: retrying in 1s (4 left)
Nov 21 20:13:40.698: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 21 20:13:40.698: INFO: starting to delete external LB service web-windowsfjnvb8-elb
Nov 21 20:13:40.750: INFO: starting to delete deployment web-windowsfjnvb8
Nov 21 20:13:40.782: INFO: starting to delete job curl-to-elb-jobbwnwb2bzcw0
... skipping 29 lines ...
STEP: Fetching activity logs took 1.165663577s
STEP: Dumping all the Cluster API resources in the "capz-e2e-x5gh1q" namespace
STEP: Deleting all clusters in the capz-e2e-x5gh1q namespace
STEP: Deleting cluster capz-e2e-x5gh1q-win-vmss
INFO: Waiting for the Cluster capz-e2e-x5gh1q/capz-e2e-x5gh1q-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-x5gh1q-win-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-qvzzz, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-x5gh1q-win-vmss-control-plane-qscdx, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-dbs74, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-x5gh1q-win-vmss-control-plane-qscdx, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-8msfp, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-pxz4v, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-x5gh1q-win-vmss-control-plane-qscdx, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-x5gh1q-win-vmss-control-plane-qscdx, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-zlzdt, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-j74x2, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-x5gh1q
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 33m21s on Ginkgo node 3 of 3

... skipping 3 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating a Windows enabled VMSS cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:542
    with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543
------------------------------
{"component":"entrypoint","file":"prow/entrypoint/run.go:165","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","severity":"error","time":"2021-11-21T20:34:51Z"}
++ early_exit_handler
++ '[' -n 162 ']'
++ kill -TERM 162
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 19 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454

INFO: "with a single control plane node and 1 node" started at Sun, 21 Nov 2021 19:44:06 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-1dn5e2" for hosting the cluster
Nov 21 19:44:06.583: INFO: starting to create namespace for hosting the "capz-e2e-1dn5e2" test spec
2021/11/21 19:44:06 failed trying to get namespace (capz-e2e-1dn5e2):namespaces "capz-e2e-1dn5e2" not found
INFO: Creating namespace capz-e2e-1dn5e2
INFO: Creating event watcher for namespace "capz-e2e-1dn5e2"
Nov 21 19:44:06.617: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-1dn5e2-aks
INFO: Creating the workload cluster with name "capz-e2e-1dn5e2-aks" using the "aks-multi-tenancy" template (Kubernetes v1.19.13, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 7 lines ...
machinepool.cluster.x-k8s.io/agentpool1 created
azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1121 19:44:14.651083   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-kb90fe/events?resourceVersion=8380": dial tcp: lookup capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 19:44:51.716721   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-kb90fe/events?resourceVersion=8380": dial tcp: lookup capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 19:45:40.578678   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-kb90fe/events?resourceVersion=8380": dial tcp: lookup capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 19:46:17.119533   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-kb90fe/events?resourceVersion=8380": dial tcp: lookup capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 19:47:13.984838   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-kb90fe/events?resourceVersion=8380": dial tcp: lookup capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 19:47:46.153885   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-kb90fe/events?resourceVersion=8380": dial tcp: lookup capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 19:48:28.233578   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-kb90fe/events?resourceVersion=8380": dial tcp: lookup capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
Nov 21 19:48:38.002: INFO: Waiting for the first control plane machine managed by capz-e2e-1dn5e2/capz-e2e-1dn5e2-aks to be provisioned
STEP: Waiting for atleast one control plane node to exist
E1121 19:49:02.120322   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-kb90fe/events?resourceVersion=8380": dial tcp: lookup capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 19:49:58.269093   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-kb90fe/events?resourceVersion=8380": dial tcp: lookup capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 19:50:48.122167   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-kb90fe/events?resourceVersion=8380": dial tcp: lookup capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 19:51:33.874582   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-kb90fe/events?resourceVersion=8380": dial tcp: lookup capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 19:52:23.941044   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-kb90fe/events?resourceVersion=8380": dial tcp: lookup capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 19:53:17.073705   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-kb90fe/events?resourceVersion=8380": dial tcp: lookup capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 19:54:02.597196   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-kb90fe/events?resourceVersion=8380": dial tcp: lookup capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 19:54:58.812936   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-kb90fe/events?resourceVersion=8380": dial tcp: lookup capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 15 lines ...
E1121 20:08:10.223478   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-kb90fe/events?resourceVersion=8380": dial tcp: lookup capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Dumping logs from the "capz-e2e-1dn5e2-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-1dn5e2/capz-e2e-1dn5e2-aks logs
STEP: Dumping workload cluster capz-e2e-1dn5e2/capz-e2e-1dn5e2-aks kube-system pod logs
STEP: Fetching kube-system pod logs took 406.563902ms
STEP: Dumping workload cluster capz-e2e-1dn5e2/capz-e2e-1dn5e2-aks Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-xxnmt, container calico-node
... skipping 10 lines ...
STEP: Fetching activity logs took 991.343635ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-1dn5e2" namespace
STEP: Deleting all clusters in the capz-e2e-1dn5e2 namespace
STEP: Deleting cluster capz-e2e-1dn5e2-aks
INFO: Waiting for the Cluster capz-e2e-1dn5e2/capz-e2e-1dn5e2-aks to be deleted
STEP: Waiting for cluster capz-e2e-1dn5e2-aks to be deleted
E1121 20:08:54.521262   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-kb90fe/events?resourceVersion=8380": dial tcp: lookup capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 34 lines ...
E1121 20:34:29.175505   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-kb90fe/events?resourceVersion=8380": dial tcp: lookup capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
W1121 20:34:51.483409   23995 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: watch of *v1.Event ended with: very short watch: pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Unexpected watch close - watch lasted less than a second and no items received
E1121 20:34:52.380443   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://127.0.0.1:43577/api/v1/namespaces/capz-e2e-1dn5e2/events?resourceVersion=60628": dial tcp 127.0.0.1:43577: connect: connection refused
... skipping 2 lines ...
E1121 20:35:06.536376   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://127.0.0.1:43577/api/v1/namespaces/capz-e2e-1dn5e2/events?resourceVersion=60628": dial tcp 127.0.0.1:43577: connect: connection refused
E1121 20:35:09.225263   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-kb90fe/events?resourceVersion=8380": dial tcp: lookup capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 20:35:27.664694   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://127.0.0.1:43577/api/v1/namespaces/capz-e2e-1dn5e2/events?resourceVersion=60628": dial tcp 127.0.0.1:43577: connect: connection refused
E1121 20:35:55.253903   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-kb90fe/events?resourceVersion=8380": dial tcp: lookup capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 20:36:03.002867   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://127.0.0.1:43577/api/v1/namespaces/capz-e2e-1dn5e2/events?resourceVersion=60628": dial tcp 127.0.0.1:43577: connect: connection refused
E1121 20:36:41.147566   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://127.0.0.1:43577/api/v1/namespaces/capz-e2e-1dn5e2/events?resourceVersion=60628": dial tcp 127.0.0.1:43577: connect: connection refused
E1121 20:36:53.449349   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-kb90fe/events?resourceVersion=8380": dial tcp: lookup capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 20:37:23.161428   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://127.0.0.1:43577/api/v1/namespaces/capz-e2e-1dn5e2/events?resourceVersion=60628": dial tcp 127.0.0.1:43577: connect: connection refused
E1121 20:37:29.861395   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-kb90fe/events?resourceVersion=8380": dial tcp: lookup capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 20:38:06.896170   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-kb90fe/events?resourceVersion=8380": dial tcp: lookup capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 20:38:21.419281   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://127.0.0.1:43577/api/v1/namespaces/capz-e2e-1dn5e2/events?resourceVersion=60628": dial tcp 127.0.0.1:43577: connect: connection refused
STEP: Redacting sensitive information from logs
E1121 20:38:56.244796   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-kb90fe/events?resourceVersion=8380": dial tcp: lookup capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 20:39:30.086173   23995 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-kb90fe/events?resourceVersion=8380": dial tcp: lookup capz-e2e-kb90fe-public-custom-vnet-6970b85.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host


• Failure [3360.836 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating an AKS cluster
... skipping 50 lines ...
    testing.tRunner(0xc000103b00, 0x23174f8)
    	/usr/local/go/src/testing/testing.go:1193 +0xef
    created by testing.(*T).Run
    	/usr/local/go/src/testing/testing.go:1238 +0x2b3
------------------------------
STEP: Tearing down the management cluster
INFO: Deleting the kind cluster "capz-e2e" failed. You may need to remove this by hand.



Summarizing 1 Failure:

[Fail] Workload cluster creation Creating an AKS cluster [It] with a single control plane node and 1 node 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/aks.go:216

Ran 9 of 22 Specs in 6903.626 seconds
FAIL! -- 8 Passed | 1 Failed | 0 Pending | 13 Skipped


Ginkgo ran 1 suite in 1h57m29.37025161s
Test Suite Failed
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
{"component":"entrypoint","file":"prow/entrypoint/run.go:252","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process gracefully exited before 15m0s grace period","severity":"error","time":"2021-11-21T20:40:07Z"}