Result: FAILURE
Tests: 1 failed / 7 succeeded
Started: 2022-05-10 19:41
Elapsed: 1h44m
Revision: release-0.5

Test Failures


capz-e2e Workload cluster creation Creating an AKS cluster with a single control plane node and 1 node 29m18s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\san\sAKS\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454
Timed out after 1200.000s.
Expected
    <string>: Provisioning
to equal
    <string>: Provisioned
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.8-0.20220215165403-0234afe87ffe/framework/cluster_helpers.go:134
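The failure above is a Gomega Eventually timeout: the framework polled the Cluster object's phase for the 1200s window and it never moved past Provisioning. A minimal sketch of that style of assertion, assuming the v1alpha4 types matching this run's cluster-api module and illustrative client/namespace/name arguments (the suite's real helper lives at the cluster_helpers.go frame above):

```go
package e2e

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1alpha4"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// waitForClusterProvisioned is a hedged sketch, not the framework's exact code:
// poll the Cluster's status.phase until it reaches "Provisioned" or the timeout
// fires, which produces exactly the "Expected Provisioning to equal Provisioned"
// failure shown above.
func waitForClusterProvisioned(ctx context.Context, c client.Client, namespace, name string) {
	Eventually(func() string {
		cluster := &clusterv1.Cluster{}
		if err := c.Get(ctx, client.ObjectKey{Namespace: namespace, Name: name}, cluster); err != nil {
			return "" // transient errors read as "not provisioned yet"
		}
		return cluster.Status.Phase
	}, 20*time.Minute, 10*time.Second).Should(Equal("Provisioned")) // 20m = the 1200s in the log
}
```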
				
stdout/stderr from junit.e2e_suite.3.xml



7 Passed Tests

14 Skipped Tests

Error lines from build-log.txt

... skipping 431 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:269

INFO: "With ipv6 worker node" started at Tue, 10 May 2022 19:49:27 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-3jq2t5" for hosting the cluster
May 10 19:49:27.549: INFO: starting to create namespace for hosting the "capz-e2e-3jq2t5" test spec
2022/05/10 19:49:27 failed trying to get namespace (capz-e2e-3jq2t5):namespaces "capz-e2e-3jq2t5" not found
INFO: Creating namespace capz-e2e-3jq2t5
INFO: Creating event watcher for namespace "capz-e2e-3jq2t5"
May 10 19:49:27.625: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-3jq2t5-ipv6
INFO: Creating the workload cluster with name "capz-e2e-3jq2t5-ipv6" using the "ipv6" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
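As an aside on the "Creating event watcher" line above: a hedged sketch of a namespace-scoped event watcher built on client-go's typed clientset (the framework's actual helper may collect and persist the events differently):

```go
package e2e

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchNamespaceEvents is a minimal sketch of an event watcher like the one the
// log reports creating for "capz-e2e-3jq2t5"; names and handling are illustrative.
func watchNamespaceEvents(ctx context.Context, cs kubernetes.Interface, namespace string) error {
	w, err := cs.CoreV1().Events(namespace).Watch(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Printf("%s: %v\n", ev.Type, ev.Object) // e.g. print each event as it arrives
	}
	return nil
}
```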
... skipping 93 lines ...
STEP: Fetching activity logs took 621.506348ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-3jq2t5" namespace
STEP: Deleting all clusters in the capz-e2e-3jq2t5 namespace
STEP: Deleting cluster capz-e2e-3jq2t5-ipv6
INFO: Waiting for the Cluster capz-e2e-3jq2t5/capz-e2e-3jq2t5-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-3jq2t5-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-bk5n4, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-4bdfv, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-3jq2t5-ipv6-control-plane-mpzg9, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-c4j6d, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-nm8p2, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-3jq2t5-ipv6-control-plane-l4s6l, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-qvt9j, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-nsrc6, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-3jq2t5-ipv6-control-plane-mpzg9, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-3jq2t5-ipv6-control-plane-mpzg9, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-3jq2t5-ipv6-control-plane-6bj9r, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-3jq2t5-ipv6-control-plane-6bj9r, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-3jq2t5-ipv6-control-plane-mpzg9, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-3jq2t5-ipv6-control-plane-l4s6l, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-h5952, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-3jq2t5-ipv6-control-plane-6bj9r, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-3jq2t5-ipv6-control-plane-l4s6l, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-3jq2t5-ipv6-control-plane-l4s6l, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-3jq2t5-ipv6-control-plane-6bj9r, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-d4zwc, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-2fl9j, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-6fchh, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-25gpt, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-3jq2t5
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 18m13s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a single control plane node and an AzureMachinePool with 2 nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:315

INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" started at Tue, 10 May 2022 20:07:40 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-gwwh4n" for hosting the cluster
May 10 20:07:40.077: INFO: starting to create namespace for hosting the "capz-e2e-gwwh4n" test spec
2022/05/10 20:07:40 failed trying to get namespace (capz-e2e-gwwh4n):namespaces "capz-e2e-gwwh4n" not found
INFO: Creating namespace capz-e2e-gwwh4n
INFO: Creating event watcher for namespace "capz-e2e-gwwh4n"
May 10 20:07:40.114: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-gwwh4n-vmss
INFO: Creating the workload cluster with name "capz-e2e-gwwh4n-vmss" using the "machine-pool" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 52 lines ...
STEP: waiting for job default/curl-to-elb-job29hpadkwfsy to be complete
May 10 20:16:11.319: INFO: waiting for job default/curl-to-elb-job29hpadkwfsy to be complete
May 10 20:16:21.393: INFO: job default/curl-to-elb-job29hpadkwfsy is complete, took 10.074321182s
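The curl-to-elb job above verifies the external LB from inside the workload cluster. A hedged sketch of such a Job object; the image, names, and flags are illustrative rather than the test's exact choices:

```go
package e2e

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// curlToELBJob sketches a Job that curls the external LB IP from inside the
// workload cluster and succeeds only if the request does.
func curlToELBJob(name, elbIP string) *batchv1.Job {
	return &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: "default"},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyNever, // let the Job's failure count reflect curl failures
					Containers: []corev1.Container{{
						Name:    "curl",
						Image:   "curlimages/curl", // hypothetical image choice
						Command: []string{"curl", "--fail", "--max-time", "10", "http://" + elbIP},
					}},
				},
			},
		},
	}
}
```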
STEP: connecting directly to the external LB service
May 10 20:16:21.393: INFO: starting attempts to connect directly to the external LB service
2022/05/10 20:16:21 [DEBUG] GET http://20.121.255.240
2022/05/10 20:16:51 [ERR] GET http://20.121.255.240 request failed: Get "http://20.121.255.240": dial tcp 20.121.255.240:80: i/o timeout
2022/05/10 20:16:51 [DEBUG] GET http://20.121.255.240: retrying in 1s (4 left)
May 10 20:16:52.461: INFO: successfully connected to the external LB service
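The [DEBUG]/[ERR] retry lines above match the default logger output of hashicorp/go-retryablehttp, so the direct-connection check plausibly looks like this sketch (the retry count and error handling are assumptions):

```go
package e2e

import (
	"fmt"

	retryablehttp "github.com/hashicorp/go-retryablehttp"
)

// connectToELB sketches the "connecting directly to the external LB service"
// step, assuming a go-retryablehttp client whose default logger emits the
// "[DEBUG] GET ... retrying in 1s (4 left)" lines seen above.
func connectToELB(url string) error {
	client := retryablehttp.NewClient()
	client.RetryMax = 5 // illustrative; the log shows 4 retries remaining after the first failure
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("failed to connect to external LB: %w", err)
	}
	defer resp.Body.Close()
	fmt.Println("connected:", resp.Status)
	return nil
}
```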
STEP: deleting the test resources
May 10 20:16:52.461: INFO: starting to delete external LB service webvfnedj-elb
May 10 20:16:52.517: INFO: starting to delete deployment webvfnedj
May 10 20:16:52.546: INFO: starting to delete job curl-to-elb-job29hpadkwfsy
... skipping 43 lines ...
STEP: Fetching activity logs took 758.758084ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-gwwh4n" namespace
STEP: Deleting all clusters in the capz-e2e-gwwh4n namespace
STEP: Deleting cluster capz-e2e-gwwh4n-vmss
INFO: Waiting for the Cluster capz-e2e-gwwh4n/capz-e2e-gwwh4n-vmss to be deleted
STEP: Waiting for cluster capz-e2e-gwwh4n-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-wn5q2, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-cppsw, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-s5jw2, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-mvmdn, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-gwwh4n
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" ran for 20m46s on Ginkgo node 2 of 3

... skipping 12 lines ...
  With 3 control-plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:203

INFO: "With 3 control-plane nodes and 2 worker nodes" started at Tue, 10 May 2022 19:49:27 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-oegsik" for hosting the cluster
May 10 19:49:27.549: INFO: starting to create namespace for hosting the "capz-e2e-oegsik" test spec
2022/05/10 19:49:27 failed trying to get namespace (capz-e2e-oegsik):namespaces "capz-e2e-oegsik" not found
INFO: Creating namespace capz-e2e-oegsik
INFO: Creating event watcher for namespace "capz-e2e-oegsik"
May 10 19:49:27.616: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-oegsik-ha
INFO: Creating the workload cluster with name "capz-e2e-oegsik-ha" using the "(default)" template (Kubernetes v1.22.1, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 67 lines ...
May 10 19:58:53.222: INFO: starting to delete external LB service webbh04sp-elb
May 10 19:58:53.306: INFO: starting to delete deployment webbh04sp
May 10 19:58:53.342: INFO: starting to delete job curl-to-elb-jobm7j8vmnwotd
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
May 10 19:58:53.449: INFO: starting to create dev deployment namespace
2022/05/10 19:58:53 failed trying to get namespace (development):namespaces "development" not found
2022/05/10 19:58:53 namespace development does not exist, creating...
STEP: Creating production namespace
May 10 19:58:53.519: INFO: starting to create prod deployment namespace
2022/05/10 19:58:53 failed trying to get namespace (production):namespaces "production" not found
2022/05/10 19:58:53 namespace production does not exist, creating...
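The "failed trying to get namespace ... not found" followed by "does not exist, creating..." pairs above reflect a get-then-create pattern; a minimal client-go sketch, not the suite's exact helper:

```go
package e2e

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureNamespace fetches the namespace and creates it only when the Get
// returns NotFound, mirroring the log output above.
func ensureNamespace(ctx context.Context, cs kubernetes.Interface, name string) (*corev1.Namespace, error) {
	ns, err := cs.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
	if err == nil {
		return ns, nil // already exists
	}
	if !apierrors.IsNotFound(err) {
		return nil, err // a real error, not just absence
	}
	return cs.CoreV1().Namespaces().Create(ctx,
		&corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: name}},
		metav1.CreateOptions{})
}
```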
STEP: Creating frontendProd, backend and network-policy pod deployments
May 10 19:58:53.606: INFO: starting to create frontend-prod deployments
May 10 19:58:53.646: INFO: starting to create frontend-dev deployments
May 10 19:58:53.686: INFO: starting to create backend deployments
May 10 19:58:53.745: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
May 10 19:59:16.623: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.84.66 port 80: Connection timed out
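The deny-ingress step above selects the backend pods and attaches a policy that declares the Ingress policy type with no rules, which blocks all ingress (hence the curl timeout). A hedged client-go sketch of such a policy, using the labels named in the log:

```go
package e2e

import (
	"context"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// applyBackendDenyIngress is a minimal sketch (not the test's exact helper) of
// the development/backend-deny-ingress policy applied above.
func applyBackendDenyIngress(ctx context.Context, cs kubernetes.Interface) error {
	policy := &networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "backend-deny-ingress",
			Namespace: "development",
		},
		Spec: networkingv1.NetworkPolicySpec{
			// Target the app: webapp, role: backend pods named in the log.
			PodSelector: metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "webapp", "role": "backend"},
			},
			// An Ingress policy type with no ingress rules denies all ingress.
			PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
		},
	}
	_, err := cs.NetworkingV1().NetworkPolicies("development").Create(ctx, policy, metav1.CreateOptions{})
	return err
}
```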

STEP: Cleaning up after ourselves
May 10 20:01:28.144: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
STEP: Applying a network policy to deny egress access in development namespace
May 10 20:01:28.303: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.84.66 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.84.66 port 80: Connection timed out

STEP: Cleaning up after ourselves
May 10 20:05:50.290: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
May 10 20:05:50.471: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.7.133 port 80: Connection timed out

STEP: Cleaning up after ourselves
May 10 20:08:01.548: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
May 10 20:08:01.727: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.84.65 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.7.133 port 80: Connection timed out

STEP: Cleaning up after ourselves
May 10 20:12:23.692: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
May 10 20:12:23.916: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.84.66 port 80: Connection timed out

STEP: Cleaning up after ourselves
May 10 20:14:34.576: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
May 10 20:14:34.761: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.84.66 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: Dumping logs from the "capz-e2e-oegsik-ha" workload cluster
STEP: Dumping workload cluster capz-e2e-oegsik/capz-e2e-oegsik-ha logs
May 10 20:16:46.243: INFO: INFO: Collecting logs for node capz-e2e-oegsik-ha-control-plane-9fvrp in cluster capz-e2e-oegsik-ha in namespace capz-e2e-oegsik

May 10 20:16:56.972: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-oegsik-ha-control-plane-9fvrp
... skipping 39 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-dqgmb, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-2hsgr, container calico-node
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-oegsik-ha-control-plane-9sjkn, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-5r7g8, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-4csqf, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-oegsik-ha-control-plane-9fvrp, container kube-apiserver
STEP: Got error while iterating over activity logs for resource group capz-e2e-oegsik-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001192848s
STEP: Dumping all the Cluster API resources in the "capz-e2e-oegsik" namespace
STEP: Deleting all clusters in the capz-e2e-oegsik namespace
STEP: Deleting cluster capz-e2e-oegsik-ha
INFO: Waiting for the Cluster capz-e2e-oegsik/capz-e2e-oegsik-ha to be deleted
STEP: Waiting for cluster capz-e2e-oegsik-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-oegsik-ha-control-plane-6pffz, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-oegsik-ha-control-plane-6pffz, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-oegsik-ha-control-plane-6pffz, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-2hsgr, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-dqgmb, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-oegsik-ha-control-plane-9fvrp, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-lhcg7, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-oegsik-ha-control-plane-9fvrp, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-oegsik-ha-control-plane-9fvrp, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-84s5n, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-htfrw, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-oegsik-ha-control-plane-6pffz, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-5226l, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5r7g8, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-oegsik-ha-control-plane-9fvrp, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-rpscc, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-4csqf, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-g5dbl, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-59jbs, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-oegsik
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 worker nodes" ran for 42m30s on Ginkgo node 3 of 3

... skipping 8 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:141

INFO: "Creates a public management cluster in the same vnet" started at Tue, 10 May 2022 19:49:27 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-zv3jxp" for hosting the cluster
May 10 19:49:27.526: INFO: starting to create namespace for hosting the "capz-e2e-zv3jxp" test spec
2022/05/10 19:49:27 failed trying to get namespace (capz-e2e-zv3jxp):namespaces "capz-e2e-zv3jxp" not found
INFO: Creating namespace capz-e2e-zv3jxp
INFO: Creating event watcher for namespace "capz-e2e-zv3jxp"
May 10 19:49:27.566: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-zv3jxp-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-2b26s, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-zv3jxp-public-custom-vnet-control-plane-7m2vt, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-fwggf, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-zv3jxp-public-custom-vnet-control-plane-7m2vt, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-rzrf9, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-zv3jxp-public-custom-vnet-control-plane-7m2vt, container kube-scheduler
STEP: Got error while iterating over activity logs for resource group capz-e2e-zv3jxp-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000623473s
STEP: Dumping all the Cluster API resources in the "capz-e2e-zv3jxp" namespace
STEP: Deleting all clusters in the capz-e2e-zv3jxp namespace
STEP: Deleting cluster capz-e2e-zv3jxp-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-zv3jxp/capz-e2e-zv3jxp-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-zv3jxp-public-custom-vnet to be deleted
W0510 20:38:01.238859   24216 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I0510 20:38:32.355045   24216 trace.go:205] Trace[2116768800]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (10-May-2022 20:38:02.354) (total time: 30000ms):
Trace[2116768800]: [30.000942484s] [30.000942484s] END
E0510 20:38:32.355111   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp 20.124.45.9:6443: i/o timeout
I0510 20:39:04.873660   24216 trace.go:205] Trace[2049821915]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (10-May-2022 20:38:34.872) (total time: 30001ms):
Trace[2049821915]: [30.001479282s] [30.001479282s] END
E0510 20:39:04.873741   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp 20.124.45.9:6443: i/o timeout
I0510 20:39:38.957291   24216 trace.go:205] Trace[572719943]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (10-May-2022 20:39:08.955) (total time: 30002ms):
Trace[572719943]: [30.002142032s] [30.002142032s] END
E0510 20:39:38.957387   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp 20.124.45.9:6443: i/o timeout
I0510 20:40:17.935027   24216 trace.go:205] Trace[22700151]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (10-May-2022 20:39:47.934) (total time: 30000ms):
Trace[22700151]: [30.000776655s] [30.000776655s] END
E0510 20:40:17.935198   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp 20.124.45.9:6443: i/o timeout
I0510 20:41:09.548827   24216 trace.go:205] Trace[2046701567]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (10-May-2022 20:40:39.547) (total time: 30000ms):
Trace[2046701567]: [30.000796551s] [30.000796551s] END
E0510 20:41:09.548898   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp 20.124.45.9:6443: i/o timeout
I0510 20:42:18.424441   24216 trace.go:205] Trace[92833028]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (10-May-2022 20:41:48.422) (total time: 30001ms):
Trace[92833028]: [30.001476696s] [30.001476696s] END
E0510 20:42:18.424504   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp 20.124.45.9:6443: i/o timeout
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-zv3jxp
STEP: Running additional cleanup for the "create-workload-cluster" test spec
May 10 20:43:21.927: INFO: deleting an existing virtual network "custom-vnet"
May 10 20:43:32.355: INFO: deleting an existing route table "node-routetable"
May 10 20:43:34.690: INFO: deleting an existing network security group "node-nsg"
I0510 20:43:44.009529   24216 trace.go:205] Trace[821921973]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (10-May-2022 20:43:14.007) (total time: 30001ms):
Trace[821921973]: [30.001608636s] [30.001608636s] END
E0510 20:43:44.009602   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp 20.124.45.9:6443: i/o timeout
May 10 20:43:44.970: INFO: deleting an existing network security group "control-plane-nsg"
May 10 20:43:55.284: INFO: verifying the existing resource group "capz-e2e-zv3jxp-public-custom-vnet" is empty
May 10 20:43:55.346: INFO: deleting the existing resource group "capz-e2e-zv3jxp-public-custom-vnet"
E0510 20:44:25.482062   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0510 20:45:05.248544   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E0510 20:46:02.990288   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 56m40s on Ginkgo node 1 of 3


• [SLOW TEST:3400.228 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a 1 control plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:419

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Tue, 10 May 2022 20:28:26 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-2bw21h" for hosting the cluster
May 10 20:28:26.213: INFO: starting to create namespace for hosting the "capz-e2e-2bw21h" test spec
2022/05/10 20:28:26 failed trying to get namespace (capz-e2e-2bw21h):namespaces "capz-e2e-2bw21h" not found
INFO: Creating namespace capz-e2e-2bw21h
INFO: Creating event watcher for namespace "capz-e2e-2bw21h"
May 10 20:28:26.254: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-2bw21h-oot
INFO: Creating the workload cluster with name "capz-e2e-2bw21h-oot" using the "external-cloud-provider" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 98 lines ...
STEP: Fetching activity logs took 593.543506ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-2bw21h" namespace
STEP: Deleting all clusters in the capz-e2e-2bw21h namespace
STEP: Deleting cluster capz-e2e-2bw21h-oot
INFO: Waiting for the Cluster capz-e2e-2bw21h/capz-e2e-2bw21h-oot to be deleted
STEP: Waiting for cluster capz-e2e-2bw21h-oot to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-44kmm, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-controller-manager, container cloud-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-2bw21h-oot-control-plane-q5qrq, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-7ppz7, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-4df2r, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-2bw21h-oot-control-plane-q5qrq, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-b64lq, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-2bw21h-oot-control-plane-q5qrq, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-vdz7n, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-mrbcz, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-2bw21h-oot-control-plane-q5qrq, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-2bw21h
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 20m37s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454

INFO: "with a single control plane node and 1 node" started at Tue, 10 May 2022 20:31:57 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-ij5hsx" for hosting the cluster
May 10 20:31:57.234: INFO: starting to create namespace for hosting the "capz-e2e-ij5hsx" test spec
2022/05/10 20:31:57 failed trying to get namespace (capz-e2e-ij5hsx):namespaces "capz-e2e-ij5hsx" not found
INFO: Creating namespace capz-e2e-ij5hsx
INFO: Creating event watcher for namespace "capz-e2e-ij5hsx"
May 10 20:31:57.293: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-ij5hsx-aks
INFO: Creating the workload cluster with name "capz-e2e-ij5hsx-aks" using the "aks-multi-tenancy" template (Kubernetes v1.22.6, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 83 lines ...
  with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543

INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Tue, 10 May 2022 20:49:02 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-4wjolj" for hosting the cluster
May 10 20:49:02.985: INFO: starting to create namespace for hosting the "capz-e2e-4wjolj" test spec
2022/05/10 20:49:02 failed trying to get namespace (capz-e2e-4wjolj):namespaces "capz-e2e-4wjolj" not found
INFO: Creating namespace capz-e2e-4wjolj
INFO: Creating event watcher for namespace "capz-e2e-4wjolj"
May 10 20:49:03.021: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-4wjolj-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-4wjolj-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 129 lines ...
STEP: Fetching activity logs took 1.047437251s
STEP: Dumping all the Cluster API resources in the "capz-e2e-4wjolj" namespace
STEP: Deleting all clusters in the capz-e2e-4wjolj namespace
STEP: Deleting cluster capz-e2e-4wjolj-win-vmss
INFO: Waiting for the Cluster capz-e2e-4wjolj/capz-e2e-4wjolj-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-4wjolj-win-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-fzvds, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-wvmpm, container kube-flannel: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-4wjolj
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 29m20s on Ginkgo node 2 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Tue, 10 May 2022 20:46:07 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-k1l7cb" for hosting the cluster
May 10 20:46:07.758: INFO: starting to create namespace for hosting the "capz-e2e-k1l7cb" test spec
2022/05/10 20:46:07 failed trying to get namespace (capz-e2e-k1l7cb):namespaces "capz-e2e-k1l7cb" not found
INFO: Creating namespace capz-e2e-k1l7cb
INFO: Creating event watcher for namespace "capz-e2e-k1l7cb"
May 10 20:46:07.804: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-k1l7cb-win-ha
INFO: Creating the workload cluster with name "capz-e2e-k1l7cb-win-ha" using the "windows" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 12 lines ...
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created
clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-k1l7cb-win-ha-flannel created
configmap/cni-capz-e2e-k1l7cb-win-ha-flannel created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E0510 20:46:47.049269   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0510 20:47:20.730583   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by capz-e2e-k1l7cb/capz-e2e-k1l7cb-win-ha-control-plane to be provisioned
STEP: Waiting for one control plane node to exist
E0510 20:48:08.511476   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0510 20:49:06.067413   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be ready
INFO: Waiting for the remaining control plane machines managed by capz-e2e-k1l7cb/capz-e2e-k1l7cb-win-ha-control-plane to be provisioned
STEP: Waiting for all control plane nodes to exist
E0510 20:50:00.623432   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0510 20:50:37.210213   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0510 20:51:24.444416   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0510 20:51:59.926659   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0510 20:52:46.473896   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0510 20:53:22.650318   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0510 20:53:54.664422   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0510 20:54:25.155379   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane capz-e2e-k1l7cb/capz-e2e-k1l7cb-win-ha-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Waiting for the workload nodes to exist
INFO: Waiting for the machine pools to be provisioned
... skipping 3 lines ...
May 10 20:54:39.995: INFO: starting to wait for deployment to become available
May 10 20:55:00.099: INFO: Deployment default/webmu3598 is now available, took 20.104401233s
STEP: creating an internal Load Balancer service
May 10 20:55:00.099: INFO: starting to create an internal Load Balancer service
STEP: waiting for service default/webmu3598-ilb to be available
May 10 20:55:00.204: INFO: waiting for service default/webmu3598-ilb to be available
E0510 20:55:07.347016   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
May 10 20:55:50.438: INFO: service default/webmu3598-ilb is available, took 50.233735522s
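On Azure, an internal LB Service like webmu3598-ilb above is an ordinary type: LoadBalancer Service carrying the azure-load-balancer-internal annotation, which makes the cloud provider allocate an internal rather than public load balancer. A hedged sketch; the selector and port are assumptions:

```go
package e2e

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// internalLBService sketches an ILB Service like the one created above.
func internalLBService(name string) *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name:      name + "-ilb",
			Namespace: "default",
			Annotations: map[string]string{
				// Azure cloud-provider annotation selecting an internal LB.
				"service.beta.kubernetes.io/azure-load-balancer-internal": "true",
			},
		},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeLoadBalancer,
			Selector: map[string]string{"app": name}, // assumed to match the deployment's pods
			Ports:    []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(80)}},
		},
	}
}
```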
STEP: connecting to the internal LB service from a curl pod
May 10 20:55:50.468: INFO: starting to create a curl to ilb job
STEP: waiting for job default/curl-to-ilb-job5b2lu to be complete
May 10 20:55:50.530: INFO: waiting for job default/curl-to-ilb-job5b2lu to be complete
May 10 20:56:00.602: INFO: job default/curl-to-ilb-job5b2lu is complete, took 10.072306014s
STEP: deleting the ilb test resources
May 10 20:56:00.602: INFO: deleting the ilb service: webmu3598-ilb
May 10 20:56:00.689: INFO: deleting the ilb job: curl-to-ilb-job5b2lu
STEP: creating an external Load Balancer service
May 10 20:56:00.734: INFO: starting to create an external Load Balancer service
STEP: waiting for service default/webmu3598-elb to be available
May 10 20:56:00.821: INFO: waiting for service default/webmu3598-elb to be available
E0510 20:56:05.968996   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
May 10 20:56:20.918: INFO: service default/webmu3598-elb is available, took 20.096891749s
STEP: connecting to the external LB service from a curl pod
May 10 20:56:20.950: INFO: starting to create curl-to-elb job
STEP: waiting for job default/curl-to-elb-job0958xv4f4j5 to be complete
May 10 20:56:20.998: INFO: waiting for job default/curl-to-elb-job0958xv4f4j5 to be complete
May 10 20:56:31.070: INFO: job default/curl-to-elb-job0958xv4f4j5 is complete, took 10.072525582s
... skipping 6 lines ...
May 10 20:56:31.466: INFO: starting to delete deployment webmu3598
May 10 20:56:31.504: INFO: starting to delete job curl-to-elb-job0958xv4f4j5
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web-windowsw5g00y to be available
May 10 20:56:31.652: INFO: starting to wait for deployment to become available
E0510 20:56:39.077329   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0510 20:57:22.774167   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
May 10 20:57:31.912: INFO: Deployment default/web-windowsw5g00y is now available, took 1m0.260147487s
STEP: creating an internal Load Balancer service
May 10 20:57:31.912: INFO: starting to create an internal Load Balancer service
STEP: waiting for service default/web-windowsw5g00y-ilb to be available
May 10 20:57:32.001: INFO: waiting for service default/web-windowsw5g00y-ilb to be available
May 10 20:57:42.066: INFO: service default/web-windowsw5g00y-ilb is available, took 10.065318949s
... skipping 6 lines ...
May 10 20:57:52.211: INFO: deleting the ilb service: web-windowsw5g00y-ilb
May 10 20:57:52.307: INFO: deleting the ilb job: curl-to-ilb-jobxph5u
STEP: creating an external Load Balancer service
May 10 20:57:52.350: INFO: starting to create an external Load Balancer service
STEP: waiting for service default/web-windowsw5g00y-elb to be available
May 10 20:57:52.420: INFO: waiting for service default/web-windowsw5g00y-elb to be available
E0510 20:58:13.806601   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
May 10 20:58:42.617: INFO: service default/web-windowsw5g00y-elb is available, took 50.197641367s
STEP: connecting to the external LB service from a curl pod
May 10 20:58:42.649: INFO: starting to create curl-to-elb job
STEP: waiting for job default/curl-to-elb-jobykg2wmth3cn to be complete
May 10 20:58:42.696: INFO: waiting for job default/curl-to-elb-jobykg2wmth3cn to be complete
May 10 20:58:52.768: INFO: job default/curl-to-elb-jobykg2wmth3cn is complete, took 10.072365984s
... skipping 10 lines ...
May 10 20:58:53.046: INFO: INFO: Collecting logs for node capz-e2e-k1l7cb-win-ha-control-plane-lxkzg in cluster capz-e2e-k1l7cb-win-ha in namespace capz-e2e-k1l7cb

May 10 20:59:03.820: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-k1l7cb-win-ha-control-plane-lxkzg

May 10 20:59:04.588: INFO: INFO: Collecting logs for node capz-e2e-k1l7cb-win-ha-control-plane-zdrq9 in cluster capz-e2e-k1l7cb-win-ha in namespace capz-e2e-k1l7cb

E0510 20:59:11.818270   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
May 10 20:59:15.029: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-k1l7cb-win-ha-control-plane-zdrq9

May 10 20:59:15.415: INFO: INFO: Collecting logs for node capz-e2e-k1l7cb-win-ha-control-plane-2kbzt in cluster capz-e2e-k1l7cb-win-ha in namespace capz-e2e-k1l7cb

May 10 20:59:22.875: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-k1l7cb-win-ha-control-plane-2kbzt

May 10 20:59:23.168: INFO: INFO: Collecting logs for node capz-e2e-k1l7cb-win-ha-md-0-sktwx in cluster capz-e2e-k1l7cb-win-ha in namespace capz-e2e-k1l7cb

May 10 20:59:34.089: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-k1l7cb-win-ha-md-0-sktwx

May 10 20:59:34.395: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster capz-e2e-k1l7cb-win-ha in namespace capz-e2e-k1l7cb

E0510 21:00:05.898246   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
May 10 21:00:11.970: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-k1l7cb-win-ha-md-win-vz958

STEP: Dumping workload cluster capz-e2e-k1l7cb/capz-e2e-k1l7cb-win-ha kube-system pod logs
STEP: Fetching kube-system pod logs took 305.801634ms
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-p2d5t, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-t4l7j, container kube-proxy
... skipping 23 lines ...
STEP: Fetching activity logs took 1.02829977s
STEP: Dumping all the Cluster API resources in the "capz-e2e-k1l7cb" namespace
STEP: Deleting all clusters in the capz-e2e-k1l7cb namespace
STEP: Deleting cluster capz-e2e-k1l7cb-win-ha
INFO: Waiting for the Cluster capz-e2e-k1l7cb/capz-e2e-k1l7cb-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-k1l7cb-win-ha to be deleted
E0510 21:00:48.262158   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 15 lines ...
E0510 21:12:31.496024   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
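The repeated reflector errors above come from the event watcher created for the earlier capz-e2e-zv3jxp spec: that cluster and its public DNS record are already gone, so client-go's reflector keeps retrying its list/watch and logging the same DNS failure until the watcher is torn down. A minimal Go sketch of that failure mode (the host name is a placeholder that never resolves, not the real test endpoint):

    package main

    import (
        "errors"
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Hypothetical host standing in for the deleted apiserver DNS record;
        // the .invalid TLD is guaranteed to return NXDOMAIN.
        host := "deleted-apiserver.example.invalid"
        for i := 0; i < 3; i++ {
            _, err := net.LookupHost(host)
            var dnsErr *net.DNSError
            if errors.As(err, &dnsErr) && dnsErr.IsNotFound {
                // Same class of failure as the reflector lines above.
                fmt.Printf("lookup %s: no such host (will retry)\n", host)
            }
            // client-go retries with backoff; a fixed sleep keeps the sketch short.
            time.Sleep(time.Second)
        }
    }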
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-k1l7cb-win-ha-control-plane-lxkzg, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-t4l7j, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-k1l7cb-win-ha-control-plane-lxkzg, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-v7454, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-k64mn, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-p2d5t, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-hmmsx, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-pj5cg, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-8d9ng, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-k1l7cb-win-ha-control-plane-2kbzt, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-8q9c8, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-vb8p7, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-k1l7cb-win-ha-control-plane-2kbzt, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-7nqdq, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-k1l7cb-win-ha-control-plane-lxkzg, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-k1l7cb-win-ha-control-plane-lxkzg, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-k1l7cb-win-ha-control-plane-2kbzt, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-k1l7cb-win-ha-control-plane-2kbzt, container kube-scheduler: http2: client connection lost
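The "http2: client connection lost" lines above are the log-streaming side of teardown: the suite follows each kube-system pod's logs over HTTP/2, and deleting the workload cluster's control plane mid-stream surfaces as a lost connection rather than a clean EOF. An illustrative client-go sketch of that pattern (pod name and kubeconfig source are placeholders; this is not the framework's actual code):

    package main

    import (
        "context"
        "fmt"
        "io"
        "os"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // Placeholder pod name; Follow keeps the stream open until it breaks.
        req := cs.CoreV1().Pods("kube-system").GetLogs("kube-proxy-xxxxx",
            &corev1.PodLogOptions{Container: "kube-proxy", Follow: true})
        stream, err := req.Stream(context.Background())
        if err != nil {
            panic(err)
        }
        defer stream.Close()
        // io.Copy returns an error such as "http2: client connection lost"
        // if the control plane is deleted while the stream is open.
        if _, err := io.Copy(os.Stdout, stream); err != nil {
            fmt.Println("Got error while streaming logs:", err)
        }
    }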
E0510 21:13:06.739018   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 13 lines ...
E0510 21:23:11.527038   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-k1l7cb
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E0510 21:23:52.120350   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 38m8s on Ginkgo node 1 of 3


• [SLOW TEST:2288.227 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 5 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Workload cluster creation Creating an AKS cluster [It] with a single control plane node and 1 node 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.8-0.20220215165403-0234afe87ffe/framework/cluster_helpers.go:134

Ran 8 of 22 Specs in 5809.212 seconds
FAIL! -- 7 Passed | 1 Failed | 0 Pending | 14 Skipped
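The one failure is the AKS cluster-provisioning wait in the cluster-api test framework's cluster_helpers.go: the helper polls the Cluster object until its phase reaches Provisioned, and here the phase never got there before the wait expired. A simplified plain-Go polling loop standing in for the framework's Gomega-based assertion (the timeout value and the getClusterPhase helper are assumptions for the sketch):

    package main

    import (
        "context"
        "fmt"
        "time"
    )

    // getClusterPhase is a stand-in for reading Cluster.Status.Phase through a
    // controller-runtime client; hypothetical, for the sketch only.
    func getClusterPhase(ctx context.Context) string { return "Provisioning" }

    func main() {
        // The bound is an assumption for the sketch, not the framework's setting.
        ctx, cancel := context.WithTimeout(context.Background(), 20*time.Minute)
        defer cancel()
        ticker := time.NewTicker(10 * time.Second)
        defer ticker.Stop()
        for {
            select {
            case <-ctx.Done():
                // Corresponds to the failure summarized above: the phase
                // never reached Provisioned before the deadline.
                fmt.Println("timed out; last phase:", getClusterPhase(ctx))
                return
            case <-ticker.C:
                if getClusterPhase(ctx) == "Provisioned" {
                    fmt.Println("cluster is Provisioned")
                    return
                }
            }
        }
    }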


Ginkgo ran 1 suite in 1h38m20.509357971s
Test Suite Failed
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
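The redaction step above is a text pass over the collected logs before upload. A toy Go sketch of that kind of pass (the file path, variable names, and pattern are assumptions, not the job's actual redaction rules):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        // Hypothetical rule: mask the values of variables that look sensitive.
        secrets := regexp.MustCompile(`(?i)(AZURE_CLIENT_SECRET|AZURE_SUBSCRIPTION_ID)=\S+`)
        data, err := os.ReadFile("build-log.txt") // placeholder path
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        redacted := secrets.ReplaceAll(data, []byte("${1}=REDACTED"))
        _ = os.WriteFile("build-log.txt", redacted, 0o644)
    }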
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...