Result: FAILURE
Tests: 1 failed / 8 succeeded
Started: 2021-11-18 06:34
Elapsed: 1h51m
Revision: main

Test Failures


capz-e2e Workload cluster creation Creating a GPU-enabled cluster with a single control plane node and 1 node (21m24s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\sGPU\-enabled\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:413
Timed out after 900.001s.
Expected
    <int>: 0
to equal
    <int>: 1
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.1/framework/machinedeployment_helpers.go:121
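This failure comes from a Gomega `Eventually` poll: the framework helper repeatedly counts ready worker nodes and fails once the 15-minute budget elapses with the count (0) still short of the MachineDeployment's desired replicas (1). A minimal sketch of that pattern, assuming a hypothetical `countReadyWorkerNodes` stand-in for the framework's node-counting logic:

```go
package e2e

import (
	"time"

	. "github.com/onsi/gomega"
)

// countReadyWorkerNodes is a hypothetical stand-in for the logic in
// cluster-api's machinedeployment_helpers.go: it would list the nodes
// owned by the MachineDeployment and return how many are Ready.
func countReadyWorkerNodes() int {
	// ... list nodes, filter by owner reference and Ready condition ...
	return 0
}

// waitForWorkerNodes shows the polling pattern behind the failure above
// (to be called inside a Ginkgo spec): Eventually re-evaluates the count
// every 10s until it equals desiredReplicas or 15 minutes pass, then
// fails with "Timed out after 900.001s. Expected <int>: 0 to equal <int>: 1".
func waitForWorkerNodes(desiredReplicas int) {
	Eventually(countReadyWorkerNodes, 15*time.Minute, 10*time.Second).
		Should(Equal(desiredReplicas))
}
```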
				
stdout/stderr from junit.e2e_suite.3.xml



8 passed tests

15 skipped tests

Error lines from build-log.txt

... skipping 426 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:288

INFO: "With ipv6 worker node" started at Thu, 18 Nov 2021 06:41:23 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-ayqtie" for hosting the cluster
Nov 18 06:41:23.334: INFO: starting to create namespace for hosting the "capz-e2e-ayqtie" test spec
2021/11/18 06:41:23 failed trying to get namespace (capz-e2e-ayqtie):namespaces "capz-e2e-ayqtie" not found
INFO: Creating namespace capz-e2e-ayqtie
INFO: Creating event watcher for namespace "capz-e2e-ayqtie"
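The "Creating event watcher" step opens a long-lived watch on Kubernetes Events in the test namespace so they can be dumped if the spec fails. A minimal sketch with client-go (`startEventWatcher` is hypothetical, not the suite's actual helper); watches like this are what later report "http2: client connection lost" when the connection to the apiserver drops:

```go
package e2e

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// startEventWatcher opens a watch on corev1 Events in the given
// namespace and drains it in the background so cluster activity can be
// recorded for failure diagnostics.
func startEventWatcher(ctx context.Context, cs kubernetes.Interface, namespace string) error {
	w, err := cs.CoreV1().Events(namespace).Watch(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	go func() {
		defer w.Stop()
		for range w.ResultChan() {
			// record or log each event here
		}
	}()
	return nil
}
```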
Nov 18 06:41:23.417: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-ayqtie-ipv6
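The stray `%!(EXTRA string=cluster-identity-secret)` fragment is Go's fmt package reporting a format-string/argument mismatch: an argument was passed that the format string never consumes, so fmt appends an `%!(EXTRA type=value)` diagnostic. A minimal reproduction (the suite's exact call site is an assumption):

```go
package main

import "fmt"

func main() {
	// One argument too many: the format string has no verb for it, so
	// fmt appends the "%!(EXTRA type=value)" diagnostic seen in the log.
	fmt.Printf("Creating cluster identity secret\n", "cluster-identity-secret")
	// Output:
	// Creating cluster identity secret
	// %!(EXTRA string=cluster-identity-secret)
}
```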
INFO: Creating the workload cluster with name "capz-e2e-ayqtie-ipv6" using the "ipv6" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
STEP: Fetching activity logs took 568.138973ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-ayqtie" namespace
STEP: Deleting all clusters in the capz-e2e-ayqtie namespace
STEP: Deleting cluster capz-e2e-ayqtie-ipv6
INFO: Waiting for the Cluster capz-e2e-ayqtie/capz-e2e-ayqtie-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-ayqtie-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-l4ncg, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-jmlhm, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-ayqtie-ipv6-control-plane-dsdr6, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-8zhqt, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-ayqtie-ipv6-control-plane-7khbz, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-ayqtie-ipv6-control-plane-sqhqf, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-6rsmg, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-4gc5k, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-ayqtie-ipv6-control-plane-sqhqf, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-j2lv4, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-6kvpp, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-ayqtie-ipv6-control-plane-dsdr6, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-gqh5p, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-ayqtie-ipv6-control-plane-7khbz, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-ayqtie-ipv6-control-plane-dsdr6, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-ayqtie-ipv6-control-plane-dsdr6, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-ayqtie-ipv6-control-plane-7khbz, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-ayqtie-ipv6-control-plane-7khbz, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-ayqtie-ipv6-control-plane-sqhqf, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-l8dk4, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-ayqtie-ipv6-control-plane-sqhqf, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-trkcw, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-v5j95, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-ayqtie
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 17m15s on Ginkgo node 2 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:205

INFO: "With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes" started at Thu, 18 Nov 2021 06:41:23 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-1xkw3j" for hosting the cluster
Nov 18 06:41:23.333: INFO: starting to create namespace for hosting the "capz-e2e-1xkw3j" test spec
2021/11/18 06:41:23 failed trying to get namespace (capz-e2e-1xkw3j):namespaces "capz-e2e-1xkw3j" not found
INFO: Creating namespace capz-e2e-1xkw3j
INFO: Creating event watcher for namespace "capz-e2e-1xkw3j"
Nov 18 06:41:23.407: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-1xkw3j-ha
INFO: Creating the workload cluster with name "capz-e2e-1xkw3j-ha" using the "(default)" template (Kubernetes v1.22.1, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 73 lines ...
Nov 18 06:50:58.686: INFO: starting to delete external LB service web7c1sjh-elb
Nov 18 06:50:58.751: INFO: starting to delete deployment web7c1sjh
Nov 18 06:50:58.769: INFO: starting to delete job curl-to-elb-jobyyczih82811
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Nov 18 06:50:58.834: INFO: starting to create dev deployment namespace
2021/11/18 06:50:58 failed trying to get namespace (development):namespaces "development" not found
2021/11/18 06:50:58 namespace development does not exist, creating...
STEP: Creating production namespace
Nov 18 06:50:58.893: INFO: starting to create prod deployment namespace
2021/11/18 06:50:58 failed trying to get namespace (production):namespaces "production" not found
2021/11/18 06:50:58 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
Nov 18 06:50:58.935: INFO: starting to create frontend-prod deployments
Nov 18 06:50:58.958: INFO: starting to create frontend-dev deployments
Nov 18 06:50:58.985: INFO: starting to create backend deployments
Nov 18 06:50:59.021: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Nov 18 06:51:21.067: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.166.3 port 80: Connection timed out
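The timed-out curl here is the test passing, not failing: it verifies the deny-ingress policy blocks traffic to the backend pods. A sketch of what an equivalent `development/backend-deny-ingress` policy could look like as Go API objects (the suite's actual manifest may differ):

```go
package e2e

import (
	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// backendDenyIngress selects the app=webapp,role=backend pods in the
// development namespace and lists Ingress in PolicyTypes with no
// ingress rules, which denies all inbound traffic, so the curl to
// 192.168.166.3:80 times out as the test expects.
var backendDenyIngress = &networkingv1.NetworkPolicy{
	ObjectMeta: metav1.ObjectMeta{
		Name:      "backend-deny-ingress",
		Namespace: "development",
	},
	Spec: networkingv1.NetworkPolicySpec{
		PodSelector: metav1.LabelSelector{
			MatchLabels: map[string]string{"app": "webapp", "role": "backend"},
		},
		PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
	},
}
```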

STEP: Cleaning up after ourselves
Nov 18 06:53:32.368: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
STEP: Applying a network policy to deny egress access in development namespace
Nov 18 06:53:32.480: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.166.3 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.166.3 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 18 06:57:54.850: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Nov 18 06:57:55.003: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.166.2 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 18 07:00:05.586: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
Nov 18 07:00:05.745: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.87.196 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.166.2 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 18 07:04:27.728: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Nov 18 07:04:27.833: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.166.3 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 18 07:06:38.803: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
Nov 18 07:06:38.916: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.166.3 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web-windowsbav7wd to be available
Nov 18 07:08:50.654: INFO: starting to wait for deployment to become available
Nov 18 07:09:40.749: INFO: Deployment default/web-windowsbav7wd is now available, took 50.09561962s
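The availability wait above reduces to polling the Deployment's status until enough replicas are available. A minimal sketch of the check with client-go (`deploymentAvailable` is a hypothetical helper, not the suite's):

```go
package e2e

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deploymentAvailable reports whether the Deployment's available
// replica count has caught up with its desired replica count, the
// condition polled until the log reports "is now available".
func deploymentAvailable(ctx context.Context, cs kubernetes.Interface, namespace, name string) (bool, error) {
	d, err := cs.AppsV1().Deployments(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	return d.Spec.Replicas != nil && d.Status.AvailableReplicas == *d.Spec.Replicas, nil
}
```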
... skipping 51 lines ...
Nov 18 07:11:59.215: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-1xkw3j-ha-md-0-2n4t8

Nov 18 07:11:59.423: INFO: INFO: Collecting logs for node 10.1.0.7 in cluster capz-e2e-1xkw3j-ha in namespace capz-e2e-1xkw3j

Nov 18 07:12:29.327: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-1xkw3j-ha-md-win-l7l6n

Failed to get logs for machine capz-e2e-1xkw3j-ha-md-win-5646ffb764-4xdsx, cluster capz-e2e-1xkw3j/capz-e2e-1xkw3j-ha: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov 18 07:12:29.650: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster capz-e2e-1xkw3j-ha in namespace capz-e2e-1xkw3j

Nov 18 07:12:59.939: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-1xkw3j-ha-md-win-cjcf5

Failed to get logs for machine capz-e2e-1xkw3j-ha-md-win-5646ffb764-hssq6, cluster capz-e2e-1xkw3j/capz-e2e-1xkw3j-ha: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster capz-e2e-1xkw3j/capz-e2e-1xkw3j-ha kube-system pod logs
STEP: Fetching kube-system pod logs took 229.027202ms
STEP: Creating log watcher for controller kube-system/calico-node-qv925, container calico-node
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-1xkw3j-ha-control-plane-r4lsd, container kube-apiserver
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-1xkw3j-ha-control-plane-r4lsd, container etcd
STEP: Creating log watcher for controller kube-system/calico-node-wrs5f, container calico-node
... skipping 22 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-7l282, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-s7tjx, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-ghcch, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-n6nwv, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-rr88d, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-1xkw3j-ha-control-plane-qqwtt, container kube-apiserver
STEP: Got error while iterating over activity logs for resource group capz-e2e-1xkw3j-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001046739s
STEP: Dumping all the Cluster API resources in the "capz-e2e-1xkw3j" namespace
STEP: Deleting all clusters in the capz-e2e-1xkw3j namespace
STEP: Deleting cluster capz-e2e-1xkw3j-ha
INFO: Waiting for the Cluster capz-e2e-1xkw3j/capz-e2e-1xkw3j-ha to be deleted
STEP: Waiting for cluster capz-e2e-1xkw3j-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-1xkw3j-ha-control-plane-r4lsd, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-1xkw3j-ha-control-plane-k778j, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-z84t5, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-7l282, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-xhcbg, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-1xkw3j-ha-control-plane-k778j, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-42xhq, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-1xkw3j-ha-control-plane-r4lsd, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-1xkw3j-ha-control-plane-r4lsd, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-qv925, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-ghcch, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-42xhq, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-9bwpb, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-1xkw3j-ha-control-plane-k778j, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-1xkw3j-ha-control-plane-k778j, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-1xkw3j-ha-control-plane-r4lsd, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-rr88d, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-1xkw3j
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes" ran for 41m45s on Ginkgo node 3 of 3

... skipping 8 lines ...
  with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:334

INFO: "with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes" started at Thu, 18 Nov 2021 06:58:38 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-s551fa" for hosting the cluster
Nov 18 06:58:38.288: INFO: starting to create namespace for hosting the "capz-e2e-s551fa" test spec
2021/11/18 06:58:38 failed trying to get namespace (capz-e2e-s551fa):namespaces "capz-e2e-s551fa" not found
INFO: Creating namespace capz-e2e-s551fa
INFO: Creating event watcher for namespace "capz-e2e-s551fa"
Nov 18 06:58:38.322: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-s551fa-vmss
INFO: Creating the workload cluster with name "capz-e2e-s551fa-vmss" using the "machine-pool" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 96 lines ...
STEP: waiting for job default/curl-to-elb-joblenp0bwt85f to be complete
Nov 18 07:13:27.185: INFO: waiting for job default/curl-to-elb-joblenp0bwt85f to be complete
Nov 18 07:13:37.216: INFO: job default/curl-to-elb-joblenp0bwt85f is complete, took 10.030440005s
STEP: connecting directly to the external LB service
Nov 18 07:13:37.216: INFO: starting attempts to connect directly to the external LB service
2021/11/18 07:13:37 [DEBUG] GET http://52.159.73.143
2021/11/18 07:14:07 [ERR] GET http://52.159.73.143 request failed: Get "http://52.159.73.143": dial tcp 52.159.73.143:80: i/o timeout
2021/11/18 07:14:07 [DEBUG] GET http://52.159.73.143: retrying in 1s (4 left)
Nov 18 07:14:15.418: INFO: successfully connected to the external LB service
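The `[DEBUG] GET ... retrying in 1s (4 left)` lines match hashicorp/go-retryablehttp's log format (an inference from the output alone): the first GET hits a 30s dial timeout while the load balancer is still provisioning, and the client retries until the connection succeeds. A minimal sketch:

```go
package e2e

import "github.com/hashicorp/go-retryablehttp"

// getWithRetries issues a GET that survives transient failures such as
// the "dial tcp ... i/o timeout" above, retrying with backoff until
// the retry budget is exhausted.
func getWithRetries(url string) error {
	client := retryablehttp.NewClient()
	client.RetryMax = 5 // "(4 left)" is logged after the first retry is scheduled
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	return resp.Body.Close()
}
```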
STEP: deleting the test resources
Nov 18 07:14:15.418: INFO: starting to delete external LB service web-windowsydc907-elb
Nov 18 07:14:15.483: INFO: starting to delete deployment web-windowsydc907
Nov 18 07:14:15.498: INFO: starting to delete job curl-to-elb-joblenp0bwt85f
... skipping 41 lines ...
Nov 18 07:18:48.612: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set win-p-win

Nov 18 07:18:48.869: INFO: INFO: Collecting logs for node win-p-win000001 in cluster capz-e2e-s551fa-vmss in namespace capz-e2e-s551fa

Nov 18 07:19:21.141: INFO: INFO: Collecting boot logs for VMSS instance 1 of scale set win-p-win

Failed to get logs for machine pool capz-e2e-s551fa-vmss-mp-win, cluster capz-e2e-s551fa/capz-e2e-s551fa-vmss: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster capz-e2e-s551fa/capz-e2e-s551fa-vmss kube-system pod logs
STEP: Fetching kube-system pod logs took 207.489147ms
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-rn46m, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-windows-49jtj, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-proxy-sssgw, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-qgjcz, container kube-proxy
... skipping 10 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-windows-gkqpj, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-s551fa-vmss-control-plane-9xzzt, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-node-vrzhn, container calico-node
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-s551fa-vmss-control-plane-9xzzt, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-fnvqg, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-jq4hg, container calico-node
STEP: Got error while iterating over activity logs for resource group capz-e2e-s551fa-vmss: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001088847s
STEP: Dumping all the Cluster API resources in the "capz-e2e-s551fa" namespace
STEP: Deleting all clusters in the capz-e2e-s551fa namespace
STEP: Deleting cluster capz-e2e-s551fa-vmss
INFO: Waiting for the Cluster capz-e2e-s551fa/capz-e2e-s551fa-vmss to be deleted
STEP: Waiting for cluster capz-e2e-s551fa-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-jq4hg, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-fbpmc, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-qgjcz, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-gkqpj, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-49jtj, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-fnvqg, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-nbtfd, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-sssgw, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-49jtj, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-gkqpj, container calico-node-felix: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-s551fa
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes" ran for 28m44s on Ginkgo node 2 of 3

... skipping 10 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:144

INFO: "Creates a public management cluster in the same vnet" started at Thu, 18 Nov 2021 06:41:23 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-j56bv4" for hosting the cluster
Nov 18 06:41:23.306: INFO: starting to create namespace for hosting the "capz-e2e-j56bv4" test spec
2021/11/18 06:41:23 failed trying to get namespace (capz-e2e-j56bv4):namespaces "capz-e2e-j56bv4" not found
INFO: Creating namespace capz-e2e-j56bv4
INFO: Creating event watcher for namespace "capz-e2e-j56bv4"
Nov 18 06:41:23.341: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-j56bv4-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-nsb7l, container coredns
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-j56bv4-public-custom-vnet-control-plane-h2xhw, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-jg4cj, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-dglls, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-tjw2g, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-j56bv4-public-custom-vnet-control-plane-h2xhw, container kube-scheduler
STEP: Got error while iterating over activity logs for resource group capz-e2e-j56bv4-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001134679s
STEP: Dumping all the Cluster API resources in the "capz-e2e-j56bv4" namespace
STEP: Deleting all clusters in the capz-e2e-j56bv4 namespace
STEP: Deleting cluster capz-e2e-j56bv4-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-j56bv4/capz-e2e-j56bv4-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-j56bv4-public-custom-vnet to be deleted
W1118 07:30:10.155075   24439 reflector.go:441] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I1118 07:30:41.325428   24439 trace.go:205] Trace[357037563]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (18-Nov-2021 07:30:11.324) (total time: 30001ms):
Trace[357037563]: [30.001297659s] [30.001297659s] END
E1118 07:30:41.325486   24439 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j56bv4/events?resourceVersion=11236": dial tcp 65.52.197.216:6443: i/o timeout
I1118 07:31:14.432786   24439 trace.go:205] Trace[1989600281]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (18-Nov-2021 07:30:44.431) (total time: 30001ms):
Trace[1989600281]: [30.0014737s] [30.0014737s] END
E1118 07:31:14.432839   24439 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j56bv4/events?resourceVersion=11236": dial tcp 65.52.197.216:6443: i/o timeout
I1118 07:31:50.756615   24439 trace.go:205] Trace[725086312]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (18-Nov-2021 07:31:20.755) (total time: 30001ms):
Trace[725086312]: [30.001017498s] [30.001017498s] END
E1118 07:31:50.756754   24439 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j56bv4/events?resourceVersion=11236": dial tcp 65.52.197.216:6443: i/o timeout
I1118 07:32:32.778116   24439 trace.go:205] Trace[1345213389]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (18-Nov-2021 07:32:02.776) (total time: 30001ms):
Trace[1345213389]: [30.001695578s] [30.001695578s] END
E1118 07:32:32.778175   24439 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j56bv4/events?resourceVersion=11236": dial tcp 65.52.197.216:6443: i/o timeout
I1118 07:33:19.803262   24439 trace.go:205] Trace[559014640]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (18-Nov-2021 07:32:49.801) (total time: 30001ms):
Trace[559014640]: [30.001629897s] [30.001629897s] END
E1118 07:33:19.803334   24439 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j56bv4/events?resourceVersion=11236": dial tcp 65.52.197.216:6443: i/o timeout
E1118 07:33:51.877529   24439 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j56bv4/events?resourceVersion=11236": dial tcp: lookup capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-j56bv4
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Nov 18 07:34:06.914: INFO: deleting an existing virtual network "custom-vnet"
Nov 18 07:34:17.374: INFO: deleting an existing route table "node-routetable"
E1118 07:34:24.410224   24439 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j56bv4/events?resourceVersion=11236": dial tcp: lookup capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 18 07:34:27.656: INFO: deleting an existing network security group "node-nsg"
Nov 18 07:34:37.998: INFO: deleting an existing network security group "control-plane-nsg"
Nov 18 07:34:48.309: INFO: verifying the existing resource group "capz-e2e-j56bv4-public-custom-vnet" is empty
Nov 18 07:34:49.378: INFO: deleting the existing resource group "capz-e2e-j56bv4-public-custom-vnet"
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1118 07:35:12.142480   24439 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j56bv4/events?resourceVersion=11236": dial tcp: lookup capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 54m29s on Ginkgo node 1 of 3


• [SLOW TEST:3269.100 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:413

INFO: "with a single control plane node and 1 node" started at Thu, 18 Nov 2021 07:23:08 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-mwrl30" for hosting the cluster
Nov 18 07:23:08.419: INFO: starting to create namespace for hosting the "capz-e2e-mwrl30" test spec
2021/11/18 07:23:08 failed trying to get namespace (capz-e2e-mwrl30):namespaces "capz-e2e-mwrl30" not found
INFO: Creating namespace capz-e2e-mwrl30
INFO: Creating event watcher for namespace "capz-e2e-mwrl30"
Nov 18 07:23:08.458: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-mwrl30-gpu
INFO: Creating the workload cluster with name "capz-e2e-mwrl30-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 94 lines ...
  with a 1 control plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:455

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Thu, 18 Nov 2021 07:27:22 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-shjts6" for hosting the cluster
Nov 18 07:27:22.113: INFO: starting to create namespace for hosting the "capz-e2e-shjts6" test spec
2021/11/18 07:27:22 failed trying to get namespace (capz-e2e-shjts6):namespaces "capz-e2e-shjts6" not found
INFO: Creating namespace capz-e2e-shjts6
INFO: Creating event watcher for namespace "capz-e2e-shjts6"
Nov 18 07:27:22.149: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-shjts6-oot
INFO: Creating the workload cluster with name "capz-e2e-shjts6-oot" using the "external-cloud-provider" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 53 lines ...
STEP: waiting for job default/curl-to-elb-job04ban99fbjc to be complete
Nov 18 07:35:44.768: INFO: waiting for job default/curl-to-elb-job04ban99fbjc to be complete
Nov 18 07:35:54.821: INFO: job default/curl-to-elb-job04ban99fbjc is complete, took 10.053402357s
STEP: connecting directly to the external LB service
Nov 18 07:35:54.821: INFO: starting attempts to connect directly to the external LB service
2021/11/18 07:35:54 [DEBUG] GET http://20.80.20.20
2021/11/18 07:36:24 [ERR] GET http://20.80.20.20 request failed: Get "http://20.80.20.20": dial tcp 20.80.20.20:80: i/o timeout
2021/11/18 07:36:24 [DEBUG] GET http://20.80.20.20: retrying in 1s (4 left)
Nov 18 07:36:25.847: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 18 07:36:25.847: INFO: starting to delete external LB service web5oumot-elb
Nov 18 07:36:25.892: INFO: starting to delete deployment web5oumot
Nov 18 07:36:25.907: INFO: starting to delete job curl-to-elb-job04ban99fbjc
... skipping 34 lines ...
STEP: Fetching activity logs took 659.919741ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-shjts6" namespace
STEP: Deleting all clusters in the capz-e2e-shjts6 namespace
STEP: Deleting cluster capz-e2e-shjts6-oot
INFO: Waiting for the Cluster capz-e2e-shjts6/capz-e2e-shjts6-oot to be deleted
STEP: Waiting for cluster capz-e2e-shjts6-oot to be deleted
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-27t26, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-kd7t2, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-4gcxt, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-shjts6
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 19m56s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:490

INFO: "with a single control plane node and 1 node" started at Thu, 18 Nov 2021 07:35:52 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-iagcow" for hosting the cluster
Nov 18 07:35:52.410: INFO: starting to create namespace for hosting the "capz-e2e-iagcow" test spec
2021/11/18 07:35:52 failed trying to get namespace (capz-e2e-iagcow):namespaces "capz-e2e-iagcow" not found
INFO: Creating namespace capz-e2e-iagcow
INFO: Creating event watcher for namespace "capz-e2e-iagcow"
Nov 18 07:35:52.454: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-iagcow-aks
INFO: Creating the workload cluster with name "capz-e2e-iagcow-aks" using the "aks-multi-tenancy" template (Kubernetes v1.19.13, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 7 lines ...
machinepool.cluster.x-k8s.io/agentpool1 created
azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1118 07:36:11.312835   24439 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j56bv4/events?resourceVersion=11236": dial tcp: lookup capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1118 07:36:51.589446   24439 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j56bv4/events?resourceVersion=11236": dial tcp: lookup capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1118 07:37:42.323354   24439 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j56bv4/events?resourceVersion=11236": dial tcp: lookup capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1118 07:38:36.393511   24439 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j56bv4/events?resourceVersion=11236": dial tcp: lookup capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1118 07:39:15.008226   24439 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j56bv4/events?resourceVersion=11236": dial tcp: lookup capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
Nov 18 07:39:53.694: INFO: Waiting for the first control plane machine managed by capz-e2e-iagcow/capz-e2e-iagcow-aks to be provisioned
STEP: Waiting for atleast one control plane node to exist
INFO: Waiting for control plane to be ready
Nov 18 07:40:03.730: INFO: Waiting for the first control plane machine managed by capz-e2e-iagcow/capz-e2e-iagcow-aks to be provisioned
STEP: Waiting for all control plane nodes to exist
... skipping 13 lines ...
STEP: time sync OK for host aks-agentpool1-34142132-vmss000000
STEP: time sync OK for host aks-agentpool1-34142132-vmss000000
STEP: Dumping logs from the "capz-e2e-iagcow-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-iagcow/capz-e2e-iagcow-aks logs
Nov 18 07:40:09.950: INFO: INFO: Collecting logs for node aks-agentpool1-34142132-vmss000000 in cluster capz-e2e-iagcow-aks in namespace capz-e2e-iagcow

E1118 07:40:13.078021   24439 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j56bv4/events?resourceVersion=11236": dial tcp: lookup capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1118 07:40:46.748277   24439 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j56bv4/events?resourceVersion=11236": dial tcp: lookup capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1118 07:41:24.087789   24439 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j56bv4/events?resourceVersion=11236": dial tcp: lookup capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1118 07:42:03.342730   24439 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j56bv4/events?resourceVersion=11236": dial tcp: lookup capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 18 07:42:20.382: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool0, cluster capz-e2e-iagcow/capz-e2e-iagcow-aks: [dialing public load balancer at capz-e2e-iagcow-aks-3d5bf646.hcp.northcentralus.azmk8s.io: dial tcp 52.162.1.129:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
Nov 18 07:42:20.823: INFO: INFO: Collecting logs for node aks-agentpool1-34142132-vmss000000 in cluster capz-e2e-iagcow-aks in namespace capz-e2e-iagcow

E1118 07:42:56.105186   24439 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j56bv4/events?resourceVersion=11236": dial tcp: lookup capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1118 07:43:39.066838   24439 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j56bv4/events?resourceVersion=11236": dial tcp: lookup capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1118 07:44:18.874641   24439 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j56bv4/events?resourceVersion=11236": dial tcp: lookup capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 18 07:44:31.454: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool1, cluster capz-e2e-iagcow/capz-e2e-iagcow-aks: [dialing public load balancer at capz-e2e-iagcow-aks-3d5bf646.hcp.northcentralus.azmk8s.io: dial tcp 52.162.1.129:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
STEP: Dumping workload cluster capz-e2e-iagcow/capz-e2e-iagcow-aks kube-system pod logs
STEP: Fetching kube-system pod logs took 336.159615ms
STEP: Dumping workload cluster capz-e2e-iagcow/capz-e2e-iagcow-aks Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-7mwtt, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-84d976c568-vxrlp, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-x857g, container kube-proxy
... skipping 8 lines ...
STEP: Fetching activity logs took 441.5934ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-iagcow" namespace
STEP: Deleting all clusters in the capz-e2e-iagcow namespace
STEP: Deleting cluster capz-e2e-iagcow-aks
INFO: Waiting for the Cluster capz-e2e-iagcow/capz-e2e-iagcow-aks to be deleted
STEP: Waiting for cluster capz-e2e-iagcow-aks to be deleted
E1118 07:45:08.049048   24439 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j56bv4/events?resourceVersion=11236": dial tcp: lookup capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1118 07:45:54.361582   24439 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j56bv4/events?resourceVersion=11236": dial tcp: lookup capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1118 07:46:49.957940   24439 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j56bv4/events?resourceVersion=11236": dial tcp: lookup capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1118 07:47:25.987112   24439 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j56bv4/events?resourceVersion=11236": dial tcp: lookup capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1118 07:48:06.149801   24439 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j56bv4/events?resourceVersion=11236": dial tcp: lookup capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1118 07:48:47.573782   24439 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j56bv4/events?resourceVersion=11236": dial tcp: lookup capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-iagcow
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1118 07:49:38.128914   24439 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j56bv4/events?resourceVersion=11236": dial tcp: lookup capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1118 07:50:21.574525   24439 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j56bv4/events?resourceVersion=11236": dial tcp: lookup capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "with a single control plane node and 1 node" ran for 14m55s on Ginkgo node 1 of 3


• [SLOW TEST:895.028 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:579

INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Thu, 18 Nov 2021 07:47:17 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-oauia9" for hosting the cluster
Nov 18 07:47:17.657: INFO: starting to create namespace for hosting the "capz-e2e-oauia9" test spec
2021/11/18 07:47:17 failed trying to get namespace (capz-e2e-oauia9):namespaces "capz-e2e-oauia9" not found
INFO: Creating namespace capz-e2e-oauia9
INFO: Creating event watcher for namespace "capz-e2e-oauia9"
Nov 18 07:47:17.707: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-oauia9-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-oauia9-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 129 lines ...
STEP: Fetching activity logs took 1.206315677s
STEP: Dumping all the Cluster API resources in the "capz-e2e-oauia9" namespace
STEP: Deleting all clusters in the capz-e2e-oauia9 namespace
STEP: Deleting cluster capz-e2e-oauia9-win-vmss
INFO: Waiting for the Cluster capz-e2e-oauia9/capz-e2e-oauia9-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-oauia9-win-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-k9cfd, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-4gr2q, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-oauia9-win-vmss-control-plane-qvp2h, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-qxcqf, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-oauia9-win-vmss-control-plane-qvp2h, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-oauia9-win-vmss-control-plane-qvp2h, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-6f64n, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-oauia9-win-vmss-control-plane-qvp2h, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-oauia9
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 26m10s on Ginkgo node 2 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:532

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Thu, 18 Nov 2021 07:44:32 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-z7syul" for hosting the cluster
Nov 18 07:44:32.852: INFO: starting to create namespace for hosting the "capz-e2e-z7syul" test spec
2021/11/18 07:44:32 failed trying to get namespace (capz-e2e-z7syul):namespaces "capz-e2e-z7syul" not found
INFO: Creating namespace capz-e2e-z7syul
INFO: Creating event watcher for namespace "capz-e2e-z7syul"
Nov 18 07:44:32.898: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-z7syul-win-ha
INFO: Creating the workload cluster with name "capz-e2e-z7syul-win-ha" using the "windows" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 55 lines ...
STEP: waiting for job default/curl-to-elb-jobznl5buw0mjm to be complete
Nov 18 08:00:57.195: INFO: waiting for job default/curl-to-elb-jobznl5buw0mjm to be complete
Nov 18 08:01:07.231: INFO: job default/curl-to-elb-jobznl5buw0mjm is complete, took 10.035943859s
STEP: connecting directly to the external LB service
Nov 18 08:01:07.231: INFO: starting attempts to connect directly to the external LB service
2021/11/18 08:01:07 [DEBUG] GET http://52.159.83.66
2021/11/18 08:01:37 [ERR] GET http://52.159.83.66 request failed: Get "http://52.159.83.66": dial tcp 52.159.83.66:80: i/o timeout
2021/11/18 08:01:37 [DEBUG] GET http://52.159.83.66: retrying in 1s (4 left)
Nov 18 08:01:45.461: INFO: successfully connected to the external LB service
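Note: the [DEBUG]/[ERR] "retrying in 1s (4 left)" lines above are the default log format of hashicorp/go-retryablehttp, which retries transient failures such as the dial timeout seen before the ELB started answering. A minimal sketch of that probe pattern, assuming go-retryablehttp (the URL and retry budget below are read off the log, not the suite's exact code):

    package main

    import (
    	"fmt"
    	"time"

    	retryablehttp "github.com/hashicorp/go-retryablehttp"
    )

    func main() {
    	client := retryablehttp.NewClient()
    	client.RetryMax = 5                          // "(4 left)" after the first failure suggests a budget of ~5
    	client.RetryWaitMin = 1 * time.Second        // matches "retrying in 1s"
    	client.HTTPClient.Timeout = 30 * time.Second // each attempt gives up after ~30s, as above

    	resp, err := client.Get("http://52.159.83.66") // the external LB IP from the log
    	if err != nil {
    		fmt.Println("all retries failed:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("successfully connected to the external LB service:", resp.Status)
    }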
STEP: deleting the test resources
Nov 18 08:01:45.461: INFO: starting to delete external LB service webmog6au-elb
Nov 18 08:01:45.540: INFO: starting to delete deployment webmog6au
Nov 18 08:01:45.560: INFO: starting to delete job curl-to-elb-jobznl5buw0mjm
... skipping 25 lines ...
STEP: waiting for job default/curl-to-elb-jobpbqbh8c32cp to be complete
Nov 18 08:04:36.379: INFO: waiting for job default/curl-to-elb-jobpbqbh8c32cp to be complete
Nov 18 08:04:46.415: INFO: job default/curl-to-elb-jobpbqbh8c32cp is complete, took 10.03615324s
STEP: connecting directly to the external LB service
Nov 18 08:04:46.415: INFO: starting attempts to connect directly to the external LB service
2021/11/18 08:04:46 [DEBUG] GET http://52.159.111.131
2021/11/18 08:05:16 [ERR] GET http://52.159.111.131 request failed: Get "http://52.159.111.131": dial tcp 52.159.111.131:80: i/o timeout
2021/11/18 08:05:16 [DEBUG] GET http://52.159.111.131: retrying in 1s (4 left)
Nov 18 08:05:17.460: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 18 08:05:17.461: INFO: starting to delete external LB service web-windows55xskf-elb
Nov 18 08:05:17.529: INFO: starting to delete deployment web-windows55xskf
Nov 18 08:05:17.550: INFO: starting to delete job curl-to-elb-jobpbqbh8c32cp
... skipping 49 lines ...
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-z7syul-win-ha-control-plane-b6562, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-z7syul-win-ha-control-plane-9tsxv, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-zsptz, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-z7syul-win-ha-control-plane-wctgp, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-windows-amd64-zxd72, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-z7syul-win-ha-control-plane-wctgp, container kube-scheduler
STEP: Got error while iterating over activity logs for resource group capz-e2e-z7syul-win-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.00109154s
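Note: the insights.ActivityLogsClient failure above is the activity-log pager hitting its context deadline mid-pagination. A rough sketch of that fetch, assuming the track-1 Azure SDK (github.com/Azure/azure-sdk-for-go); the subscription ID, time window, and 30s budget are illustrative:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"github.com/Azure/azure-sdk-for-go/services/preview/monitor/mgmt/2019-06-01/insights"
    	"github.com/Azure/go-autorest/autorest/azure/auth"
    )

    func main() {
    	authorizer, err := auth.NewAuthorizerFromEnvironment()
    	if err != nil {
    		panic(err)
    	}
    	client := insights.NewActivityLogsClient("<subscription-id>") // assumption: real sub ID comes from env
    	client.Authorizer = authorizer

    	// Bound the whole fetch; a slow page then surfaces as "context deadline exceeded".
    	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    	defer cancel()

    	filter := "eventTimestamp ge '2021-11-18T06:34:00Z' and resourceGroupName eq 'capz-e2e-z7syul-win-ha'"
    	page, err := client.List(ctx, filter, "")
    	if err != nil {
    		panic(err)
    	}
    	for page.NotDone() {
    		for _, ev := range page.Values() {
    			if ev.OperationName != nil && ev.OperationName.LocalizedValue != nil {
    				fmt.Println(*ev.OperationName.LocalizedValue)
    			}
    		}
    		if err := page.NextWithContext(ctx); err != nil {
    			fmt.Println("paging stopped:", err) // e.g. context deadline exceeded
    			break
    		}
    	}
    }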
STEP: Dumping all the Cluster API resources in the "capz-e2e-z7syul" namespace
STEP: Deleting all clusters in the capz-e2e-z7syul namespace
STEP: Deleting cluster capz-e2e-z7syul-win-ha
INFO: Waiting for the Cluster capz-e2e-z7syul/capz-e2e-z7syul-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-z7syul-win-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-zxd72, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-zsptz, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-r4jkb, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-z7syul-win-ha-control-plane-wctgp, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-z7syul-win-ha-control-plane-wctgp, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-gkfwn, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-z7syul-win-ha-control-plane-9tsxv, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-z7syul-win-ha-control-plane-wctgp, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-z7syul-win-ha-control-plane-9tsxv, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-z7syul-win-ha-control-plane-9tsxv, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-lcwss, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-8df8m, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vfsf4, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-hnvqx, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-z7syul-win-ha-control-plane-9tsxv, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-gmhw6, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-9zgxk, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-tbjnz, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-kmhck, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-z7syul-win-ha-control-plane-wctgp, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-z7syul
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 39m29s on Ginkgo node 3 of 3

... skipping 3 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating a Windows Enabled cluster with dockershim
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:530
    With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:532
------------------------------
E1118 07:50:53.305884   24439 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j56bv4/events?resourceVersion=11236": dial tcp: lookup capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 45 lines: the same reflector watch error for capz-e2e-j56bv4 repeated every 30-60s through 08:23 ...
E1118 08:23:58.134019   24439 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j56bv4/events?resourceVersion=11236": dial tcp: lookup capz-e2e-j56bv4-public-custom-vnet-3e5036c1.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
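Note: the repeated reflector errors above come from the event watcher created for the capz-e2e-j56bv4 spec: once that cluster's public DNS name stops resolving, every list/watch attempt fails with "no such host" and client-go's reflector retries with backoff until the watcher is stopped. A minimal sketch of such a namespace-scoped event watcher (kubeconfig path and resync period are assumptions, not the suite's exact wiring):

    package main

    import (
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/fields"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/cache"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumption
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	// List/watch v1.Event in one test namespace, as the per-spec watcher does.
    	lw := cache.NewListWatchFromClient(cs.CoreV1().RESTClient(), "events",
    		"capz-e2e-j56bv4", fields.Everything())

    	_, controller := cache.NewInformer(lw, &corev1.Event{}, 30*time.Second,
    		cache.ResourceEventHandlerFuncs{})

    	stop := make(chan struct{})
    	defer close(stop)
    	controller.Run(stop) // the reflector inside relists forever, logging errors like those above
    }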
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Workload cluster creation Creating a GPU-enabled cluster [It] with a single control plane node and 1 node 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.1/framework/machinedeployment_helpers.go:121
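Note: machinedeployment_helpers.go:121 is a Gomega Eventually that polls the workload cluster's MachineDeployment until its ready replicas reach the expected count (here the single GPU worker) and fails the spec when the configured timeout elapses first. A hedged sketch of that wait (function name, intervals, and client wiring are assumptions, not the framework's exact code):

    package e2e

    import (
    	"context"
    	"time"

    	. "github.com/onsi/gomega"
    	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
    	"sigs.k8s.io/controller-runtime/pkg/client"
    )

    // waitForMachineDeploymentReady keeps re-reading the MachineDeployment and
    // asserts ReadyReplicas eventually equals want; a node that never joins
    // leaves the count at 0 and the Eventually times out, as in this run.
    func waitForMachineDeploymentReady(ctx context.Context, c client.Client, md *clusterv1.MachineDeployment, want int32) {
    	Eventually(func() int32 {
    		key := client.ObjectKey{Namespace: md.Namespace, Name: md.Name}
    		if err := c.Get(ctx, key, md); err != nil {
    			return -1 // count unknown this poll; keep trying
    		}
    		return md.Status.ReadyReplicas
    	}, 15*time.Minute, 10*time.Second).Should(Equal(want))
    }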

Ran 9 of 24 Specs in 6279.245 seconds
FAIL! -- 8 Passed | 1 Failed | 0 Pending | 15 Skipped


Ginkgo ran 1 suite in 1h46m2.717158348s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc
make[1]: *** [Makefile:176: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:184: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...