Result: FAILURE
Tests: 1 failed / 8 succeeded
Started: 2021-11-14 18:33
Elapsed: 1h53m
Revision: main

Test Failures


capz-e2e Workload cluster creation Creating a Windows Enabled cluster with dockershim With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node 33m32s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\sWindows\sEnabled\scluster\swith\sdockershim\sWith\s3\scontrol\-plane\snodes\sand\s1\sLinux\sworker\snode\sand\s1\sWindows\sworker\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:532
Timed out after 1200.001s.
Expected
    <int>: 0
to equal
    <int>: 1
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/framework/machinedeployment_helpers.go:121
				
Full stdout/stderr in junit.e2e_suite.1.xml



8 Passed Tests

14 Skipped Tests

Error lines from build-log.txt

... skipping 426 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:288

INFO: "With ipv6 worker node" started at Sun, 14 Nov 2021 18:40:03 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-jw9t49" for hosting the cluster
Nov 14 18:40:03.816: INFO: starting to create namespace for hosting the "capz-e2e-jw9t49" test spec
2021/11/14 18:40:03 failed trying to get namespace (capz-e2e-jw9t49):namespaces "capz-e2e-jw9t49" not found
INFO: Creating namespace capz-e2e-jw9t49
INFO: Creating event watcher for namespace "capz-e2e-jw9t49"
Nov 14 18:40:03.895: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-jw9t49-ipv6
INFO: Creating the workload cluster with name "capz-e2e-jw9t49-ipv6" using the "ipv6" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
STEP: Fetching activity logs took 556.036979ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-jw9t49" namespace
STEP: Deleting all clusters in the capz-e2e-jw9t49 namespace
STEP: Deleting cluster capz-e2e-jw9t49-ipv6
INFO: Waiting for the Cluster capz-e2e-jw9t49/capz-e2e-jw9t49-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-jw9t49-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-bnq84, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-jw9t49-ipv6-control-plane-rqxwv, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-6h8tj, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-p4sl5, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-jw9t49-ipv6-control-plane-qxzgz, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-msbfk, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-jw9t49-ipv6-control-plane-qxzgz, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-jw9t49-ipv6-control-plane-h59hc, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-2nc99, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-jw9t49-ipv6-control-plane-qxzgz, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-glcq9, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-pdc47, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-jw9t49-ipv6-control-plane-rqxwv, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-jw9t49-ipv6-control-plane-rqxwv, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-jw9t49-ipv6-control-plane-h59hc, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-2jztf, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-kn628, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-jw9t49-ipv6-control-plane-h59hc, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-jw9t49-ipv6-control-plane-qxzgz, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-l5dtc, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-jw9t49-ipv6-control-plane-rqxwv, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-jw9t49-ipv6-control-plane-h59hc, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-wfmxs, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-jw9t49
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 17m33s on Ginkgo node 2 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:205

INFO: "With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes" started at Sun, 14 Nov 2021 18:40:03 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-xk02tr" for hosting the cluster
Nov 14 18:40:03.813: INFO: starting to create namespace for hosting the "capz-e2e-xk02tr" test spec
2021/11/14 18:40:03 failed trying to get namespace (capz-e2e-xk02tr):namespaces "capz-e2e-xk02tr" not found
INFO: Creating namespace capz-e2e-xk02tr
INFO: Creating event watcher for namespace "capz-e2e-xk02tr"
Nov 14 18:40:03.890: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-xk02tr-ha
INFO: Creating the workload cluster with name "capz-e2e-xk02tr-ha" using the "(default)" template (Kubernetes v1.22.1, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 75 lines ...
Nov 14 18:51:59.907: INFO: starting to delete external LB service webt3odx1-elb
Nov 14 18:52:00.002: INFO: starting to delete deployment webt3odx1
Nov 14 18:52:00.043: INFO: starting to delete job curl-to-elb-jobx48z0jjog2t
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Nov 14 18:52:00.190: INFO: starting to create dev deployment namespace
2021/11/14 18:52:00 failed trying to get namespace (development):namespaces "development" not found
2021/11/14 18:52:00 namespace development does not exist, creating...
STEP: Creating production namespace
Nov 14 18:52:00.269: INFO: starting to create prod deployment namespace
2021/11/14 18:52:00 failed trying to get namespace (production):namespaces "production" not found
2021/11/14 18:52:00 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
Nov 14 18:52:00.358: INFO: starting to create frontend-prod deployments
Nov 14 18:52:00.398: INFO: starting to create frontend-dev deployments
Nov 14 18:52:00.449: INFO: starting to create backend deployments
Nov 14 18:52:00.515: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Nov 14 18:52:23.407: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.128.132 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 14 18:54:34.528: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
STEP: Applying a network policy to deny egress access in development namespace
Nov 14 18:54:34.700: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.128.132 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.128.132 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 14 18:58:56.672: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Nov 14 18:58:56.844: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.128.131 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 14 19:01:07.741: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
Nov 14 19:01:07.921: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.128.130 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.128.131 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 14 19:05:29.887: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Nov 14 19:05:30.056: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.128.132 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 14 19:07:40.959: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
Nov 14 19:07:41.170: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.128.132 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web-windowshv4ji7 to be available
Nov 14 19:09:52.675: INFO: starting to wait for deployment to become available
Nov 14 19:10:42.918: INFO: Deployment default/web-windowshv4ji7 is now available, took 50.243147144s
... skipping 20 lines ...
STEP: waiting for job default/curl-to-elb-job1mmge5fbfks to be complete
Nov 14 19:11:33.666: INFO: waiting for job default/curl-to-elb-job1mmge5fbfks to be complete
Nov 14 19:11:43.739: INFO: job default/curl-to-elb-job1mmge5fbfks is complete, took 10.073042161s
STEP: connecting directly to the external LB service
Nov 14 19:11:43.739: INFO: starting attempts to connect directly to the external LB service
2021/11/14 19:11:43 [DEBUG] GET http://20.72.90.224
2021/11/14 19:12:13 [ERR] GET http://20.72.90.224 request failed: Get "http://20.72.90.224": dial tcp 20.72.90.224:80: i/o timeout
2021/11/14 19:12:13 [DEBUG] GET http://20.72.90.224: retrying in 1s (4 left)
Nov 14 19:12:17.832: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 14 19:12:17.832: INFO: starting to delete external LB service web-windowshv4ji7-elb
Nov 14 19:12:17.925: INFO: starting to delete deployment web-windowshv4ji7
Nov 14 19:12:17.966: INFO: starting to delete job curl-to-elb-job1mmge5fbfks
... skipping 20 lines ...
Nov 14 19:13:10.475: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-xk02tr-ha-md-0-94nhs

Nov 14 19:13:10.762: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster capz-e2e-xk02tr-ha in namespace capz-e2e-xk02tr

Nov 14 19:13:34.682: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-xk02tr-ha-md-win-2gqjw

Failed to get logs for machine capz-e2e-xk02tr-ha-md-win-779989f7d9-2cgzc, cluster capz-e2e-xk02tr/capz-e2e-xk02tr-ha: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov 14 19:13:34.956: INFO: INFO: Collecting logs for node 10.1.0.7 in cluster capz-e2e-xk02tr-ha in namespace capz-e2e-xk02tr

Nov 14 19:14:02.089: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-xk02tr-ha-md-win-kw97x

Failed to get logs for machine capz-e2e-xk02tr-ha-md-win-779989f7d9-dkhvl, cluster capz-e2e-xk02tr/capz-e2e-xk02tr-ha: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster capz-e2e-xk02tr/capz-e2e-xk02tr-ha kube-system pod logs
STEP: Creating log watcher for controller kube-system/calico-node-sgm5q, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-4t72x, container calico-node
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-xk02tr-ha-control-plane-nm6vx, container etcd
STEP: Creating log watcher for controller kube-system/calico-node-windows-qxv9t, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-node-d49nj, container calico-node
... skipping 22 lines ...
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-xk02tr-ha-control-plane-wgwss, container kube-controller-manager
STEP: Dumping workload cluster capz-e2e-xk02tr/capz-e2e-xk02tr-ha Azure activity log
STEP: Creating log watcher for controller kube-system/kube-proxy-zzq8z, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-hsvfn, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-pj8qb, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-8zghz, container kube-proxy
STEP: Got error while iterating over activity logs for resource group capz-e2e-xk02tr-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001192448s
STEP: Dumping all the Cluster API resources in the "capz-e2e-xk02tr" namespace
STEP: Deleting all clusters in the capz-e2e-xk02tr namespace
STEP: Deleting cluster capz-e2e-xk02tr-ha
INFO: Waiting for the Cluster capz-e2e-xk02tr/capz-e2e-xk02tr-ha to be deleted
STEP: Waiting for cluster capz-e2e-xk02tr-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-qxv9t, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-xk02tr-ha-control-plane-nm6vx, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-h49x9, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-8zghz, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-9zx7v, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-hsvfn, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-xk02tr-ha-control-plane-nm6vx, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-xk02tr-ha-control-plane-nm6vx, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-8fn64, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-qxv9t, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-4t72x, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-97ltn, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-97ltn, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-n9kgk, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-cp7ts, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-qtg9z, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-xk02tr-ha-control-plane-nm6vx, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-xk02tr
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes" ran for 46m41s on Ginkgo node 3 of 3

... skipping 8 lines ...
  with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:334

INFO: "with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes" started at Sun, 14 Nov 2021 18:57:37 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-wegzl4" for hosting the cluster
Nov 14 18:57:37.282: INFO: starting to create namespace for hosting the "capz-e2e-wegzl4" test spec
2021/11/14 18:57:37 failed trying to get namespace (capz-e2e-wegzl4):namespaces "capz-e2e-wegzl4" not found
INFO: Creating namespace capz-e2e-wegzl4
INFO: Creating event watcher for namespace "capz-e2e-wegzl4"
Nov 14 18:57:37.317: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-wegzl4-vmss
INFO: Creating the workload cluster with name "capz-e2e-wegzl4-vmss" using the "machine-pool" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 140 lines ...
Nov 14 19:15:50.088: INFO: INFO: Collecting boot logs for VMSS instance 1 of scale set capz-e2e-wegzl4-vmss-mp-0

Nov 14 19:15:50.472: INFO: INFO: Collecting logs for node win-p-win000000 in cluster capz-e2e-wegzl4-vmss in namespace capz-e2e-wegzl4

Nov 14 19:16:07.174: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-e2e-wegzl4-vmss-mp-0

Failed to get logs for machine pool capz-e2e-wegzl4-vmss-mp-0, cluster capz-e2e-wegzl4/capz-e2e-wegzl4-vmss: [[running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1], [running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1]]
Nov 14 19:16:07.495: INFO: INFO: Collecting logs for node win-p-win000000 in cluster capz-e2e-wegzl4-vmss in namespace capz-e2e-wegzl4

Nov 14 19:16:40.876: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set win-p-win

Nov 14 19:16:41.249: INFO: INFO: Collecting logs for node win-p-win000001 in cluster capz-e2e-wegzl4-vmss in namespace capz-e2e-wegzl4

Nov 14 19:17:19.530: INFO: INFO: Collecting boot logs for VMSS instance 1 of scale set win-p-win

Failed to get logs for machine pool capz-e2e-wegzl4-vmss-mp-win, cluster capz-e2e-wegzl4/capz-e2e-wegzl4-vmss: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster capz-e2e-wegzl4/capz-e2e-wegzl4-vmss kube-system pod logs
STEP: Fetching kube-system pod logs took 403.943835ms
STEP: Creating log watcher for controller kube-system/calico-node-f2scz, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-4xqhn, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-rdwf8, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-bk7g2, container kube-proxy
... skipping 16 lines ...
STEP: Fetching activity logs took 1.126678974s
STEP: Dumping all the Cluster API resources in the "capz-e2e-wegzl4" namespace
STEP: Deleting all clusters in the capz-e2e-wegzl4 namespace
STEP: Deleting cluster capz-e2e-wegzl4-vmss
INFO: Waiting for the Cluster capz-e2e-wegzl4/capz-e2e-wegzl4-vmss to be deleted
STEP: Waiting for cluster capz-e2e-wegzl4-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-8887b, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-4csrv, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-dkmvl, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-bk7g2, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-2bmzf, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-dm7j4, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-hr8q7, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-rdwf8, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-wegzl4-vmss-control-plane-ck5f5, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-2bmzf, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-jbkcv, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-wegzl4-vmss-control-plane-ck5f5, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-wegzl4-vmss-control-plane-ck5f5, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-wegzl4-vmss-control-plane-ck5f5, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-dm7j4, container calico-node-startup: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-wegzl4
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes" ran for 30m8s on Ginkgo node 2 of 3

... skipping 10 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:144

INFO: "Creates a public management cluster in the same vnet" started at Sun, 14 Nov 2021 18:40:03 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-qg6x09" for hosting the cluster
Nov 14 18:40:03.808: INFO: starting to create namespace for hosting the "capz-e2e-qg6x09" test spec
2021/11/14 18:40:03 failed trying to get namespace (capz-e2e-qg6x09):namespaces "capz-e2e-qg6x09" not found
INFO: Creating namespace capz-e2e-qg6x09
INFO: Creating event watcher for namespace "capz-e2e-qg6x09"
Nov 14 18:40:03.858: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-qg6x09-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-qg6x09-public-custom-vnet-control-plane-6b5jc, container etcd
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-hrn6s, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-proxy-bwdqj, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-w6jgf, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-nd2cq, container calico-node
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-qg6x09-public-custom-vnet-control-plane-6b5jc, container kube-scheduler
STEP: Got error while iterating over activity logs for resource group capz-e2e-qg6x09-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001140494s
STEP: Dumping all the Cluster API resources in the "capz-e2e-qg6x09" namespace
STEP: Deleting all clusters in the capz-e2e-qg6x09 namespace
STEP: Deleting cluster capz-e2e-qg6x09-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-qg6x09/capz-e2e-qg6x09-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-qg6x09-public-custom-vnet to be deleted
W1114 19:24:20.665017   24426 reflector.go:441] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I1114 19:24:51.889326   24426 trace.go:205] Trace[185257188]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (14-Nov-2021 19:24:21.888) (total time: 30000ms):
Trace[185257188]: [30.000867129s] [30.000867129s] END
E1114 19:24:51.889393   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp 20.80.221.78:6443: i/o timeout
I1114 19:25:23.838456   24426 trace.go:205] Trace[708957430]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (14-Nov-2021 19:24:53.837) (total time: 30000ms):
Trace[708957430]: [30.000705674s] [30.000705674s] END
E1114 19:25:23.838518   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp 20.80.221.78:6443: i/o timeout
I1114 19:25:59.626065   24426 trace.go:205] Trace[1538053052]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (14-Nov-2021 19:25:29.624) (total time: 30001ms):
Trace[1538053052]: [30.001172481s] [30.001172481s] END
E1114 19:25:59.626131   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp 20.80.221.78:6443: i/o timeout
I1114 19:26:36.730239   24426 trace.go:205] Trace[329782856]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (14-Nov-2021 19:26:06.729) (total time: 30000ms):
Trace[329782856]: [30.000895914s] [30.000895914s] END
E1114 19:26:36.730307   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp 20.80.221.78:6443: i/o timeout
E1114 19:27:02.201994   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-qg6x09
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Nov 14 19:27:09.183: INFO: deleting an existing virtual network "custom-vnet"
Nov 14 19:27:19.725: INFO: deleting an existing route table "node-routetable"
Nov 14 19:27:30.151: INFO: deleting an existing network security group "node-nsg"
E1114 19:27:34.576786   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 14 19:27:40.573: INFO: deleting an existing network security group "control-plane-nsg"
Nov 14 19:27:50.977: INFO: verifying the existing resource group "capz-e2e-qg6x09-public-custom-vnet" is empty
Nov 14 19:27:51.018: INFO: deleting the existing resource group "capz-e2e-qg6x09-public-custom-vnet"
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1114 19:28:24.563821   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 48m43s on Ginkgo node 1 of 3


• [SLOW TEST:2923.179 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:490

INFO: "with a single control plane node and 1 node" started at Sun, 14 Nov 2021 19:28:46 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-jpwmqw" for hosting the cluster
Nov 14 19:28:46.989: INFO: starting to create namespace for hosting the "capz-e2e-jpwmqw" test spec
2021/11/14 19:28:46 failed trying to get namespace (capz-e2e-jpwmqw):namespaces "capz-e2e-jpwmqw" not found
INFO: Creating namespace capz-e2e-jpwmqw
INFO: Creating event watcher for namespace "capz-e2e-jpwmqw"
Nov 14 19:28:47.022: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-jpwmqw-aks
INFO: Creating the workload cluster with name "capz-e2e-jpwmqw-aks" using the "aks-multi-tenancy" template (Kubernetes v1.19.13, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 7 lines ...
machinepool.cluster.x-k8s.io/agentpool1 created
azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1114 19:28:55.444717   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 4 lines ...
E1114 19:32:45.829597   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
Nov 14 19:32:48.061: INFO: Waiting for the first control plane machine managed by capz-e2e-jpwmqw/capz-e2e-jpwmqw-aks to be provisioned
STEP: Waiting for at least one control plane node to exist
INFO: Waiting for control plane to be ready
Nov 14 19:32:58.094: INFO: Waiting for the first control plane machine managed by capz-e2e-jpwmqw/capz-e2e-jpwmqw-aks to be provisioned
STEP: Waiting for all control plane nodes to exist
... skipping 13 lines ...
STEP: time sync OK for host aks-agentpool1-96944531-vmss000000
STEP: time sync OK for host aks-agentpool1-96944531-vmss000000
STEP: Dumping logs from the "capz-e2e-jpwmqw-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-jpwmqw/capz-e2e-jpwmqw-aks logs
Nov 14 19:33:04.959: INFO: INFO: Collecting logs for node aks-agentpool1-96944531-vmss000000 in cluster capz-e2e-jpwmqw-aks in namespace capz-e2e-jpwmqw

E1114 19:33:36.770099   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 19:34:15.293389   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 19:35:00.237634   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 14 19:35:15.200: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool0, cluster capz-e2e-jpwmqw/capz-e2e-jpwmqw-aks: [dialing public load balancer at capz-e2e-jpwmqw-aks-c18d5ea6.hcp.eastus2.azmk8s.io: dial tcp 52.177.92.8:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
Nov 14 19:35:15.615: INFO: INFO: Collecting logs for node aks-agentpool1-96944531-vmss000000 in cluster capz-e2e-jpwmqw-aks in namespace capz-e2e-jpwmqw

E1114 19:35:57.104577   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 19:36:37.678128   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 19:37:23.522934   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 14 19:37:26.272: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool1, cluster capz-e2e-jpwmqw/capz-e2e-jpwmqw-aks: [dialing public load balancer at capz-e2e-jpwmqw-aks-c18d5ea6.hcp.eastus2.azmk8s.io: dial tcp 52.177.92.8:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
STEP: Dumping workload cluster capz-e2e-jpwmqw/capz-e2e-jpwmqw-aks kube-system pod logs
STEP: Fetching kube-system pod logs took 419.12165ms
STEP: Dumping workload cluster capz-e2e-jpwmqw/capz-e2e-jpwmqw-aks Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-vcdxc, container calico-node
STEP: Creating log watcher for controller kube-system/calico-typha-deployment-76cb9744d8-xsjpg, container calico-typha
STEP: Creating log watcher for controller kube-system/coredns-autoscaler-54d55c8b75-vd5pd, container autoscaler
... skipping 8 lines ...
STEP: Fetching activity logs took 511.022573ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-jpwmqw" namespace
STEP: Deleting all clusters in the capz-e2e-jpwmqw namespace
STEP: Deleting cluster capz-e2e-jpwmqw-aks
INFO: Waiting for the Cluster capz-e2e-jpwmqw/capz-e2e-jpwmqw-aks to be deleted
STEP: Waiting for cluster capz-e2e-jpwmqw-aks to be deleted
E1114 19:38:03.089652   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 3 lines ...
E1114 19:41:29.616605   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-jpwmqw
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1114 19:42:25.044815   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 19:43:02.233535   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "with a single control plane node and 1 node" ran for 14m35s on Ginkgo node 1 of 3


• [SLOW TEST:874.598 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a 1 control plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:455

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Sun, 14 Nov 2021 19:27:44 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-jpf025" for hosting the cluster
Nov 14 19:27:44.856: INFO: starting to create namespace for hosting the "capz-e2e-jpf025" test spec
2021/11/14 19:27:44 failed trying to get namespace (capz-e2e-jpf025):namespaces "capz-e2e-jpf025" not found
INFO: Creating namespace capz-e2e-jpf025
INFO: Creating event watcher for namespace "capz-e2e-jpf025"
Nov 14 19:27:44.890: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-jpf025-oot
INFO: Creating the workload cluster with name "capz-e2e-jpf025-oot" using the "external-cloud-provider" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 98 lines ...
STEP: Fetching activity logs took 585.800018ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-jpf025" namespace
STEP: Deleting all clusters in the capz-e2e-jpf025 namespace
STEP: Deleting cluster capz-e2e-jpf025-oot
INFO: Waiting for the Cluster capz-e2e-jpf025/capz-e2e-jpf025-oot to be deleted
STEP: Waiting for cluster capz-e2e-jpf025-oot to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-thpqz, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-k6sw2, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-qswcs, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-cjhbp, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-8hcd7, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-rrlwf, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-jpf025
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 16m47s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:413

INFO: "with a single control plane node and 1 node" started at Sun, 14 Nov 2021 19:26:44 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-0wzdmy" for hosting the cluster
Nov 14 19:26:44.664: INFO: starting to create namespace for hosting the "capz-e2e-0wzdmy" test spec
2021/11/14 19:26:44 failed trying to get namespace (capz-e2e-0wzdmy):namespaces "capz-e2e-0wzdmy" not found
INFO: Creating namespace capz-e2e-0wzdmy
INFO: Creating event watcher for namespace "capz-e2e-0wzdmy"
Nov 14 19:26:44.727: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-0wzdmy-gpu
INFO: Creating the workload cluster with name "capz-e2e-0wzdmy-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 58 lines ...
STEP: Fetching activity logs took 568.525116ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-0wzdmy" namespace
STEP: Deleting all clusters in the capz-e2e-0wzdmy namespace
STEP: Deleting cluster capz-e2e-0wzdmy-gpu
INFO: Waiting for the Cluster capz-e2e-0wzdmy/capz-e2e-0wzdmy-gpu to be deleted
STEP: Waiting for cluster capz-e2e-0wzdmy-gpu to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-68t2j, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-xfxr7, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-0wzdmy
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and 1 node" ran for 19m39s on Ginkgo node 3 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:532

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Sun, 14 Nov 2021 19:43:21 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-e7e2fx" for hosting the cluster
Nov 14 19:43:21.591: INFO: starting to create namespace for hosting the "capz-e2e-e7e2fx" test spec
2021/11/14 19:43:21 failed trying to get namespace (capz-e2e-e7e2fx):namespaces "capz-e2e-e7e2fx" not found
INFO: Creating namespace capz-e2e-e7e2fx
INFO: Creating event watcher for namespace "capz-e2e-e7e2fx"
Nov 14 19:43:21.632: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-e7e2fx-win-ha
INFO: Creating the workload cluster with name "capz-e2e-e7e2fx-win-ha" using the "windows" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 12 lines ...
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created
clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-e7e2fx-win-ha-flannel created
configmap/cni-capz-e2e-e7e2fx-win-ha-flannel created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1114 19:43:40.198471   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 19:44:33.114669   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by capz-e2e-e7e2fx/capz-e2e-e7e2fx-win-ha-control-plane to be provisioned
STEP: Waiting for one control plane node to exist
E1114 19:45:05.874723   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 5 lines ...
E1114 19:49:57.094739   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be ready
INFO: Waiting for the remaining control plane machines managed by capz-e2e-e7e2fx/capz-e2e-e7e2fx-win-ha-control-plane to be provisioned
STEP: Waiting for all control plane nodes to exist
E1114 19:50:39.060706   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 4 lines ...
E1114 19:54:49.838712   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane capz-e2e-e7e2fx/capz-e2e-e7e2fx-win-ha-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
E1114 19:55:43.767114   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 8 lines ...
E1114 20:02:12.097691   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 20:02:55.635070   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 20:03:29.078233   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 20:04:04.593758   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 20:04:51.900517   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 20:05:35.527955   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 20:06:26.828085   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 20:07:21.818473   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 20:08:06.541385   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 20:09:04.533488   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 20:09:54.406069   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 20:10:31.916225   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 20:11:08.152290   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 20:12:07.598471   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 20:12:48.620884   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 20:13:24.490337   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 20:14:15.307122   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 20:14:46.343725   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
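The repeated reflector errors above are DNS failures: the event watcher for the already-torn-down "capz-e2e-qg6x09" cluster keeps retrying an API server whose public DNS record no longer exists, so every list/watch ends in "no such host". A minimal shell check for that condition (a sketch; `check_host` is a hypothetical helper, and `getent` is assumed available as on typical glibc-based CI images) might look like:

```shell
# check_host: report whether a name still resolves. The watcher above kept
# failing because resolution of the deleted cluster endpoint returned
# "no such host".
check_host() {
  if getent hosts "$1" >/dev/null 2>&1; then
    echo "resolves: $1"
  else
    echo "no such host: $1"
  fi
}

# e.g. against the (now-deleted) workload cluster endpoint from the log:
#   check_host capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com
check_host localhost
```

Since client-go's reflector retries indefinitely, these lines recur until the watcher process itself is stopped, interleaving with the unrelated test output below.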
STEP: Dumping logs from the "capz-e2e-e7e2fx-win-ha" workload cluster
STEP: Dumping workload cluster capz-e2e-e7e2fx/capz-e2e-e7e2fx-win-ha logs
Nov 14 20:14:54.166: INFO: INFO: Collecting logs for node capz-e2e-e7e2fx-win-ha-control-plane-xbwkv in cluster capz-e2e-e7e2fx-win-ha in namespace capz-e2e-e7e2fx

Nov 14 20:15:05.288: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-e7e2fx-win-ha-control-plane-xbwkv

Nov 14 20:15:06.140: INFO: INFO: Collecting logs for node capz-e2e-e7e2fx-win-ha-control-plane-rm5jn in cluster capz-e2e-e7e2fx-win-ha in namespace capz-e2e-e7e2fx

Nov 14 20:15:16.435: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-e7e2fx-win-ha-control-plane-rm5jn

Nov 14 20:15:16.782: INFO: INFO: Collecting logs for node capz-e2e-e7e2fx-win-ha-control-plane-pc7vw in cluster capz-e2e-e7e2fx-win-ha in namespace capz-e2e-e7e2fx

E1114 20:15:17.305280   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 14 20:15:27.457: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-e7e2fx-win-ha-control-plane-pc7vw

Nov 14 20:15:27.754: INFO: INFO: Collecting logs for node capz-e2e-e7e2fx-win-ha-md-0-6jld4 in cluster capz-e2e-e7e2fx-win-ha in namespace capz-e2e-e7e2fx

Nov 14 20:15:31.372: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-e7e2fx-win-ha-md-0-6jld4

STEP: Redacting sensitive information from logs
E1114 20:16:12.495690   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host


• Failure [2012.420 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating a Windows Enabled cluster with dockershim
... skipping 51 lines ...
  with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:579

INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Sun, 14 Nov 2021 19:44:31 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-gp0e9c" for hosting the cluster
Nov 14 19:44:31.563: INFO: starting to create namespace for hosting the "capz-e2e-gp0e9c" test spec
2021/11/14 19:44:31 failed trying to get namespace (capz-e2e-gp0e9c):namespaces "capz-e2e-gp0e9c" not found
INFO: Creating namespace capz-e2e-gp0e9c
INFO: Creating event watcher for namespace "capz-e2e-gp0e9c"
Nov 14 19:44:31.600: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-gp0e9c-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-gp0e9c-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 89 lines ...
STEP: waiting for job default/curl-to-elb-joboleksyc5kfp to be complete
Nov 14 20:08:49.863: INFO: waiting for job default/curl-to-elb-joboleksyc5kfp to be complete
Nov 14 20:08:59.938: INFO: job default/curl-to-elb-joboleksyc5kfp is complete, took 10.074141315s
STEP: connecting directly to the external LB service
Nov 14 20:08:59.938: INFO: starting attempts to connect directly to the external LB service
2021/11/14 20:08:59 [DEBUG] GET http://20.190.208.64
2021/11/14 20:09:29 [ERR] GET http://20.190.208.64 request failed: Get "http://20.190.208.64": dial tcp 20.190.208.64:80: i/o timeout
2021/11/14 20:09:29 [DEBUG] GET http://20.190.208.64: retrying in 1s (4 left)
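The [DEBUG]/[ERR] lines above come from the suite's connect-to-LB check, which retries the GET a fixed number of times with a short backoff before giving up. A rough shell equivalent of that loop (a sketch only: `probe` is a stand-in for the real HTTP GET, here failing twice before succeeding to mimic the i/o timeout followed by the successful connection in the log; the count and delay are illustrative):

```shell
# probe stands in for `GET http://<lb-ip>`: it fails on the first two
# attempts and succeeds on the third.
n=0
probe() { n=$((n + 1)); [ "$n" -ge 3 ]; }

left=4
while :; do
  if probe; then
    echo "successfully connected"
    break
  fi
  if [ "$left" -eq 0 ]; then
    echo "giving up"
    break
  fi
  echo "request failed: retrying in 1s ($left left)"
  left=$((left - 1))
  sleep 1
done
```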
Nov 14 20:09:46.309: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 14 20:09:46.309: INFO: starting to delete external LB service web-windows2q7ydz-elb
Nov 14 20:09:46.377: INFO: starting to delete deployment web-windows2q7ydz
Nov 14 20:09:46.416: INFO: starting to delete job curl-to-elb-joboleksyc5kfp
... skipping 4 lines ...
Nov 14 20:10:00.633: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-gp0e9c-win-vmss-control-plane-9vj75

Nov 14 20:10:01.545: INFO: INFO: Collecting logs for node win-p-win000000 in cluster capz-e2e-gp0e9c-win-vmss in namespace capz-e2e-gp0e9c

Nov 14 20:10:12.992: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-e2e-gp0e9c-win-vmss-mp-0

Failed to get logs for machine pool capz-e2e-gp0e9c-win-vmss-mp-0, cluster capz-e2e-gp0e9c/capz-e2e-gp0e9c-win-vmss: [running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1]
Nov 14 20:10:13.336: INFO: INFO: Collecting logs for node win-p-win000000 in cluster capz-e2e-gp0e9c-win-vmss in namespace capz-e2e-gp0e9c

Nov 14 20:10:36.046: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set win-p-win

STEP: Dumping workload cluster capz-e2e-gp0e9c/capz-e2e-gp0e9c-win-vmss kube-system pod logs
STEP: Fetching kube-system pod logs took 350.452166ms
... skipping 13 lines ...
STEP: Fetching activity logs took 898.352402ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-gp0e9c" namespace
STEP: Deleting all clusters in the capz-e2e-gp0e9c namespace
STEP: Deleting cluster capz-e2e-gp0e9c-win-vmss
INFO: Waiting for the Cluster capz-e2e-gp0e9c/capz-e2e-gp0e9c-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-gp0e9c-win-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-x6vpb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-gp0e9c-win-vmss-control-plane-9vj75, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-gp0e9c-win-vmss-control-plane-9vj75, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-lq8xl, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-k2f9t, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-g2kvh, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-2nrd4, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-gp0e9c-win-vmss-control-plane-9vj75, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-gp0e9c-win-vmss-control-plane-9vj75, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-h56ln, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-gp0e9c
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 41m0s on Ginkgo node 2 of 3

... skipping 3 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating a Windows enabled VMSS cluster with dockershim
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:578
    with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:579
------------------------------
E1114 20:17:01.507726   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 20:17:53.897022   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 20:18:40.804396   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 20:19:20.304107   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 20:20:13.656737   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 20:20:53.248774   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 20:21:28.613685   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 20:21:59.296111   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 20:22:34.287854   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 20:23:28.266578   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 20:24:08.502143   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 20:24:53.011434   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 20:25:27.609156   24426 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-qg6x09/events?resourceVersion=10246": dial tcp: lookup capz-e2e-qg6x09-public-custom-vnet-4b3fde25.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Workload cluster creation Creating a Windows Enabled cluster with dockershim [It] With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/framework/machinedeployment_helpers.go:121

Ran 9 of 23 Specs in 6447.040 seconds
FAIL! -- 8 Passed | 1 Failed | 0 Pending | 14 Skipped


Ginkgo ran 1 suite in 1h48m48.372711688s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc
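Per the notice above, either of the following silences the Ginkgo 2.0 banner in CI (variable name and file path exactly as the notice states):

```shell
# Option 1: set the acknowledgement variable for the test run
export ACK_GINKGO_RC=true

# Option 2: create the marker file once in the home directory
touch "$HOME/.ack-ginkgo-rc"
```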
make[1]: *** [Makefile:176: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:184: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...