Result: FAILURE
Tests: 1 failed / 8 succeeded
Started: 2021-11-25 18:36
Elapsed: 1h43m
Revision: release-0.5

Test Failures


capz-e2e Workload cluster creation Creating an AKS cluster with a single control plane node and 1 node 35m58s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\san\sAKS\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454
Timed out after 1200.000s.
System machine pools not ready
Expected
    <bool>: false
to equal
    <bool>: true
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/aks.go:216
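The failure above is a readiness poll that exhausted its 20-minute (1200s) budget waiting for the AKS system machine pools. A minimal sketch of that kind of Gomega assertion, assuming a hypothetical allSystemPoolsReady helper in place of the repo's actual check at aks.go:216:

```go
package e2e

import (
	"context"
	"testing"
	"time"

	. "github.com/onsi/gomega"
)

// allSystemPoolsReady is a hypothetical stand-in for the real check at
// aks.go:216; it would inspect the system MachinePools and report whether
// every one of them is ready.
func allSystemPoolsReady(ctx context.Context) bool {
	// ... list MachinePool objects and check their Ready status ...
	return false
}

func TestSystemPoolsReady(t *testing.T) {
	g := NewWithT(t)
	ctx := context.Background()

	// Poll until the pools report ready; once the 20-minute (1200s) budget
	// is spent, this fails exactly like the "Timed out after 1200.000s"
	// output above, printing the expected/actual bool values.
	g.Eventually(func() bool {
		return allSystemPoolsReady(ctx)
	}, 20*time.Minute, 10*time.Second).Should(Equal(true), "System machine pools not ready")
}
```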
				
stdout/stderr from junit.e2e_suite.2.xml



Passed tests: 8
Skipped tests: 13

Error lines from build-log.txt

... skipping 430 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:269

INFO: "With ipv6 worker node" started at Thu, 25 Nov 2021 18:43:03 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-yor96v" for hosting the cluster
Nov 25 18:43:03.381: INFO: starting to create namespace for hosting the "capz-e2e-yor96v" test spec
2021/11/25 18:43:03 failed trying to get namespace (capz-e2e-yor96v):namespaces "capz-e2e-yor96v" not found
INFO: Creating namespace capz-e2e-yor96v
INFO: Creating event watcher for namespace "capz-e2e-yor96v"
Nov 25 18:43:03.447: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-yor96v-ipv6
INFO: Creating the workload cluster with name "capz-e2e-yor96v-ipv6" using the "ipv6" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
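As an aside, the `%!(EXTRA string=cluster-identity-secret)` fragment a few lines up is Go's fmt package flagging a surplus argument: the log call passes one more operand than the format string has verbs. A minimal reproduction (the format string here is illustrative, not the repo's actual call):

```go
package main

import "fmt"

func main() {
	// One surplus operand and no matching verb in the format string:
	// fmt appends the unconsumed argument to the output.
	fmt.Printf("Creating cluster identity secret", "cluster-identity-secret")
	// Output: Creating cluster identity secret%!(EXTRA string=cluster-identity-secret)
}
```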
... skipping 93 lines ...
STEP: Fetching activity logs took 534.915343ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-yor96v" namespace
STEP: Deleting all clusters in the capz-e2e-yor96v namespace
STEP: Deleting cluster capz-e2e-yor96v-ipv6
INFO: Waiting for the Cluster capz-e2e-yor96v/capz-e2e-yor96v-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-yor96v-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-n7d4x, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-yor96v-ipv6-control-plane-9w7bl, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-yor96v-ipv6-control-plane-lrv5q, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-2l6pz, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-yor96v-ipv6-control-plane-9w7bl, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-yor96v-ipv6-control-plane-9w7bl, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-qdd8p, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-yor96v-ipv6-control-plane-9w7bl, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-2k8fz, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-bjh8x, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-yor96v-ipv6-control-plane-vln4s, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-vjpjd, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-yor96v-ipv6-control-plane-lrv5q, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-yor96v-ipv6-control-plane-lrv5q, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-yor96v-ipv6-control-plane-vln4s, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-pl8rt, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-ckjw4, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-yor96v-ipv6-control-plane-vln4s, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-v7blj, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-sthz2, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-f5w7b, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-yor96v-ipv6-control-plane-vln4s, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-yor96v-ipv6-control-plane-lrv5q, container etcd: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-yor96v
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 16m43s on Ginkgo node 3 of 3

... skipping 10 lines ...
  with a single control plane node and an AzureMachinePool with 2 nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:315

INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" started at Thu, 25 Nov 2021 18:59:46 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-f56umz" for hosting the cluster
Nov 25 18:59:46.736: INFO: starting to create namespace for hosting the "capz-e2e-f56umz" test spec
2021/11/25 18:59:46 failed trying to get namespace (capz-e2e-f56umz):namespaces "capz-e2e-f56umz" not found
INFO: Creating namespace capz-e2e-f56umz
INFO: Creating event watcher for namespace "capz-e2e-f56umz"
Nov 25 18:59:46.764: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-f56umz-vmss
INFO: Creating the workload cluster with name "capz-e2e-f56umz-vmss" using the "machine-pool" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 106 lines ...
STEP: Fetching activity logs took 555.952183ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-f56umz" namespace
STEP: Deleting all clusters in the capz-e2e-f56umz namespace
STEP: Deleting cluster capz-e2e-f56umz-vmss
INFO: Waiting for the Cluster capz-e2e-f56umz/capz-e2e-f56umz-vmss to be deleted
STEP: Waiting for cluster capz-e2e-f56umz-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-49n8b, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-nb44q, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-qnqqq, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-gdvlh, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-f56umz
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" ran for 21m52s on Ginkgo node 3 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:203

INFO: "With 3 control-plane nodes and 2 worker nodes" started at Thu, 25 Nov 2021 18:43:03 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-bef4lz" for hosting the cluster
Nov 25 18:43:03.380: INFO: starting to create namespace for hosting the "capz-e2e-bef4lz" test spec
2021/11/25 18:43:03 failed trying to get namespace (capz-e2e-bef4lz):namespaces "capz-e2e-bef4lz" not found
INFO: Creating namespace capz-e2e-bef4lz
INFO: Creating event watcher for namespace "capz-e2e-bef4lz"
Nov 25 18:43:03.438: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-bef4lz-ha
INFO: Creating the workload cluster with name "capz-e2e-bef4lz-ha" using the "(default)" template (Kubernetes v1.22.1, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 59 lines ...
STEP: waiting for job default/curl-to-elb-job0aks074zi88 to be complete
Nov 25 18:53:23.057: INFO: waiting for job default/curl-to-elb-job0aks074zi88 to be complete
Nov 25 18:53:33.181: INFO: job default/curl-to-elb-job0aks074zi88 is complete, took 10.124094725s
STEP: connecting directly to the external LB service
Nov 25 18:53:33.181: INFO: starting attempts to connect directly to the external LB service
2021/11/25 18:53:33 [DEBUG] GET http://20.99.189.40
2021/11/25 18:54:03 [ERR] GET http://20.99.189.40 request failed: Get "http://20.99.189.40": dial tcp 20.99.189.40:80: i/o timeout
2021/11/25 18:54:03 [DEBUG] GET http://20.99.189.40: retrying in 1s (4 left)
Nov 25 18:54:04.291: INFO: successfully connected to the external LB service
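The [DEBUG]/[ERR] lines above match the output of a retrying HTTP client such as hashicorp/go-retryablehttp: the first dial to the freshly provisioned load balancer times out, and a later attempt succeeds. A minimal sketch of that pattern (the URL and retry settings are taken from the log, but the exact configuration in the repo is assumed):

```go
package main

import (
	"fmt"
	"log"
	"time"

	retryablehttp "github.com/hashicorp/go-retryablehttp"
)

func main() {
	// Retry a GET against a freshly created external LB: the first dial can
	// time out (as in the [ERR] line above) before the LB is reachable.
	client := retryablehttp.NewClient()
	client.RetryMax = 4                   // retry budget, cf. "(4 left)" above
	client.RetryWaitMin = 1 * time.Second // cf. "retrying in 1s"
	client.RetryWaitMax = 4 * time.Second

	resp, err := client.Get("http://20.99.189.40")
	if err != nil {
		log.Fatalf("external LB never became reachable: %v", err)
	}
	defer resp.Body.Close()
	fmt.Println("successfully connected to the external LB service:", resp.Status)
}
```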
STEP: deleting the test resources
Nov 25 18:54:04.291: INFO: starting to delete external LB service web79ul6c-elb
Nov 25 18:54:04.394: INFO: starting to delete deployment web79ul6c
Nov 25 18:54:04.458: INFO: starting to delete job curl-to-elb-job0aks074zi88
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Nov 25 18:54:04.617: INFO: starting to create dev deployment namespace
2021/11/25 18:54:04 failed trying to get namespace (development):namespaces "development" not found
2021/11/25 18:54:04 namespace development does not exist, creating...
STEP: Creating production namespace
Nov 25 18:54:04.744: INFO: starting to create prod deployment namespace
2021/11/25 18:54:04 failed trying to get namespace (production):namespaces "production" not found
2021/11/25 18:54:04 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
Nov 25 18:54:04.871: INFO: starting to create frontend-prod deployments
Nov 25 18:54:04.936: INFO: starting to create frontend-dev deployments
Nov 25 18:54:05.014: INFO: starting to create backend deployments
Nov 25 18:54:05.110: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Nov 25 18:54:29.075: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.210.3 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 25 18:56:40.738: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
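For reference, a deny-ingress policy like development/backend-deny-ingress selects the backend pods and declares an Ingress policy type with no ingress rules, so all inbound traffic is dropped; that is why the curl above times out. A minimal sketch of such a policy built with the Kubernetes API types (the labels are inferred from the step description):

```go
package main

import (
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// backendDenyIngress builds a policy that denies all ingress to pods
// labeled app: webapp, role: backend in the development namespace.
func backendDenyIngress() *networkingv1.NetworkPolicy {
	return &networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "backend-deny-ingress",
			Namespace: "development",
		},
		Spec: networkingv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "webapp", "role": "backend"},
			},
			// No Ingress rules plus an Ingress policy type means the
			// selected pods accept no inbound traffic at all.
			PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
		},
	}
}

func main() {
	np := backendDenyIngress()
	fmt.Printf("would apply NetworkPolicy %s/%s\n", np.Namespace, np.Name)
}
```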
STEP: Applying a network policy to deny egress access in development namespace
Nov 25 18:56:40.973: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.210.3 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.210.3 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 25 19:01:03.327: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Nov 25 19:01:03.567: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.210.4 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 25 19:03:16.008: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
Nov 25 19:03:16.246: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.207.130 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.210.4 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 25 19:07:38.147: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Nov 25 19:07:38.389: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.210.3 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 25 19:09:49.219: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
Nov 25 19:09:49.504: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.210.3 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: Dumping logs from the "capz-e2e-bef4lz-ha" workload cluster
STEP: Dumping workload cluster capz-e2e-bef4lz/capz-e2e-bef4lz-ha logs
Nov 25 19:12:01.324: INFO: INFO: Collecting logs for node capz-e2e-bef4lz-ha-control-plane-jzgrn in cluster capz-e2e-bef4lz-ha in namespace capz-e2e-bef4lz

Nov 25 19:12:12.758: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-bef4lz-ha-control-plane-jzgrn
... skipping 39 lines ...
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-bef4lz-ha-control-plane-jzgrn, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-bef4lz-ha-control-plane-45bk8, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-bef4lz-ha-control-plane-45bk8, container etcd
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-bef4lz-ha-control-plane-jzgrn, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-8kdrj, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-2ksx2, container kube-proxy
STEP: Got error while iterating over activity logs for resource group capz-e2e-bef4lz-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000323812s
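The StatusCode=500 / context-deadline error above comes from paging Azure activity logs under a 30-second context: when a page request stalls, the deadline fires and iteration stops, so the fetch reports taking almost exactly 30s. A minimal sketch of the deadline pattern (listActivityLogPage is a hypothetical stand-in; the real code uses the Azure SDK's insights.ActivityLogsClient):

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// listActivityLogPage is a hypothetical stand-in for the SDK's paged
// activity-log listing; it returns the next page token, or "" when done.
func listActivityLogPage(ctx context.Context, resourceGroup, pageToken string) (string, error) {
	// ... fetch one page of activity-log events from Azure Monitor ...
	return "", nil
}

func fetchActivityLogs(resourceGroup string) error {
	// Bound the whole fetch to 30s; if a page request stalls, the deadline
	// fires and the fetch aborts, as in the STEP lines above.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	token := ""
	for {
		next, err := listActivityLogPage(ctx, resourceGroup, token)
		if err != nil {
			if errors.Is(err, context.DeadlineExceeded) {
				fmt.Printf("Got error while iterating over activity logs for resource group %s: %v\n",
					resourceGroup, err)
			}
			return err
		}
		if next == "" {
			return nil // all pages fetched within the deadline
		}
		token = next
	}
}

func main() {
	_ = fetchActivityLogs("capz-e2e-bef4lz-ha")
}
```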
STEP: Dumping all the Cluster API resources in the "capz-e2e-bef4lz" namespace
STEP: Deleting all clusters in the capz-e2e-bef4lz namespace
STEP: Deleting cluster capz-e2e-bef4lz-ha
INFO: Waiting for the Cluster capz-e2e-bef4lz/capz-e2e-bef4lz-ha to be deleted
STEP: Waiting for cluster capz-e2e-bef4lz-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-bef4lz-ha-control-plane-45bk8, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-r2kfz, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-bef4lz-ha-control-plane-bm64n, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-8kdrj, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-bef4lz-ha-control-plane-45bk8, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-82nk2, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-bef4lz-ha-control-plane-bm64n, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-2ksx2, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-fmkpb, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-ndg6z, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-bef4lz-ha-control-plane-jzgrn, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-bef4lz-ha-control-plane-jzgrn, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-bef4lz-ha-control-plane-45bk8, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-bef4lz-ha-control-plane-bm64n, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vtlsp, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-bef4lz-ha-control-plane-jzgrn, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-b9jw4, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-bef4lz-ha-control-plane-45bk8, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-bef4lz-ha-control-plane-jzgrn, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-ng6wt, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-bef4lz-ha-control-plane-bm64n, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-bef4lz
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 worker nodes" ran for 40m21s on Ginkgo node 1 of 3

... skipping 8 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:141

INFO: "Creates a public management cluster in the same vnet" started at Thu, 25 Nov 2021 18:43:03 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-b0dmf5" for hosting the cluster
Nov 25 18:43:03.380: INFO: starting to create namespace for hosting the "capz-e2e-b0dmf5" test spec
2021/11/25 18:43:03 failed trying to get namespace (capz-e2e-b0dmf5):namespaces "capz-e2e-b0dmf5" not found
INFO: Creating namespace capz-e2e-b0dmf5
INFO: Creating event watcher for namespace "capz-e2e-b0dmf5"
Nov 25 18:43:03.442: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-b0dmf5-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-t8tds, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-np6ll, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-2nxhk, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-b0dmf5-public-custom-vnet-control-plane-59h9k, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-22gg4, container coredns
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-kszc7, container coredns
STEP: Got error while iterating over activity logs for resource group capz-e2e-b0dmf5-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000408342s
STEP: Dumping all the Cluster API resources in the "capz-e2e-b0dmf5" namespace
STEP: Deleting all clusters in the capz-e2e-b0dmf5 namespace
STEP: Deleting cluster capz-e2e-b0dmf5-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-b0dmf5/capz-e2e-b0dmf5-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-b0dmf5-public-custom-vnet to be deleted
W1125 19:29:25.330299   24253 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I1125 19:29:56.560603   24253 trace.go:205] Trace[1351356835]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (25-Nov-2021 19:29:26.559) (total time: 30001ms):
Trace[1351356835]: [30.00112148s] [30.00112148s] END
E1125 19:29:56.560671   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp 20.99.205.4:6443: i/o timeout
I1125 19:30:29.184455   24253 trace.go:205] Trace[1707474370]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (25-Nov-2021 19:29:59.183) (total time: 30001ms):
Trace[1707474370]: [30.00131176s] [30.00131176s] END
E1125 19:30:29.184524   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp 20.99.205.4:6443: i/o timeout
I1125 19:31:03.172628   24253 trace.go:205] Trace[763447249]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (25-Nov-2021 19:30:33.171) (total time: 30001ms):
Trace[763447249]: [30.001347393s] [30.001347393s] END
E1125 19:31:03.172691   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp 20.99.205.4:6443: i/o timeout
I1125 19:31:44.817207   24253 trace.go:205] Trace[1839375558]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (25-Nov-2021 19:31:14.816) (total time: 30000ms):
Trace[1839375558]: [30.000892123s] [30.000892123s] END
E1125 19:31:44.817271   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp 20.99.205.4:6443: i/o timeout
I1125 19:32:32.765926   24253 trace.go:205] Trace[917302768]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (25-Nov-2021 19:32:02.764) (total time: 30001ms):
Trace[917302768]: [30.001085071s] [30.001085071s] END
E1125 19:32:32.765995   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp 20.99.205.4:6443: i/o timeout
I1125 19:33:40.312709   24253 trace.go:205] Trace[792719159]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (25-Nov-2021 19:33:10.311) (total time: 30001ms):
Trace[792719159]: [30.001278026s] [30.001278026s] END
E1125 19:33:40.312773   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp 20.99.205.4:6443: i/o timeout
E1125 19:34:35.047482   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp: lookup capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-b0dmf5
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Nov 25 19:34:39.278: INFO: deleting an existing virtual network "custom-vnet"
Nov 25 19:34:49.790: INFO: deleting an existing route table "node-routetable"
Nov 25 19:35:00.128: INFO: deleting an existing network security group "node-nsg"
Nov 25 19:35:10.469: INFO: deleting an existing network security group "control-plane-nsg"
Nov 25 19:35:20.858: INFO: verifying the existing resource group "capz-e2e-b0dmf5-public-custom-vnet" is empty
Nov 25 19:35:20.906: INFO: deleting the existing resource group "capz-e2e-b0dmf5-public-custom-vnet"
E1125 19:35:34.951138   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp: lookup capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1125 19:36:17.261298   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp: lookup capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 53m53s on Ginkgo node 2 of 3


• [SLOW TEST:3233.385 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:377

INFO: "with a single control plane node and 1 node" started at Thu, 25 Nov 2021 19:21:38 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-fho65g" for hosting the cluster
Nov 25 19:21:38.961: INFO: starting to create namespace for hosting the "capz-e2e-fho65g" test spec
2021/11/25 19:21:38 failed trying to get namespace (capz-e2e-fho65g):namespaces "capz-e2e-fho65g" not found
INFO: Creating namespace capz-e2e-fho65g
INFO: Creating event watcher for namespace "capz-e2e-fho65g"
Nov 25 19:21:38.993: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-fho65g-gpu
INFO: Creating the workload cluster with name "capz-e2e-fho65g-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 58 lines ...
STEP: Fetching activity logs took 552.882762ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-fho65g" namespace
STEP: Deleting all clusters in the capz-e2e-fho65g namespace
STEP: Deleting cluster capz-e2e-fho65g-gpu
INFO: Waiting for the Cluster capz-e2e-fho65g/capz-e2e-fho65g-gpu to be deleted
STEP: Waiting for cluster capz-e2e-fho65g-gpu to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-xv469, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-7f7qz, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-fho65g-gpu-control-plane-vk7ht, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-mdz9l, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-fho65g-gpu-control-plane-vk7ht, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-fho65g-gpu-control-plane-vk7ht, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-fho65g-gpu-control-plane-vk7ht, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-z8qlm, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-th8ks, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-fho65g
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and 1 node" ran for 21m14s on Ginkgo node 3 of 3

... skipping 10 lines ...
  with a 1 control plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:419

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Thu, 25 Nov 2021 19:23:24 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-qn0low" for hosting the cluster
Nov 25 19:23:24.392: INFO: starting to create namespace for hosting the "capz-e2e-qn0low" test spec
2021/11/25 19:23:24 failed trying to get namespace (capz-e2e-qn0low):namespaces "capz-e2e-qn0low" not found
INFO: Creating namespace capz-e2e-qn0low
INFO: Creating event watcher for namespace "capz-e2e-qn0low"
Nov 25 19:23:24.422: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-qn0low-oot
INFO: Creating the workload cluster with name "capz-e2e-qn0low-oot" using the "external-cloud-provider" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 98 lines ...
STEP: Fetching activity logs took 563.288401ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-qn0low" namespace
STEP: Deleting all clusters in the capz-e2e-qn0low namespace
STEP: Deleting cluster capz-e2e-qn0low-oot
INFO: Waiting for the Cluster capz-e2e-qn0low/capz-e2e-qn0low-oot to be deleted
STEP: Waiting for cluster capz-e2e-qn0low-oot to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-6ljk9, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-2pqwc, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-mvqpf, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-q5ttr, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-xf8t4, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-fnj4p, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-qn0low
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 22m13s on Ginkgo node 1 of 3

... skipping 10 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454

INFO: "with a single control plane node and 1 node" started at Thu, 25 Nov 2021 19:36:56 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-n9wup6" for hosting the cluster
Nov 25 19:36:56.769: INFO: starting to create namespace for hosting the "capz-e2e-n9wup6" test spec
2021/11/25 19:36:56 failed trying to get namespace (capz-e2e-n9wup6):namespaces "capz-e2e-n9wup6" not found
INFO: Creating namespace capz-e2e-n9wup6
INFO: Creating event watcher for namespace "capz-e2e-n9wup6"
Nov 25 19:36:56.821: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-n9wup6-aks
INFO: Creating the workload cluster with name "capz-e2e-n9wup6-aks" using the "aks-multi-tenancy" template (Kubernetes v1.19.13, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 7 lines ...
machinepool.cluster.x-k8s.io/agentpool1 created
azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1125 19:37:12.054044   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp: lookup capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1125 19:38:11.285335   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp: lookup capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1125 19:39:02.145197   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp: lookup capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1125 19:39:55.405720   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp: lookup capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1125 19:40:39.793557   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp: lookup capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1125 19:41:17.039125   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp: lookup capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
Nov 25 19:41:28.318: INFO: Waiting for the first control plane machine managed by capz-e2e-n9wup6/capz-e2e-n9wup6-aks to be provisioned
STEP: Waiting for at least one control plane node to exist
E1125 19:41:59.453635   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp: lookup capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1125 19:42:43.780538   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp: lookup capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1125 19:43:33.564462   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp: lookup capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1125 19:44:04.150160   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp: lookup capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1125 19:44:52.818601   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp: lookup capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1125 19:45:34.767008   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp: lookup capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1125 19:46:16.634977   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp: lookup capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1125 19:47:02.853552   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp: lookup capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1125 19:47:59.126390   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp: lookup capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1125 19:48:40.689942   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp: lookup capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1125 19:49:19.600793   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp: lookup capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1125 19:50:14.159481   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp: lookup capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1125 19:50:54.215986   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp: lookup capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1125 19:51:26.282924   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp: lookup capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1125 19:52:06.878024   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp: lookup capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1125 19:52:54.775362   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp: lookup capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1125 19:53:52.865936   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp: lookup capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1125 19:54:42.643859   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp: lookup capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1125 19:55:41.925052   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp: lookup capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1125 19:56:15.277258   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp: lookup capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1125 19:57:09.400010   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp: lookup capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1125 19:57:49.346831   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp: lookup capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1125 19:58:20.482491   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp: lookup capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1125 19:58:59.213724   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp: lookup capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1125 19:59:54.090094   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp: lookup capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1125 20:00:53.179511   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp: lookup capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1125 20:01:25.049533   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp: lookup capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Dumping logs from the "capz-e2e-n9wup6-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-n9wup6/capz-e2e-n9wup6-aks logs
STEP: Dumping workload cluster capz-e2e-n9wup6/capz-e2e-n9wup6-aks kube-system pod logs
STEP: Fetching kube-system pod logs took 608.446465ms
STEP: Dumping workload cluster capz-e2e-n9wup6/capz-e2e-n9wup6-aks Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-84d976c568-k7mwk, container coredns
... skipping 10 lines ...
STEP: Fetching activity logs took 756.723711ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-n9wup6" namespace
STEP: Deleting all clusters in the capz-e2e-n9wup6 namespace
STEP: Deleting cluster capz-e2e-n9wup6-aks
INFO: Waiting for the Cluster capz-e2e-n9wup6/capz-e2e-n9wup6-aks to be deleted
STEP: Waiting for cluster capz-e2e-n9wup6-aks to be deleted
E1125 20:01:57.769837   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp: lookup capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1125 20:02:32.970343   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp: lookup capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 12 lines ...
E1125 20:11:20.899140   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp: lookup capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-n9wup6
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1125 20:11:52.370866   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp: lookup capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1125 20:12:51.554310   24253 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-b0dmf5/events?resourceVersion=8431": dial tcp: lookup capz-e2e-b0dmf5-public-custom-vnet-f04ec553.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "with a single control plane node and 1 node" ran for 35m59s on Ginkgo node 2 of 3


• Failure [2158.711 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 57 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Thu, 25 Nov 2021 19:42:53 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-erqpin" for hosting the cluster
Nov 25 19:42:53.389: INFO: starting to create namespace for hosting the "capz-e2e-erqpin" test spec
2021/11/25 19:42:53 failed trying to get namespace (capz-e2e-erqpin):namespaces "capz-e2e-erqpin" not found
INFO: Creating namespace capz-e2e-erqpin
INFO: Creating event watcher for namespace "capz-e2e-erqpin"
Nov 25 19:42:53.427: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-erqpin-win-ha
INFO: Creating the workload cluster with name "capz-e2e-erqpin-win-ha" using the "windows" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 151 lines ...
STEP: Fetching activity logs took 1.025016914s
STEP: Dumping all the Cluster API resources in the "capz-e2e-erqpin" namespace
STEP: Deleting all clusters in the capz-e2e-erqpin namespace
STEP: Deleting cluster capz-e2e-erqpin-win-ha
INFO: Waiting for the Cluster capz-e2e-erqpin/capz-e2e-erqpin-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-erqpin-win-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5w9w2, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-46dzs, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-erqpin-win-ha-control-plane-ffcdk, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-wjfxc, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5mtmp, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-erqpin-win-ha-control-plane-ffcdk, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-vnr77, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-fr7w9, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-erqpin-win-ha-control-plane-ffcdk, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-erqpin-win-ha-control-plane-ffcdk, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-erqpin
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 32m33s on Ginkgo node 3 of 3

... skipping 10 lines ...
  with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543

INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Thu, 25 Nov 2021 19:45:37 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-nw6bue" for hosting the cluster
Nov 25 19:45:37.582: INFO: starting to create namespace for hosting the "capz-e2e-nw6bue" test spec
2021/11/25 19:45:37 failed trying to get namespace (capz-e2e-nw6bue):namespaces "capz-e2e-nw6bue" not found
INFO: Creating namespace capz-e2e-nw6bue
INFO: Creating event watcher for namespace "capz-e2e-nw6bue"
Nov 25 19:45:37.620: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-nw6bue-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-nw6bue-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 129 lines ...
STEP: Fetching activity logs took 996.959301ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-nw6bue" namespace
STEP: Deleting all clusters in the capz-e2e-nw6bue namespace
STEP: Deleting cluster capz-e2e-nw6bue-win-vmss
INFO: Waiting for the Cluster capz-e2e-nw6bue/capz-e2e-nw6bue-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-nw6bue-win-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-8b928, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-zxsqk, container kube-flannel: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-nw6bue
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 31m51s on Ginkgo node 1 of 3

... skipping 9 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Workload cluster creation Creating an AKS cluster [It] with a single control plane node and 1 node 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/aks.go:216
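
For context on the single failure: the assertion at aks.go:216 sits at the end of a timeout-bounded readiness poll for the AKS machine pools. A minimal Ginkgo/Gomega sketch of that shape follows; the predicate, timeout, and polling interval here are assumptions for illustration, not the actual capz code.

// Hypothetical sketch of a Gomega readiness wait like the one that failed
// at test/e2e/aks.go:216; the predicate and durations are assumptions.
package e2e

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
)

// machinePoolsReady is a stand-in: the real test checks whether the AKS
// cluster's system machine pools have reached the Ready state.
func machinePoolsReady(ctx context.Context) bool {
	return false // placeholder
}

func WaitForMachinePools(ctx context.Context) {
	// Eventually polls the predicate until it returns true or the time
	// budget elapses; on timeout it fails the spec with a boolean
	// Expected/to-equal diff.
	Eventually(func() bool {
		return machinePoolsReady(ctx)
	}, 20*time.Minute, 30*time.Second).Should(Equal(true))
}

When the poll budget is exhausted, Gomega fails the spec and Ginkgo records the assertion's file and line, which is what the summary above points to.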

Ran 9 of 22 Specs in 5778.317 seconds
FAIL! -- 8 Passed | 1 Failed | 0 Pending | 13 Skipped


Ginkgo ran 1 suite in 1h37m53.692498145s
Test Suite Failed
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...