Result: FAILURE
Tests: 1 failed / 8 succeeded
Started: 2021-11-22 18:34
Elapsed: 2h1m
Revision: release-0.5

Test Failures


capz-e2e Workload cluster creation Creating an AKS cluster with a single control plane node and 1 node 31m43s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\san\sAKS\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454
Timed out after 1200.001s.
System machine pools not ready
Expected
    <bool>: false
to equal
    <bool>: true
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/aks.go:216
				
Full stdout/stderr: junit.e2e_suite.1.xml
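
The timeout output above is the signature of a Gomega `Eventually` poll: the suite repeatedly checks whether every system machine pool is ready and fails once the 1200-second (20-minute) window expires, producing exactly the "Expected <bool>: false to equal <bool>: true" trace shown. A minimal sketch of that pattern, assuming a hypothetical readiness probe; the real check lives at test/e2e/aks.go:216 and may differ in detail:

```go
// Sketch of the polling pattern behind the failure above.
// waitForSystemMachinePools and its ready callback are hypothetical
// stand-ins, not the actual aks.go implementation.
package e2e

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
)

func waitForSystemMachinePools(ctx context.Context, ready func(context.Context) bool) {
	// Gomega re-runs the function until it returns true or the
	// 20-minute timeout elapses, at which point it reports
	// "Timed out after 1200.00Xs" plus the Equal(true) mismatch
	// and the description string, as seen in this run.
	Eventually(func() bool {
		return ready(ctx)
	}, 20*time.Minute, 10*time.Second).Should(Equal(true), "System machine pools not ready")
}
```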



8 Passed Tests

13 Skipped Tests

Error lines from build-log.txt

... skipping 433 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:269

INFO: "With ipv6 worker node" started at Mon, 22 Nov 2021 18:41:43 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-7on3ez" for hosting the cluster
Nov 22 18:41:43.114: INFO: starting to create namespace for hosting the "capz-e2e-7on3ez" test spec
2021/11/22 18:41:43 failed trying to get namespace (capz-e2e-7on3ez):namespaces "capz-e2e-7on3ez" not found
INFO: Creating namespace capz-e2e-7on3ez
INFO: Creating event watcher for namespace "capz-e2e-7on3ez"
Nov 22 18:41:43.180: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-7on3ez-ipv6
INFO: Creating the workload cluster with name "capz-e2e-7on3ez-ipv6" using the "ipv6" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
STEP: Fetching activity logs took 1.378958753s
STEP: Dumping all the Cluster API resources in the "capz-e2e-7on3ez" namespace
STEP: Deleting all clusters in the capz-e2e-7on3ez namespace
STEP: Deleting cluster capz-e2e-7on3ez-ipv6
INFO: Waiting for the Cluster capz-e2e-7on3ez/capz-e2e-7on3ez-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-7on3ez-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-7on3ez-ipv6-control-plane-nwsbc, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-bnw95, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-7on3ez-ipv6-control-plane-pq6tm, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-lq2xr, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-7on3ez-ipv6-control-plane-nwsbc, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-7on3ez-ipv6-control-plane-nwsbc, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-7on3ez-ipv6-control-plane-z8xz4, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-7on3ez-ipv6-control-plane-z8xz4, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-hcrgq, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-7on3ez-ipv6-control-plane-pq6tm, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-7on3ez-ipv6-control-plane-nwsbc, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-w97lk, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5xl87, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-r958w, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-tfvmz, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-dw4sp, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-j2znp, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-56h98, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-7on3ez-ipv6-control-plane-z8xz4, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-fsh6t, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-7on3ez-ipv6-control-plane-pq6tm, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-7on3ez-ipv6-control-plane-z8xz4, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-7on3ez-ipv6-control-plane-pq6tm, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-7on3ez
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 19m46s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a single control plane node and an AzureMachinePool with 2 nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:315

INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" started at Mon, 22 Nov 2021 19:01:28 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-htlu8x" for hosting the cluster
Nov 22 19:01:28.737: INFO: starting to create namespace for hosting the "capz-e2e-htlu8x" test spec
2021/11/22 19:01:28 failed trying to get namespace (capz-e2e-htlu8x):namespaces "capz-e2e-htlu8x" not found
INFO: Creating namespace capz-e2e-htlu8x
INFO: Creating event watcher for namespace "capz-e2e-htlu8x"
Nov 22 19:01:28.798: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-htlu8x-vmss
INFO: Creating the workload cluster with name "capz-e2e-htlu8x-vmss" using the "machine-pool" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 106 lines ...
STEP: Fetching activity logs took 570.108022ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-htlu8x" namespace
STEP: Deleting all clusters in the capz-e2e-htlu8x namespace
STEP: Deleting cluster capz-e2e-htlu8x-vmss
INFO: Waiting for the Cluster capz-e2e-htlu8x/capz-e2e-htlu8x-vmss to be deleted
STEP: Waiting for cluster capz-e2e-htlu8x-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-r22mj, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-htlu8x-vmss-control-plane-w5p6x, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-htlu8x-vmss-control-plane-w5p6x, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-htlu8x-vmss-control-plane-w5p6x, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-7dqd2, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-669q9, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-lgwtm, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-cvh6f, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-nf4rh, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-sgw9s, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-htlu8x-vmss-control-plane-w5p6x, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-h6rmr, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-95v9f, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-htlu8x
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" ran for 19m28s on Ginkgo node 2 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:203

INFO: "With 3 control-plane nodes and 2 worker nodes" started at Mon, 22 Nov 2021 18:41:43 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-xodb72" for hosting the cluster
Nov 22 18:41:43.108: INFO: starting to create namespace for hosting the "capz-e2e-xodb72" test spec
2021/11/22 18:41:43 failed trying to get namespace (capz-e2e-xodb72):namespaces "capz-e2e-xodb72" not found
INFO: Creating namespace capz-e2e-xodb72
INFO: Creating event watcher for namespace "capz-e2e-xodb72"
Nov 22 18:41:43.180: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-xodb72-ha
INFO: Creating the workload cluster with name "capz-e2e-xodb72-ha" using the "(default)" template (Kubernetes v1.22.1, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 59 lines ...
STEP: waiting for job default/curl-to-elb-jobt9vpifo0xcg to be complete
Nov 22 18:51:00.206: INFO: waiting for job default/curl-to-elb-jobt9vpifo0xcg to be complete
Nov 22 18:51:10.325: INFO: job default/curl-to-elb-jobt9vpifo0xcg is complete, took 10.119188093s
STEP: connecting directly to the external LB service
Nov 22 18:51:10.325: INFO: starting attempts to connect directly to the external LB service
2021/11/22 18:51:10 [DEBUG] GET http://20.99.143.183
2021/11/22 18:51:40 [ERR] GET http://20.99.143.183 request failed: Get "http://20.99.143.183": dial tcp 20.99.143.183:80: i/o timeout
2021/11/22 18:51:40 [DEBUG] GET http://20.99.143.183: retrying in 1s (4 left)
2021/11/22 18:52:11 [ERR] GET http://20.99.143.183 request failed: Get "http://20.99.143.183": dial tcp 20.99.143.183:80: i/o timeout
2021/11/22 18:52:11 [DEBUG] GET http://20.99.143.183: retrying in 2s (3 left)
Nov 22 18:52:13.441: INFO: successfully connected to the external LB service
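
The `[DEBUG] GET ... retrying in 1s (4 left)` lines above match the default log format of HashiCorp's go-retryablehttp, which retries failed requests with backoff until the budget is exhausted. A minimal sketch of that retry behavior, with an illustrative retry budget (the test's actual client configuration may differ):

```go
// Sketch of the retrying HTTP GET pattern seen in the log above.
package main

import (
	"fmt"
	"log"

	retryablehttp "github.com/hashicorp/go-retryablehttp"
)

func main() {
	// retryablehttp retries failed GETs with backoff and logs each
	// attempt as "[DEBUG] GET <url>: retrying in <d> (<n> left)".
	client := retryablehttp.NewClient()
	client.RetryMax = 5 // illustrative; the e2e test's budget may differ

	resp, err := client.Get("http://20.99.143.183") // external LB IP from this run
	if err != nil {
		log.Fatalf("all retries failed: %v", err)
	}
	defer resp.Body.Close()
	fmt.Println("connected, status:", resp.Status)
}
```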
STEP: deleting the test resources
Nov 22 18:52:13.441: INFO: starting to delete external LB service webxf20fu-elb
Nov 22 18:52:13.538: INFO: starting to delete deployment webxf20fu
Nov 22 18:52:13.599: INFO: starting to delete job curl-to-elb-jobt9vpifo0xcg
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Nov 22 18:52:13.706: INFO: starting to create dev deployment namespace
2021/11/22 18:52:13 failed trying to get namespace (development):namespaces "development" not found
2021/11/22 18:52:13 namespace development does not exist, creating...
STEP: Creating production namespace
Nov 22 18:52:13.831: INFO: starting to create prod deployment namespace
2021/11/22 18:52:13 failed trying to get namespace (production):namespaces "production" not found
2021/11/22 18:52:13 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
Nov 22 18:52:13.951: INFO: starting to create frontend-prod deployments
Nov 22 18:52:14.013: INFO: starting to create frontend-dev deployments
Nov 22 18:52:14.084: INFO: starting to create backend deployments
Nov 22 18:52:14.158: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Nov 22 18:52:38.245: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.208.195 port 80: Connection timed out
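
This curl timeout is the expected result: the backend-deny-ingress policy just applied should cut off inbound traffic to the backend pods, so the step passes when the connection fails. A sketch of what such a policy plausibly looks like, built with client-go's networking/v1 types; the pod labels are assumptions inferred from the step descriptions, not the test's exact manifest:

```go
// Sketch of a deny-all-ingress NetworkPolicy like
// development/backend-deny-ingress described above.
package main

import (
	"fmt"

	netv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func backendDenyIngress() *netv1.NetworkPolicy {
	return &netv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "backend-deny-ingress",
			Namespace: "development",
		},
		Spec: netv1.NetworkPolicySpec{
			// Labels assumed from "app: webapp, role: backend pods".
			PodSelector: metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "webapp", "role": "backend"},
			},
			// Listing Ingress with no allow rules denies all
			// inbound traffic to the selected pods.
			PolicyTypes: []netv1.PolicyType{netv1.PolicyTypeIngress},
		},
	}
}

func main() {
	p := backendDenyIngress()
	fmt.Printf("policy %s/%s denies all ingress to pods %v\n",
		p.Namespace, p.Name, p.Spec.PodSelector.MatchLabels)
}
```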

STEP: Cleaning up after ourselves
Nov 22 18:54:49.276: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
STEP: Applying a network policy to deny egress access in development namespace
Nov 22 18:54:49.516: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.208.195 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.208.195 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 22 18:59:10.655: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Nov 22 18:59:10.903: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.17.131 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 22 19:01:21.728: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
Nov 22 19:01:21.976: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.208.194 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.17.131 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 22 19:05:43.871: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Nov 22 19:05:44.105: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.208.195 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 22 19:07:55.715: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
Nov 22 19:07:55.956: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.208.195 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: Dumping logs from the "capz-e2e-xodb72-ha" workload cluster
STEP: Dumping workload cluster capz-e2e-xodb72/capz-e2e-xodb72-ha logs
Nov 22 19:10:06.574: INFO: INFO: Collecting logs for node capz-e2e-xodb72-ha-control-plane-dkxx6 in cluster capz-e2e-xodb72-ha in namespace capz-e2e-xodb72

Nov 22 19:10:17.805: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-xodb72-ha-control-plane-dkxx6
... skipping 39 lines ...
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-xodb72-ha-control-plane-zqm2r, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-sxjdl, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-xodb72-ha-control-plane-dkxx6, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-xodb72-ha-control-plane-dkxx6, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-xodb72-ha-control-plane-hm882, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-xodb72-ha-control-plane-zqm2r, container kube-scheduler
STEP: Got error while iterating over activity logs for resource group capz-e2e-xodb72-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000885153s
STEP: Dumping all the Cluster API resources in the "capz-e2e-xodb72" namespace
STEP: Deleting all clusters in the capz-e2e-xodb72 namespace
STEP: Deleting cluster capz-e2e-xodb72-ha
INFO: Waiting for the Cluster capz-e2e-xodb72/capz-e2e-xodb72-ha to be deleted
STEP: Waiting for cluster capz-e2e-xodb72-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-xodb72-ha-control-plane-dkxx6, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-7d2jq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-q48w2, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-xodb72-ha-control-plane-dkxx6, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-xodb72-ha-control-plane-dkxx6, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5vc82, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-xodb72-ha-control-plane-hm882, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vx9js, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-zswjs, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-drtpm, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-xodb72-ha-control-plane-dkxx6, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-xodb72-ha-control-plane-hm882, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-lfnh7, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-sxjdl, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-xodb72-ha-control-plane-hm882, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-xodb72-ha-control-plane-hm882, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-xodb72
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 worker nodes" ran for 41m9s on Ginkgo node 3 of 3

... skipping 8 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:141

INFO: "Creates a public management cluster in the same vnet" started at Mon, 22 Nov 2021 18:41:43 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-x9v6b5" for hosting the cluster
Nov 22 18:41:43.065: INFO: starting to create namespace for hosting the "capz-e2e-x9v6b5" test spec
2021/11/22 18:41:43 failed trying to get namespace (capz-e2e-x9v6b5):namespaces "capz-e2e-x9v6b5" not found
INFO: Creating namespace capz-e2e-x9v6b5
INFO: Creating event watcher for namespace "capz-e2e-x9v6b5"
Nov 22 18:41:43.109: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-x9v6b5-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-x9v6b5-public-custom-vnet-control-plane-tsjbz, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-x9v6b5-public-custom-vnet-control-plane-tsjbz, container kube-scheduler
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-2dqg2, container coredns
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-x9v6b5-public-custom-vnet-control-plane-tsjbz, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-x9v6b5-public-custom-vnet-control-plane-tsjbz, container etcd
STEP: Creating log watcher for controller kube-system/calico-node-lbhqk, container calico-node
STEP: Got error while iterating over activity logs for resource group capz-e2e-x9v6b5-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000466958s
STEP: Dumping all the Cluster API resources in the "capz-e2e-x9v6b5" namespace
STEP: Deleting all clusters in the capz-e2e-x9v6b5 namespace
STEP: Deleting cluster capz-e2e-x9v6b5-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-x9v6b5/capz-e2e-x9v6b5-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-x9v6b5-public-custom-vnet to be deleted
W1122 19:29:35.931848   24247 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I1122 19:30:07.005881   24247 trace.go:205] Trace[179728839]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (22-Nov-2021 19:29:37.004) (total time: 30001ms):
Trace[179728839]: [30.001360903s] [30.001360903s] END
E1122 19:30:07.005949   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp 20.112.57.181:6443: i/o timeout
I1122 19:30:39.380797   24247 trace.go:205] Trace[153066853]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (22-Nov-2021 19:30:09.379) (total time: 30000ms):
Trace[153066853]: [30.000956307s] [30.000956307s] END
E1122 19:30:39.380891   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp 20.112.57.181:6443: i/o timeout
I1122 19:31:12.633498   24247 trace.go:205] Trace[842935673]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (22-Nov-2021 19:30:42.632) (total time: 30000ms):
Trace[842935673]: [30.000678175s] [30.000678175s] END
E1122 19:31:12.633590   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp 20.112.57.181:6443: i/o timeout
I1122 19:31:50.547144   24247 trace.go:205] Trace[1307503788]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (22-Nov-2021 19:31:20.546) (total time: 30000ms):
Trace[1307503788]: [30.000771154s] [30.000771154s] END
E1122 19:31:50.547199   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp 20.112.57.181:6443: i/o timeout
I1122 19:32:43.996512   24247 trace.go:205] Trace[109038344]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (22-Nov-2021 19:32:13.995) (total time: 30001ms):
Trace[109038344]: [30.001012172s] [30.001012172s] END
E1122 19:32:43.996618   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp 20.112.57.181:6443: i/o timeout
I1122 19:33:45.628990   24247 trace.go:205] Trace[1505683048]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (22-Nov-2021 19:33:15.627) (total time: 30001ms):
Trace[1505683048]: [30.001085909s] [30.001085909s] END
E1122 19:33:45.629057   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp 20.112.57.181:6443: i/o timeout
E1122 19:34:42.111017   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-x9v6b5
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Nov 22 19:34:54.169: INFO: deleting an existing virtual network "custom-vnet"
Nov 22 19:35:04.704: INFO: deleting an existing route table "node-routetable"
Nov 22 19:35:15.161: INFO: deleting an existing network security group "node-nsg"
Nov 22 19:35:25.510: INFO: deleting an existing network security group "control-plane-nsg"
E1122 19:35:29.482340   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 22 19:35:35.875: INFO: verifying the existing resource group "capz-e2e-x9v6b5-public-custom-vnet" is empty
Nov 22 19:35:36.060: INFO: deleting the existing resource group "capz-e2e-x9v6b5-public-custom-vnet"
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1122 19:36:03.292555   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 19:36:35.788593   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 55m29s on Ginkgo node 1 of 3


• [SLOW TEST:3328.631 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:377

INFO: "with a single control plane node and 1 node" started at Mon, 22 Nov 2021 19:20:56 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-qpb95z" for hosting the cluster
Nov 22 19:20:56.709: INFO: starting to create namespace for hosting the "capz-e2e-qpb95z" test spec
2021/11/22 19:20:56 failed trying to get namespace (capz-e2e-qpb95z):namespaces "capz-e2e-qpb95z" not found
INFO: Creating namespace capz-e2e-qpb95z
INFO: Creating event watcher for namespace "capz-e2e-qpb95z"
Nov 22 19:20:56.751: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-qpb95z-gpu
INFO: Creating the workload cluster with name "capz-e2e-qpb95z-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 58 lines ...
STEP: Fetching activity logs took 544.135372ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-qpb95z" namespace
STEP: Deleting all clusters in the capz-e2e-qpb95z namespace
STEP: Deleting cluster capz-e2e-qpb95z-gpu
INFO: Waiting for the Cluster capz-e2e-qpb95z/capz-e2e-qpb95z-gpu to be deleted
STEP: Waiting for cluster capz-e2e-qpb95z-gpu to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-g68wg, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-s8gwd, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-qpb95z
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and 1 node" ran for 25m6s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a 1 control plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:419

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Mon, 22 Nov 2021 19:22:52 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-m67rs7" for hosting the cluster
Nov 22 19:22:52.125: INFO: starting to create namespace for hosting the "capz-e2e-m67rs7" test spec
2021/11/22 19:22:52 failed trying to get namespace (capz-e2e-m67rs7):namespaces "capz-e2e-m67rs7" not found
INFO: Creating namespace capz-e2e-m67rs7
INFO: Creating event watcher for namespace "capz-e2e-m67rs7"
Nov 22 19:22:52.161: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-m67rs7-oot
INFO: Creating the workload cluster with name "capz-e2e-m67rs7-oot" using the "external-cloud-provider" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 98 lines ...
STEP: Fetching activity logs took 511.667741ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-m67rs7" namespace
STEP: Deleting all clusters in the capz-e2e-m67rs7 namespace
STEP: Deleting cluster capz-e2e-m67rs7-oot
INFO: Waiting for the Cluster capz-e2e-m67rs7/capz-e2e-m67rs7-oot to be deleted
STEP: Waiting for cluster capz-e2e-m67rs7-oot to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-cnvl7, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-dbzkq, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-wdzpg, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-nzdkz, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-tgnkz, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-7fk6t, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-m67rs7
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 25m35s on Ginkgo node 3 of 3

... skipping 10 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454

INFO: "with a single control plane node and 1 node" started at Mon, 22 Nov 2021 19:37:11 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-napnt9" for hosting the cluster
Nov 22 19:37:11.700: INFO: starting to create namespace for hosting the "capz-e2e-napnt9" test spec
2021/11/22 19:37:11 failed trying to get namespace (capz-e2e-napnt9):namespaces "capz-e2e-napnt9" not found
INFO: Creating namespace capz-e2e-napnt9
INFO: Creating event watcher for namespace "capz-e2e-napnt9"
Nov 22 19:37:11.726: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-napnt9-aks
INFO: Creating the workload cluster with name "capz-e2e-napnt9-aks" using the "aks-multi-tenancy" template (Kubernetes v1.19.13, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 7 lines ...
machinepool.cluster.x-k8s.io/agentpool1 created
azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1122 19:37:30.861971   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 19:38:10.327597   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 19:38:44.967603   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 19:39:39.476639   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 19:40:23.228407   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 19:41:07.944526   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 19:41:50.681568   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
Nov 22 19:41:53.143: INFO: Waiting for the first control plane machine managed by capz-e2e-napnt9/capz-e2e-napnt9-aks to be provisioned
STEP: Waiting for at least one control plane node to exist
E1122 19:42:32.368450   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 19:43:07.333835   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 19:43:53.569678   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 19:44:36.963573   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 19:45:14.828947   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 19:45:48.810860   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 19:46:22.201889   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 19:46:55.550718   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 19:47:49.949719   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 19:48:29.558885   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 19:49:18.951520   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 19:50:15.453129   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 19:51:01.747205   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 19:51:38.032781   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 19:52:14.055983   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 19:53:03.769586   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 19:53:50.216383   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 19:54:50.011240   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 19:55:30.952820   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 19:56:14.660003   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 19:57:12.755108   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 19:58:01.611824   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 19:58:39.620608   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 19:59:35.922634   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 20:00:17.012576   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 20:00:47.067715   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 20:01:42.640342   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Dumping logs from the "capz-e2e-napnt9-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-napnt9/capz-e2e-napnt9-aks logs
STEP: Dumping workload cluster capz-e2e-napnt9/capz-e2e-napnt9-aks kube-system pod logs
STEP: Fetching kube-system pod logs took 619.473418ms
STEP: Creating log watcher for controller kube-system/calico-node-tcgtd, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-84d976c568-v8t4l, container coredns
... skipping 10 lines ...
STEP: Fetching activity logs took 729.274284ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-napnt9" namespace
STEP: Deleting all clusters in the capz-e2e-napnt9 namespace
STEP: Deleting cluster capz-e2e-napnt9-aks
INFO: Waiting for the Cluster capz-e2e-napnt9/capz-e2e-napnt9-aks to be deleted
STEP: Waiting for cluster capz-e2e-napnt9-aks to be deleted
E1122 20:02:26.484685   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 20:03:01.546018   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 20:03:34.478551   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 20:04:19.791328   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 20:05:07.126020   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 20:05:50.111627   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 20:06:47.394277   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 20:07:21.597832   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-napnt9
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1122 20:08:06.318595   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 20:08:52.531160   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "with a single control plane node and 1 node" ran for 31m44s on Ginkgo node 1 of 3


• Failure [1903.844 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 59 lines ...
  with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543

INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Mon, 22 Nov 2021 19:48:26 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-9gmdt2" for hosting the cluster
Nov 22 19:48:26.788: INFO: starting to create namespace for hosting the "capz-e2e-9gmdt2" test spec
2021/11/22 19:48:26 failed trying to get namespace (capz-e2e-9gmdt2):namespaces "capz-e2e-9gmdt2" not found
INFO: Creating namespace capz-e2e-9gmdt2
INFO: Creating event watcher for namespace "capz-e2e-9gmdt2"
Nov 22 19:48:26.827: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-9gmdt2-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-9gmdt2-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 53 lines ...
STEP: waiting for job default/curl-to-elb-jobf08muwrzpz6 to be complete
Nov 22 19:58:40.251: INFO: waiting for job default/curl-to-elb-jobf08muwrzpz6 to be complete
Nov 22 19:58:50.371: INFO: job default/curl-to-elb-jobf08muwrzpz6 is complete, took 10.120241561s
STEP: connecting directly to the external LB service
Nov 22 19:58:50.371: INFO: starting attempts to connect directly to the external LB service
2021/11/22 19:58:50 [DEBUG] GET http://20.99.185.235
2021/11/22 19:59:20 [ERR] GET http://20.99.185.235 request failed: Get "http://20.99.185.235": dial tcp 20.99.185.235:80: i/o timeout
2021/11/22 19:59:20 [DEBUG] GET http://20.99.185.235: retrying in 1s (4 left)
Nov 22 19:59:21.485: INFO: successfully connected to the external LB service
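(Editor's note: the [DEBUG]/[ERR] lines above, with their "retrying in 1s (4 left)" countdown, match the default logger output of a retrying HTTP client. A minimal sketch of the same connect-to-ELB probe, assuming hashicorp/go-retryablehttp -- the test helper may implement its own loop; the URL is the one from the log:)

package main

import (
	"fmt"
	"log"

	retryablehttp "github.com/hashicorp/go-retryablehttp"
)

func main() {
	// Defaults include RetryWaitMin of 1s, matching "retrying in 1s" above.
	client := retryablehttp.NewClient()
	client.RetryMax = 5 // "(4 left)" after the first failure implies a retry budget of 5

	resp, err := client.Get("http://20.99.185.235")
	if err != nil {
		log.Fatalf("GET failed after retries: %v", err)
	}
	defer resp.Body.Close()
	fmt.Println("successfully connected to the external LB service:", resp.Status)
}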
STEP: deleting the test resources
Nov 22 19:59:21.485: INFO: starting to delete external LB service webwpipi5-elb
Nov 22 19:59:21.570: INFO: starting to delete deployment webwpipi5
Nov 22 19:59:21.628: INFO: starting to delete job curl-to-elb-jobf08muwrzpz6
... skipping 25 lines ...
STEP: waiting for job default/curl-to-elb-jobiosks7r8yvw to be complete
Nov 22 20:06:15.131: INFO: waiting for job default/curl-to-elb-jobiosks7r8yvw to be complete
Nov 22 20:06:25.252: INFO: job default/curl-to-elb-jobiosks7r8yvw is complete, took 10.121442204s
STEP: connecting directly to the external LB service
Nov 22 20:06:25.253: INFO: starting attempts to connect directly to the external LB service
2021/11/22 20:06:25 [DEBUG] GET http://20.112.52.198
2021/11/22 20:06:55 [ERR] GET http://20.112.52.198 request failed: Get "http://20.112.52.198": dial tcp 20.112.52.198:80: i/o timeout
2021/11/22 20:06:55 [DEBUG] GET http://20.112.52.198: retrying in 1s (4 left)
Nov 22 20:07:11.762: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 22 20:07:11.762: INFO: starting to delete external LB service web-windows2exzz6-elb
Nov 22 20:07:11.861: INFO: starting to delete deployment web-windows2exzz6
Nov 22 20:07:11.919: INFO: starting to delete job curl-to-elb-jobiosks7r8yvw
... skipping 4 lines ...
Nov 22 20:07:24.908: INFO: Collecting boot logs for AzureMachine capz-e2e-9gmdt2-win-vmss-control-plane-985qs

Nov 22 20:07:25.794: INFO: Collecting logs for node win-p-win000000 in cluster capz-e2e-9gmdt2-win-vmss in namespace capz-e2e-9gmdt2

Nov 22 20:07:45.053: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-e2e-9gmdt2-win-vmss-mp-0

Failed to get logs for machine pool capz-e2e-9gmdt2-win-vmss-mp-0, cluster capz-e2e-9gmdt2/capz-e2e-9gmdt2-win-vmss: [running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1]
Nov 22 20:07:45.428: INFO: Collecting logs for node win-p-win000000 in cluster capz-e2e-9gmdt2-win-vmss in namespace capz-e2e-9gmdt2

Nov 22 20:08:24.593: INFO: Collecting boot logs for VMSS instance 0 of scale set win-p-win

STEP: Dumping workload cluster capz-e2e-9gmdt2/capz-e2e-9gmdt2-win-vmss kube-system pod logs
STEP: Fetching kube-system pod logs took 515.246952ms
... skipping 13 lines ...
STEP: Fetching activity logs took 1.031623537s
STEP: Dumping all the Cluster API resources in the "capz-e2e-9gmdt2" namespace
STEP: Deleting all clusters in the capz-e2e-9gmdt2 namespace
STEP: Deleting cluster capz-e2e-9gmdt2-win-vmss
INFO: Waiting for the Cluster capz-e2e-9gmdt2/capz-e2e-9gmdt2-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-9gmdt2-win-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-2hvrf, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-88qn9, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-2b6hx, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vn8rd, container kube-proxy: http2: client connection lost
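(Editor's note: each "Creating log watcher" STEP opens a follow-mode log stream per container; when the cluster's VMs are torn down underneath it, the stream dies with the "http2: client connection lost" errors above. A minimal sketch of one such watcher with client-go; the kubeconfig path is a placeholder, the pod/container names are taken from the log:)

package main

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/workload.kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Follow-mode log request for one kube-system pod.
	req := cs.CoreV1().Pods("kube-system").GetLogs(
		"kube-proxy-vn8rd",
		&corev1.PodLogOptions{Container: "kube-proxy", Follow: true})
	stream, err := req.Stream(context.Background())
	if err != nil {
		panic(err)
	}
	defer stream.Close()

	// Copy until the stream breaks; once the node is gone this returns an
	// error such as "http2: client connection lost".
	if _, err := io.Copy(os.Stdout, stream); err != nil {
		os.Stderr.WriteString("log stream ended: " + err.Error() + "\n")
	}
}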
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-9gmdt2
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 34m20s on Ginkgo node 3 of 3

... skipping 3 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating a Windows enabled VMSS cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:542
    with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543
------------------------------
{"component":"entrypoint","file":"prow/entrypoint/run.go:165","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","severity":"error","time":"2021-11-22T20:34:52Z"}
++ early_exit_handler
++ '[' -n 161 ']'
++ kill -TERM 161
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 15 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Mon, 22 Nov 2021 19:46:02 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-otx9in" for hosting the cluster
Nov 22 19:46:02.341: INFO: starting to create namespace for hosting the "capz-e2e-otx9in" test spec
2021/11/22 19:46:02 failed trying to get namespace (capz-e2e-otx9in):namespaces "capz-e2e-otx9in" not found
INFO: Creating namespace capz-e2e-otx9in
INFO: Creating event watcher for namespace "capz-e2e-otx9in"
Nov 22 19:46:02.386: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-otx9in-win-ha
INFO: Creating the workload cluster with name "capz-e2e-otx9in-win-ha" using the "windows" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 145 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-hnbx6, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-7q7l2, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-otx9in-win-ha-control-plane-4522g, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-otx9in-win-ha-control-plane-mdfwv, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-otx9in-win-ha-control-plane-mtk8w, container kube-apiserver
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-otx9in-win-ha-control-plane-mtk8w, container etcd
STEP: Got error while iterating over activity logs for resource group capz-e2e-otx9in-win-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000650333s
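(Editor's note: the StatusCode=500 / "context deadline exceeded" error above, followed by a duration of almost exactly 30s, suggests the activity-log dump pages results under a 30-second context that expired mid-pagination. A minimal sketch of that bound in plain Go; fetchNextPage is a hypothetical stand-in for the SDK's listNextResults call:)

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// fetchNextPage simulates one page of activity-log results; a real
// implementation would call the Azure insights API.
func fetchNextPage(ctx context.Context, page int) ([]string, error) {
	select {
	case <-time.After(2 * time.Second): // pretend each page takes 2s
		return []string{fmt.Sprintf("event-from-page-%d", page)}, nil
	case <-ctx.Done():
		return nil, ctx.Err() // surfaces as "context deadline exceeded"
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	start := time.Now()
	for page := 0; ; page++ {
		events, err := fetchNextPage(ctx, page)
		if errors.Is(err, context.DeadlineExceeded) {
			fmt.Printf("Fetching activity logs took %s: %v\n", time.Since(start), err)
			return
		}
		if err != nil {
			panic(err)
		}
		_ = events // a real dumper would write these to the artifacts dir
	}
}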
STEP: Dumping all the Cluster API resources in the "capz-e2e-otx9in" namespace
STEP: Deleting all clusters in the capz-e2e-otx9in namespace
STEP: Deleting cluster capz-e2e-otx9in-win-ha
INFO: Waiting for the Cluster capz-e2e-otx9in/capz-e2e-otx9in-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-otx9in-win-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-otx9in-win-ha-control-plane-mtk8w, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-otx9in-win-ha-control-plane-mtk8w, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-otx9in-win-ha-control-plane-mtk8w, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-otx9in-win-ha-control-plane-mtk8w, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-xhv9k, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-rhjlq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-7q7l2, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-slg9w, container kube-flannel: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-otx9in
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 49m20s on Ginkgo node 2 of 3

... skipping 3 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating a Windows Enabled cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:494
    With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496
------------------------------
E1122 20:09:40.587063   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 32 lines ...
E1122 20:35:21.910666   24247 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-x9v6b5/events?resourceVersion=8943": dial tcp: lookup capz-e2e-x9v6b5-public-custom-vnet-21473803.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
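(Editor's note: the repeated reflector.go:138 errors come from the event watcher left over from the "capz-e2e-x9v6b5" spec: its reflector keeps re-listing v1.Event against an API server whose DNS name no longer resolves after the workload cluster was deleted, logging and backing off each time. A minimal sketch of that list/watch machinery with client-go; the kubeconfig path is a placeholder:)

package main

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/workload.kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// An informer on v1.Event in the test namespace; when list/watch
	// fails, its reflector logs the error and retries with backoff,
	// which is the repetition seen above.
	factory := informers.NewSharedInformerFactoryWithOptions(
		cs, 30*time.Second, informers.WithNamespace("capz-e2e-x9v6b5"))
	_ = factory.Core().V1().Events().Informer()

	stop := make(chan struct{})
	factory.Start(stop)
	select {} // reflector keeps retrying list/watch until stopped
}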
STEP: Tearing down the management cluster
INFO: Deleting the kind cluster "capz-e2e" failed. You may need to remove this by hand.



Summarizing 1 Failure:

[Fail] Workload cluster creation Creating an AKS cluster [It] with a single control plane node and 1 node 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/aks.go:216
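(Editor's note: aks.go:216 is the Gomega assertion behind this failure: a polled predicate that still returned false when the 1200s budget ran out, producing "Expected <bool>: false to equal <bool>: true". A minimal sketch of that pattern; the predicate body is a hypothetical stand-in for the provider's machine-pool readiness check:)

package main

import (
	"time"

	. "github.com/onsi/gomega"
)

// Hypothetical stand-in: the real check inspects the cluster's system
// machine pools; returning false for the whole window reproduces the failure.
func systemMachinePoolsReady() bool {
	return false
}

func main() {
	RegisterFailHandler(func(message string, _ ...int) { panic(message) })

	// 20*time.Minute matches "Timed out after 1200.001s" in the log, and
	// Equal(true) matches "Expected <bool>: false to equal <bool>: true".
	Eventually(systemMachinePoolsReady, 20*time.Minute, 10*time.Second).
		Should(Equal(true), "System machine pools not ready")
}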

Ran 9 of 22 Specs in 6935.425 seconds
FAIL! -- 8 Passed | 1 Failed | 0 Pending | 13 Skipped


Ginkgo ran 1 suite in 1h57m2.83754134s
Test Suite Failed
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.
All sensitive variables are redacted
{"component":"entrypoint","file":"prow/entrypoint/run.go:252","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process gracefully exited before 15m0s grace period","severity":"error","time":"2021-11-22T20:36:43Z"}