Result: FAILURE
Tests: 1 failed / 8 succeeded
Started: 2021-11-20 18:34
Elapsed: 1h57m
Revision: release-0.5

Test Failures


capz-e2e Workload cluster creation Creating an AKS cluster with a single control plane node and 1 node 55m11s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\san\sAKS\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454
Timed out after 1200.001s.
System machine pools not ready
Expected
    <bool>: false
to equal
    <bool>: true
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/aks.go:216
				
Click to see stdout/stderr from junit.e2e_suite.3.xml
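
The failure above has the standard Gomega timeout shape: a bool-returning poll that never flips to true within its 1200s window, reported with the custom message "System machine pools not ready". A minimal sketch of that pattern in Go, using hypothetical helper names rather than the actual code at test/e2e/aks.go:216:

// Hypothetical sketch of the assertion pattern behind the failure above; the
// function and parameter names are illustrative, not the helpers in aks.go.
package e2e_sketch

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
)

// waitForSystemMachinePools polls a readiness check for up to 20 minutes.
// On timeout, Gomega prints a "Timed out after ...s." line, the message below,
// and "Expected <bool>: false to equal <bool>: true", matching the junit
// report output above.
func waitForSystemMachinePools(ctx context.Context, ready func(context.Context) bool) {
	Eventually(func() bool {
		return ready(ctx)
	}, 20*time.Minute, 10*time.Second).Should(Equal(true), "System machine pools not ready")
}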



8 Passed Tests

13 Skipped Tests

Error lines from build-log.txt

... skipping 432 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:269

INFO: "With ipv6 worker node" started at Sat, 20 Nov 2021 18:41:38 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-6oi5zc" for hosting the cluster
Nov 20 18:41:38.081: INFO: starting to create namespace for hosting the "capz-e2e-6oi5zc" test spec
2021/11/20 18:41:38 failed trying to get namespace (capz-e2e-6oi5zc):namespaces "capz-e2e-6oi5zc" not found
INFO: Creating namespace capz-e2e-6oi5zc
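
The "failed trying to get namespace ... not found" line just above is the expected miss in a get-or-create flow, not a test error: the spec looks the namespace up first and only creates it when the Get returns NotFound. A minimal client-go sketch of that flow (the clientset and function name are illustrative):

// Hypothetical get-or-create sketch: the "not found" error above is the
// benign miss before the namespace is created.
package e2e_sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func ensureNamespace(ctx context.Context, cs kubernetes.Interface, name string) (*corev1.Namespace, error) {
	ns, err := cs.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
	if err == nil {
		return ns, nil
	}
	if !apierrors.IsNotFound(err) {
		return nil, err // a real error, not just "not found"
	}
	// Not found: create it, mirroring the "Creating namespace ..." log line.
	return cs.CoreV1().Namespaces().Create(ctx,
		&corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: name}},
		metav1.CreateOptions{})
}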
INFO: Creating event watcher for namespace "capz-e2e-6oi5zc"
Nov 20 18:41:38.167: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-6oi5zc-ipv6
INFO: Creating the workload cluster with name "capz-e2e-6oi5zc-ipv6" using the "ipv6" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
STEP: Fetching activity logs took 528.183777ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-6oi5zc" namespace
STEP: Deleting all clusters in the capz-e2e-6oi5zc namespace
STEP: Deleting cluster capz-e2e-6oi5zc-ipv6
INFO: Waiting for the Cluster capz-e2e-6oi5zc/capz-e2e-6oi5zc-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-6oi5zc-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-q8zvb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-d8xgs, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-6oi5zc-ipv6-control-plane-xj8s6, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-mcn6c, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-6oi5zc-ipv6-control-plane-mgz5g, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-6oi5zc-ipv6-control-plane-mgz5g, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-6oi5zc-ipv6-control-plane-nncck, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-6oi5zc-ipv6-control-plane-nncck, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-zd266, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-28kvw, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-l4qtv, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-6oi5zc-ipv6-control-plane-xj8s6, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-pgrnz, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-6oi5zc-ipv6-control-plane-nncck, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-6oi5zc-ipv6-control-plane-mgz5g, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-6oi5zc-ipv6-control-plane-mgz5g, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-6oi5zc-ipv6-control-plane-nncck, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-gvz8m, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-8mkxd, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-6oi5zc-ipv6-control-plane-xj8s6, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-6oi5zc-ipv6-control-plane-xj8s6, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-64kql, container calico-node: http2: client connection lost
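
The burst of "Got error while streaming logs ... http2: client connection lost" lines above comes from follow-mode log watchers whose connections are severed when the workload cluster's machines are deleted during teardown, so they are expected noise at this stage rather than test failures. A minimal client-go sketch of such a log stream (clientset, pod, and container names are illustrative):

// Hypothetical sketch of the follow-mode log streaming behind the
// "Got error while streaming logs ..." lines above. The stream ends with an
// error once the apiserver connection is lost during cluster deletion.
package e2e_sketch

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

func streamContainerLogs(ctx context.Context, cs kubernetes.Interface, pod, container string) error {
	req := cs.CoreV1().Pods("kube-system").GetLogs(pod, &corev1.PodLogOptions{
		Container: container,
		Follow:    true, // keep streaming until the connection drops
	})
	stream, err := req.Stream(ctx)
	if err != nil {
		return err
	}
	defer stream.Close()
	// Copy until EOF or until the apiserver connection is lost.
	_, err = io.Copy(os.Stdout, stream)
	return err
}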
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-6oi5zc
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 17m32s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a single control plane node and an AzureMachinePool with 2 nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:315

INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" started at Sat, 20 Nov 2021 18:59:09 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-22wh85" for hosting the cluster
Nov 20 18:59:09.996: INFO: starting to create namespace for hosting the "capz-e2e-22wh85" test spec
2021/11/20 18:59:10 failed trying to get namespace (capz-e2e-22wh85):namespaces "capz-e2e-22wh85" not found
INFO: Creating namespace capz-e2e-22wh85
INFO: Creating event watcher for namespace "capz-e2e-22wh85"
Nov 20 18:59:10.025: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-22wh85-vmss
INFO: Creating the workload cluster with name "capz-e2e-22wh85-vmss" using the "machine-pool" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 128 lines ...
  With 3 control-plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:203

INFO: "With 3 control-plane nodes and 2 worker nodes" started at Sat, 20 Nov 2021 18:41:38 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-wym6s7" for hosting the cluster
Nov 20 18:41:38.055: INFO: starting to create namespace for hosting the "capz-e2e-wym6s7" test spec
2021/11/20 18:41:38 failed trying to get namespace (capz-e2e-wym6s7):namespaces "capz-e2e-wym6s7" not found
INFO: Creating namespace capz-e2e-wym6s7
INFO: Creating event watcher for namespace "capz-e2e-wym6s7"
Nov 20 18:41:38.126: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-wym6s7-ha
INFO: Creating the workload cluster with name "capz-e2e-wym6s7-ha" using the "(default)" template (Kubernetes v1.22.1, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 67 lines ...
Nov 20 18:50:56.036: INFO: starting to delete external LB service webyk1zuy-elb
Nov 20 18:50:56.211: INFO: starting to delete deployment webyk1zuy
Nov 20 18:50:56.328: INFO: starting to delete job curl-to-elb-jobhbuj805nu6e
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Nov 20 18:50:56.489: INFO: starting to create dev deployment namespace
2021/11/20 18:50:56 failed trying to get namespace (development):namespaces "development" not found
2021/11/20 18:50:56 namespace development does not exist, creating...
STEP: Creating production namespace
Nov 20 18:50:56.733: INFO: starting to create prod deployment namespace
2021/11/20 18:50:56 failed trying to get namespace (production):namespaces "production" not found
2021/11/20 18:50:56 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
Nov 20 18:50:56.968: INFO: starting to create frontend-prod deployments
Nov 20 18:50:57.086: INFO: starting to create frontend-dev deployments
Nov 20 18:50:57.204: INFO: starting to create backend deployments
Nov 20 18:50:57.322: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Nov 20 18:51:24.170: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.107.4 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 20 18:53:35.393: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
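
The deny-ingress check above works by applying a policy that selects the app: webapp, role: backend pods and lists Ingress in policyTypes with no ingress rules, so all inbound traffic to those pods is dropped and the curl to 192.168.107.4:80 times out as intended. A minimal sketch of such a policy object in Go, assuming the labels shown in the log (illustrative, not the manifest the test actually applies):

// Hypothetical sketch of a deny-ingress policy like development/backend-deny-ingress.
package e2e_sketch

import (
	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func backendDenyIngressPolicy() *networkingv1.NetworkPolicy {
	return &networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "backend-deny-ingress",
			Namespace: "development",
		},
		Spec: networkingv1.NetworkPolicySpec{
			// Select the app: webapp, role: backend pods named in the log.
			PodSelector: metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "webapp", "role": "backend"},
			},
			// Declaring Ingress with no ingress rules denies all inbound traffic.
			PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
		},
	}
}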
STEP: Applying a network policy to deny egress access in development namespace
Nov 20 18:53:36.537: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.107.4 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.107.4 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 20 18:57:59.586: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Nov 20 18:57:59.994: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.152.195 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 20 19:00:12.560: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
Nov 20 19:00:12.962: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.152.193 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.152.195 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 20 19:04:36.748: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Nov 20 19:04:37.153: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.107.4 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 20 19:06:50.016: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
Nov 20 19:06:50.422: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.107.4 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: Dumping logs from the "capz-e2e-wym6s7-ha" workload cluster
STEP: Dumping workload cluster capz-e2e-wym6s7/capz-e2e-wym6s7-ha logs
Nov 20 19:09:02.023: INFO: INFO: Collecting logs for node capz-e2e-wym6s7-ha-control-plane-dpv8g in cluster capz-e2e-wym6s7-ha in namespace capz-e2e-wym6s7

Nov 20 19:09:13.709: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-wym6s7-ha-control-plane-dpv8g
... skipping 39 lines ...
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-wym6s7-ha-control-plane-jc5z5, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-wym6s7-ha-control-plane-zjzp8, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-wym6s7-ha-control-plane-dpv8g, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-6mrl2, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-wym6s7-ha-control-plane-jc5z5, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-wym6s7-ha-control-plane-zjzp8, container kube-controller-manager
STEP: Got error while iterating over activity logs for resource group capz-e2e-wym6s7-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001194427s
STEP: Dumping all the Cluster API resources in the "capz-e2e-wym6s7" namespace
STEP: Deleting all clusters in the capz-e2e-wym6s7 namespace
STEP: Deleting cluster capz-e2e-wym6s7-ha
INFO: Waiting for the Cluster capz-e2e-wym6s7/capz-e2e-wym6s7-ha to be deleted
STEP: Waiting for cluster capz-e2e-wym6s7-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-jg6jw, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-hsfgd, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-wym6s7-ha-control-plane-zjzp8, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-z9tzt, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-qlr2p, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-qc7qm, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-wym6s7-ha-control-plane-zjzp8, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-7x6th, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-wqwvr, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-d8ng7, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-wym6s7-ha-control-plane-zjzp8, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-wym6s7-ha-control-plane-zjzp8, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-5gmfq, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-wym6s7
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 worker nodes" ran for 37m57s on Ginkgo node 1 of 3

... skipping 8 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:141

INFO: "Creates a public management cluster in the same vnet" started at Sat, 20 Nov 2021 18:41:38 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-w0s0nh" for hosting the cluster
Nov 20 18:41:38.054: INFO: starting to create namespace for hosting the "capz-e2e-w0s0nh" test spec
2021/11/20 18:41:38 failed trying to get namespace (capz-e2e-w0s0nh):namespaces "capz-e2e-w0s0nh" not found
INFO: Creating namespace capz-e2e-w0s0nh
INFO: Creating event watcher for namespace "capz-e2e-w0s0nh"
Nov 20 18:41:38.144: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-w0s0nh-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-n546q, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-ch2m5, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-nhqlx, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-rjzcw, container coredns
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-w0s0nh-public-custom-vnet-control-plane-f9hrl, container kube-scheduler
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-w0s0nh-public-custom-vnet-control-plane-f9hrl, container etcd
STEP: Got error while iterating over activity logs for resource group capz-e2e-w0s0nh-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.00103145s
STEP: Dumping all the Cluster API resources in the "capz-e2e-w0s0nh" namespace
STEP: Deleting all clusters in the capz-e2e-w0s0nh namespace
STEP: Deleting cluster capz-e2e-w0s0nh-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-w0s0nh/capz-e2e-w0s0nh-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-w0s0nh-public-custom-vnet to be deleted
W1120 19:26:26.249859   24165 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I1120 19:26:57.214306   24165 trace.go:205] Trace[726153510]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (20-Nov-2021 19:26:27.213) (total time: 30001ms):
Trace[726153510]: [30.0010326s] [30.0010326s] END
E1120 19:26:57.214411   24165 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w0s0nh/events?resourceVersion=8488": dial tcp 20.76.150.36:6443: i/o timeout
I1120 19:27:29.320457   24165 trace.go:205] Trace[1498818067]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (20-Nov-2021 19:26:59.319) (total time: 30001ms):
Trace[1498818067]: [30.001167496s] [30.001167496s] END
E1120 19:27:29.320517   24165 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w0s0nh/events?resourceVersion=8488": dial tcp 20.76.150.36:6443: i/o timeout
I1120 19:28:03.875717   24165 trace.go:205] Trace[1661122949]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (20-Nov-2021 19:27:33.874) (total time: 30001ms):
Trace[1661122949]: [30.001000345s] [30.001000345s] END
E1120 19:28:03.875774   24165 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w0s0nh/events?resourceVersion=8488": dial tcp 20.76.150.36:6443: i/o timeout
I1120 19:28:43.153477   24165 trace.go:205] Trace[1583080243]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (20-Nov-2021 19:28:13.152) (total time: 30000ms):
Trace[1583080243]: [30.000904356s] [30.000904356s] END
E1120 19:28:43.153534   24165 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w0s0nh/events?resourceVersion=8488": dial tcp 20.76.150.36:6443: i/o timeout
I1120 19:29:34.941547   24165 trace.go:205] Trace[489051537]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (20-Nov-2021 19:29:04.940) (total time: 30001ms):
Trace[489051537]: [30.00126242s] [30.00126242s] END
E1120 19:29:34.941613   24165 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w0s0nh/events?resourceVersion=8488": dial tcp 20.76.150.36:6443: i/o timeout
I1120 19:30:47.819691   24165 trace.go:205] Trace[2043858604]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (20-Nov-2021 19:30:17.818) (total time: 30001ms):
Trace[2043858604]: [30.001225754s] [30.001225754s] END
E1120 19:30:47.819756   24165 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w0s0nh/events?resourceVersion=8488": dial tcp 20.76.150.36:6443: i/o timeout
E1120 19:31:33.414139   24165 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w0s0nh/events?resourceVersion=8488": dial tcp: lookup capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-w0s0nh
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Nov 20 19:31:44.803: INFO: deleting an existing virtual network "custom-vnet"
Nov 20 19:31:56.645: INFO: deleting an existing route table "node-routetable"
Nov 20 19:32:07.269: INFO: deleting an existing network security group "node-nsg"
E1120 19:32:10.967179   24165 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w0s0nh/events?resourceVersion=8488": dial tcp: lookup capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 20 19:32:18.219: INFO: deleting an existing network security group "control-plane-nsg"
Nov 20 19:32:29.174: INFO: verifying the existing resource group "capz-e2e-w0s0nh-public-custom-vnet" is empty
Nov 20 19:32:29.214: INFO: deleting the existing resource group "capz-e2e-w0s0nh-public-custom-vnet"
E1120 19:33:07.324432   24165 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w0s0nh/events?resourceVersion=8488": dial tcp: lookup capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1120 19:33:56.914342   24165 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w0s0nh/events?resourceVersion=8488": dial tcp: lookup capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 19:34:27.389948   24165 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w0s0nh/events?resourceVersion=8488": dial tcp: lookup capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 53m31s on Ginkgo node 3 of 3


• [SLOW TEST:3211.030 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a 1 control plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:419

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Sat, 20 Nov 2021 19:19:34 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-nafuop" for hosting the cluster
Nov 20 19:19:34.635: INFO: starting to create namespace for hosting the "capz-e2e-nafuop" test spec
2021/11/20 19:19:34 failed trying to get namespace (capz-e2e-nafuop):namespaces "capz-e2e-nafuop" not found
INFO: Creating namespace capz-e2e-nafuop
INFO: Creating event watcher for namespace "capz-e2e-nafuop"
Nov 20 19:19:34.743: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-nafuop-oot
INFO: Creating the workload cluster with name "capz-e2e-nafuop-oot" using the "external-cloud-provider" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 98 lines ...
STEP: Fetching activity logs took 602.820799ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-nafuop" namespace
STEP: Deleting all clusters in the capz-e2e-nafuop namespace
STEP: Deleting cluster capz-e2e-nafuop-oot
INFO: Waiting for the Cluster capz-e2e-nafuop/capz-e2e-nafuop-oot to be deleted
STEP: Waiting for cluster capz-e2e-nafuop-oot to be deleted
STEP: Got error while streaming logs for pod kube-system/cloud-controller-manager, container cloud-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-nafuop-oot-control-plane-xp5dm, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-nafuop-oot-control-plane-xp5dm, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-4jbck, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-9tflh, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-2pbcc, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-mjhqp, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-rsg4h, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-nafuop-oot-control-plane-xp5dm, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-nafuop-oot-control-plane-xp5dm, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-pvs9s, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-nafuop
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 21m23s on Ginkgo node 1 of 3

... skipping 10 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:377

INFO: "with a single control plane node and 1 node" started at Sat, 20 Nov 2021 19:18:47 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-vadftl" for hosting the cluster
Nov 20 19:18:47.480: INFO: starting to create namespace for hosting the "capz-e2e-vadftl" test spec
2021/11/20 19:18:47 failed trying to get namespace (capz-e2e-vadftl):namespaces "capz-e2e-vadftl" not found
INFO: Creating namespace capz-e2e-vadftl
INFO: Creating event watcher for namespace "capz-e2e-vadftl"
Nov 20 19:18:47.551: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-vadftl-gpu
INFO: Creating the workload cluster with name "capz-e2e-vadftl-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 58 lines ...
STEP: Fetching activity logs took 497.962439ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-vadftl" namespace
STEP: Deleting all clusters in the capz-e2e-vadftl namespace
STEP: Deleting cluster capz-e2e-vadftl-gpu
INFO: Waiting for the Cluster capz-e2e-vadftl/capz-e2e-vadftl-gpu to be deleted
STEP: Waiting for cluster capz-e2e-vadftl-gpu to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-r9htj, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-98x4w, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-vadftl
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and 1 node" ran for 24m59s on Ginkgo node 2 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Sat, 20 Nov 2021 19:40:57 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-rcdfeb" for hosting the cluster
Nov 20 19:40:57.966: INFO: starting to create namespace for hosting the "capz-e2e-rcdfeb" test spec
2021/11/20 19:40:57 failed trying to get namespace (capz-e2e-rcdfeb):namespaces "capz-e2e-rcdfeb" not found
INFO: Creating namespace capz-e2e-rcdfeb
INFO: Creating event watcher for namespace "capz-e2e-rcdfeb"
Nov 20 19:40:58.005: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-rcdfeb-win-ha
INFO: Creating the workload cluster with name "capz-e2e-rcdfeb-win-ha" using the "windows" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 151 lines ...
STEP: Fetching activity logs took 1.208328059s
STEP: Dumping all the Cluster API resources in the "capz-e2e-rcdfeb" namespace
STEP: Deleting all clusters in the capz-e2e-rcdfeb namespace
STEP: Deleting cluster capz-e2e-rcdfeb-win-ha
INFO: Waiting for the Cluster capz-e2e-rcdfeb/capz-e2e-rcdfeb-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-rcdfeb-win-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-rcdfeb-win-ha-control-plane-v9jtp, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-rcdfeb-win-ha-control-plane-chxgp, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-rcdfeb-win-ha-control-plane-v9jtp, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-rcdfeb-win-ha-control-plane-chxgp, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-lmfsb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-rcdfeb-win-ha-control-plane-v9jtp, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-stwqd, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-rcdfeb-win-ha-control-plane-chxgp, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-x24wn, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-rcdfeb-win-ha-control-plane-v9jtp, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-rcdfeb-win-ha-control-plane-chxgp, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-w7s8f, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-bg5nq, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-59v54, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-vfrgb, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-gnf5g, container kube-flannel: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-rcdfeb
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 30m6s on Ginkgo node 1 of 3

... skipping 10 lines ...
  with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543

INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Sat, 20 Nov 2021 19:43:46 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-mzo4oa" for hosting the cluster
Nov 20 19:43:46.879: INFO: starting to create namespace for hosting the "capz-e2e-mzo4oa" test spec
2021/11/20 19:43:46 failed trying to get namespace (capz-e2e-mzo4oa):namespaces "capz-e2e-mzo4oa" not found
INFO: Creating namespace capz-e2e-mzo4oa
INFO: Creating event watcher for namespace "capz-e2e-mzo4oa"
Nov 20 19:43:46.920: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-mzo4oa-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-mzo4oa-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 89 lines ...
STEP: waiting for job default/curl-to-elb-jobbfm1krhorua to be complete
Nov 20 19:58:47.225: INFO: waiting for job default/curl-to-elb-jobbfm1krhorua to be complete
Nov 20 19:58:57.452: INFO: job default/curl-to-elb-jobbfm1krhorua is complete, took 10.226109621s
STEP: connecting directly to the external LB service
Nov 20 19:58:57.452: INFO: starting attempts to connect directly to the external LB service
2021/11/20 19:58:57 [DEBUG] GET http://20.82.53.53
2021/11/20 19:59:27 [ERR] GET http://20.82.53.53 request failed: Get "http://20.82.53.53": dial tcp 20.82.53.53:80: i/o timeout
2021/11/20 19:59:27 [DEBUG] GET http://20.82.53.53: retrying in 1s (4 left)
Nov 20 19:59:28.677: INFO: successfully connected to the external LB service
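
The "[DEBUG] GET ..." / "retrying in 1s (4 left)" lines above have the shape of a retrying HTTP client, with each attempt given roughly a 30s dial window before the next retry. A minimal sketch of such a retry loop, assuming hashicorp/go-retryablehttp (the library choice and the constants are assumptions based on the log format, not taken from the test source):

// Hypothetical sketch of a retrying GET against the external LB seen above.
package e2e_sketch

import (
	"fmt"
	"time"

	retryablehttp "github.com/hashicorp/go-retryablehttp"
)

func curlExternalLB(url string) error {
	client := retryablehttp.NewClient()
	client.RetryMax = 5                           // retry budget (illustrative)
	client.RetryWaitMin = 1 * time.Second         // "retrying in 1s"
	client.HTTPClient.Timeout = 30 * time.Second  // per-attempt window, as in the i/o timeout above

	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("GET %s failed after retries: %w", url, err)
	}
	defer resp.Body.Close()
	fmt.Printf("successfully connected to %s (status %d)\n", url, resp.StatusCode)
	return nil
}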
STEP: deleting the test resources
Nov 20 19:59:28.678: INFO: starting to delete external LB service web-windowshzrmxb-elb
Nov 20 19:59:28.813: INFO: starting to delete deployment web-windowshzrmxb
Nov 20 19:59:28.926: INFO: starting to delete job curl-to-elb-jobbfm1krhorua
... skipping 23 lines ...
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-mzo4oa-win-vmss-control-plane-8z54v, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-2pz8s, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-windows-amd64-ppcrl, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-7bcf8, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-proxy-9j5k2, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-b5k7z, container coredns
STEP: Got error while iterating over activity logs for resource group capz-e2e-mzo4oa-win-vmss: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.00038534s
STEP: Dumping all the Cluster API resources in the "capz-e2e-mzo4oa" namespace
STEP: Deleting all clusters in the capz-e2e-mzo4oa namespace
STEP: Deleting cluster capz-e2e-mzo4oa-win-vmss
INFO: Waiting for the Cluster capz-e2e-mzo4oa/capz-e2e-mzo4oa-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-mzo4oa-win-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-mnrcg, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-mg7ht, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-b5k7z, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-mzo4oa-win-vmss-control-plane-8z54v, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-ppcrl, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-mzo4oa-win-vmss-control-plane-8z54v, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-cwlws, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-mzo4oa-win-vmss-control-plane-8z54v, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-2pz8s, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-7bcf8, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-mzo4oa-win-vmss-control-plane-8z54v, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-9j5k2, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-mzo4oa
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 33m32s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454

INFO: "with a single control plane node and 1 node" started at Sat, 20 Nov 2021 19:35:09 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-s5of1m" for hosting the cluster
Nov 20 19:35:09.088: INFO: starting to create namespace for hosting the "capz-e2e-s5of1m" test spec
2021/11/20 19:35:09 failed trying to get namespace (capz-e2e-s5of1m):namespaces "capz-e2e-s5of1m" not found
INFO: Creating namespace capz-e2e-s5of1m
INFO: Creating event watcher for namespace "capz-e2e-s5of1m"
Nov 20 19:35:09.130: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-s5of1m-aks
INFO: Creating the workload cluster with name "capz-e2e-s5of1m-aks" using the "aks-multi-tenancy" template (Kubernetes v1.19.13, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 7 lines ...
machinepool.cluster.x-k8s.io/agentpool1 created
azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1120 19:35:17.098507   24165 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w0s0nh/events?resourceVersion=8488": dial tcp: lookup capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 19:36:09.656093   24165 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w0s0nh/events?resourceVersion=8488": dial tcp: lookup capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 19:37:03.631420   24165 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w0s0nh/events?resourceVersion=8488": dial tcp: lookup capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 19:37:53.327525   24165 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w0s0nh/events?resourceVersion=8488": dial tcp: lookup capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 19:38:28.757449   24165 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w0s0nh/events?resourceVersion=8488": dial tcp: lookup capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
Nov 20 19:38:50.731: INFO: Waiting for the first control plane machine managed by capz-e2e-s5of1m/capz-e2e-s5of1m-aks to be provisioned
STEP: Waiting for atleast one control plane node to exist
E1120 19:39:12.833732   24165 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w0s0nh/events?resourceVersion=8488": dial tcp: lookup capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 19:40:08.624094   24165 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w0s0nh/events?resourceVersion=8488": dial tcp: lookup capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 19:41:02.491233   24165 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w0s0nh/events?resourceVersion=8488": dial tcp: lookup capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 19:41:56.502965   24165 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w0s0nh/events?resourceVersion=8488": dial tcp: lookup capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 19:42:31.456459   24165 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w0s0nh/events?resourceVersion=8488": dial tcp: lookup capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 19:43:18.065810   24165 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w0s0nh/events?resourceVersion=8488": dial tcp: lookup capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 19:43:51.538908   24165 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w0s0nh/events?resourceVersion=8488": dial tcp: lookup capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 19:44:22.812200   24165 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w0s0nh/events?resourceVersion=8488": dial tcp: lookup capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 19:45:15.066554   24165 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w0s0nh/events?resourceVersion=8488": dial tcp: lookup capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 19:45:55.448315   24165 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w0s0nh/events?resourceVersion=8488": dial tcp: lookup capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 19:46:31.219554   24165 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w0s0nh/events?resourceVersion=8488": dial tcp: lookup capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 19:47:06.876917   24165 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w0s0nh/events?resourceVersion=8488": dial tcp: lookup capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 19:48:06.219864   24165 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w0s0nh/events?resourceVersion=8488": dial tcp: lookup capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 19:48:48.455146   24165 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w0s0nh/events?resourceVersion=8488": dial tcp: lookup capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 19:49:27.852824   24165 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w0s0nh/events?resourceVersion=8488": dial tcp: lookup capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 11 lines ...
E1120 19:58:33.155536   24165 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w0s0nh/events?resourceVersion=8488": dial tcp: lookup capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
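
The repeated errors above come from a client-go reflector whose watch target, the workload cluster's API server DNS name, no longer resolves once that cluster is torn down; the reflector keeps retrying its list/watch and logs the same failure each time. As a rough illustration only, a minimal event watcher of the kind that emits these reflector messages is sketched below; the kubeconfig path, resync period, and watch window are assumptions and this is not the actual capz e2e watcher helper.

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path for the workload cluster (assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/workload.kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The shared informer's internal reflector lists and watches v1 Events;
	// when the API server hostname stops resolving, it retries and logs
	// "Failed to watch *v1.Event: failed to list *v1.Event: ... no such host".
	factory := informers.NewSharedInformerFactoryWithOptions(
		cs, 30*time.Second, informers.WithNamespace("capz-e2e-w0s0nh"))
	eventInformer := factory.Core().V1().Events().Informer()
	eventInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			if ev, ok := obj.(*corev1.Event); ok {
				fmt.Printf("%s/%s: %s\n", ev.Namespace, ev.Name, ev.Message)
			}
		},
	})

	stop := make(chan struct{})
	factory.Start(stop)
	if !cache.WaitForCacheSync(stop, eventInformer.HasSynced) {
		fmt.Println("event cache never synced (e.g. API server unreachable)")
	}

	// Watch for a fixed window, then stop.
	time.Sleep(2 * time.Minute)
	close(stop)
}
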
STEP: Dumping logs from the "capz-e2e-s5of1m-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-s5of1m/capz-e2e-s5of1m-aks logs
STEP: Dumping workload cluster capz-e2e-s5of1m/capz-e2e-s5of1m-aks kube-system pod logs
STEP: Fetching kube-system pod logs took 1.024419291s
STEP: Dumping workload cluster capz-e2e-s5of1m/capz-e2e-s5of1m-aks Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-autoscaler-54d55c8b75-8xkpp, container autoscaler
... skipping 10 lines ...
STEP: Fetching activity logs took 879.906167ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-s5of1m" namespace
STEP: Deleting all clusters in the capz-e2e-s5of1m namespace
STEP: Deleting cluster capz-e2e-s5of1m-aks
INFO: Waiting for the Cluster capz-e2e-s5of1m/capz-e2e-s5of1m-aks to be deleted
STEP: Waiting for cluster capz-e2e-s5of1m-aks to be deleted
E1120 19:59:26.092878   24165 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w0s0nh/events?resourceVersion=8488": dial tcp: lookup capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 37 lines ...
E1120 20:28:34.030389   24165 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w0s0nh/events?resourceVersion=8488": dial tcp: lookup capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Redacting sensitive information from logs
E1120 20:29:10.349477   24165 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w0s0nh/events?resourceVersion=8488": dial tcp: lookup capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 20:29:41.542081   24165 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w0s0nh/events?resourceVersion=8488": dial tcp: lookup capz-e2e-w0s0nh-public-custom-vnet-2bc0042f.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host


• Failure [3311.765 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating an AKS cluster
... skipping 55 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Workload cluster creation Creating an AKS cluster [It] with a single control plane node and 1 node 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/aks.go:216

Ran 9 of 22 Specs in 6639.223 seconds
FAIL! -- 8 Passed | 1 Failed | 0 Pending | 13 Skipped
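
For orientation, the failing assertion lives in test/e2e/aks.go:216. The sketch below shows, in generic Ginkgo/Gomega form, how a polling readiness check inside such a spec is typically structured; the helper name machinePoolsReady, the timeout, and the polling interval are illustrative assumptions, not the real code at that line.

package e2e_test

import (
	"context"
	"time"

	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
)

// machinePoolsReady is a hypothetical stand-in; the real readiness logic
// queries the management cluster for the AKS node pools' status.
func machinePoolsReady(ctx context.Context) bool {
	return false // placeholder
}

var _ = Describe("Workload cluster creation", func() {
	Context("Creating an AKS cluster", func() {
		It("with a single control plane node and 1 node", func() {
			ctx := context.Background()

			// A bounded polling wait of this shape fails the spec with a
			// timeout if the condition never becomes true; timeout and
			// interval values here are assumed, not taken from the suite.
			Eventually(func() bool {
				return machinePoolsReady(ctx)
			}, 20*time.Minute, 10*time.Second).Should(Equal(true))
		})
	})
})
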


Ginkgo ran 1 suite in 1h52m4.171810922s
Test Suite Failed
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
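
The redaction step above scrubs credentials from the collected artifacts before they are uploaded. Purely to illustrate the idea (the patterns, file extensions, and command-line usage are assumptions, not the CI script's actual rules), a minimal Go version could look like this:

package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"path/filepath"
	"regexp"
)

// redactPatterns is a hypothetical list; the real tooling knows which
// variables it actually needs to scrub.
var redactPatterns = []*regexp.Regexp{
	regexp.MustCompile(`(?i)(client_secret|password|token)=\S+`),
}

// redactFile rewrites a single file in place with matches replaced.
func redactFile(path string) error {
	data, err := ioutil.ReadFile(path)
	if err != nil {
		return err
	}
	for _, re := range redactPatterns {
		data = re.ReplaceAll(data, []byte("${1}=REDACTED"))
	}
	return ioutil.WriteFile(path, data, 0o644)
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: redact <artifacts-dir>")
		os.Exit(1)
	}
	// Walk the artifacts directory and redact every .log/.txt file in place.
	err := filepath.Walk(os.Args[1], func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		if ext := filepath.Ext(path); ext == ".log" || ext == ".txt" {
			return redactFile(path)
		}
		return nil
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, "redaction failed:", err)
		os.Exit(1)
	}
}
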
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...