Result: FAILURE
Tests: 1 failed / 7 succeeded
Started: 2022-05-14 19:43
Elapsed: 1h42m
Revision: release-0.5

Test Failures


capz-e2e Workload cluster creation Creating a cluster that uses the external cloud provider with a 1 control plane nodes and 2 worker nodes 22m3s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\scluster\sthat\suses\sthe\sexternal\scloud\sprovider\swith\sa\s1\scontrol\splane\snodes\sand\s2\sworker\snodes$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:419
Timed out after 1200.002s.
Expected
    <bool>: false
to be true
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.8-0.20220215165403-0234afe87ffe/framework/controlplane_helpers.go:145
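The assertion that timed out is a Gomega `Eventually` polling a boolean condition; after the 20-minute (1200s) budget it fails with exactly the "Expected <bool>: false to be true" shape shown above. A minimal sketch, assuming Gomega, with a hypothetical `controlPlaneIsReady` probe standing in for the framework's machine-count check in controlplane_helpers.go:

```go
package e2e_test

import (
	"testing"
	"time"

	. "github.com/onsi/gomega"
)

// controlPlaneIsReady is a hypothetical stand-in for the framework's
// check that the expected control plane machines exist and are ready.
func controlPlaneIsReady() bool {
	return false // never becomes true -> Eventually times out
}

func TestControlPlaneWaitShape(t *testing.T) {
	g := NewWithT(t)
	// Poll every 10s for up to 20 minutes; on timeout Gomega reports
	// "Timed out after 1200.00xs. Expected <bool>: false to be true".
	g.Eventually(controlPlaneIsReady, 20*time.Minute, 10*time.Second).Should(BeTrue())
}
```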
				
stdout/stderr: junit.e2e_suite.3.xml



7 Passed Tests

14 Skipped Tests

Error lines from build-log.txt

... skipping 437 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:269

INFO: "With ipv6 worker node" started at Sat, 14 May 2022 19:50:25 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-yh5yg7" for hosting the cluster
May 14 19:50:25.950: INFO: starting to create namespace for hosting the "capz-e2e-yh5yg7" test spec
2022/05/14 19:50:25 failed trying to get namespace (capz-e2e-yh5yg7):namespaces "capz-e2e-yh5yg7" not found
INFO: Creating namespace capz-e2e-yh5yg7
INFO: Creating event watcher for namespace "capz-e2e-yh5yg7"
May 14 19:50:26.025: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-yh5yg7-ipv6
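The stray %!(EXTRA string=cluster-identity-secret) prefix above (and on the matching lines later in this log) is Go's fmt package flagging a format call that received an argument with no corresponding verb; a minimal reproduction:

```go
package main

import "fmt"

func main() {
	// No %s verb for the second argument, so fmt appends the
	// "%!(EXTRA string=...)" marker seen in the log.
	fmt.Printf("Creating cluster identity secret", "cluster-identity-secret")
	// Output: Creating cluster identity secret%!(EXTRA string=cluster-identity-secret)
}
```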
INFO: Creating the workload cluster with name "capz-e2e-yh5yg7-ipv6" using the "ipv6" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
STEP: Fetching activity logs took 537.701101ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-yh5yg7" namespace
STEP: Deleting all clusters in the capz-e2e-yh5yg7 namespace
STEP: Deleting cluster capz-e2e-yh5yg7-ipv6
INFO: Waiting for the Cluster capz-e2e-yh5yg7/capz-e2e-yh5yg7-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-yh5yg7-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-yh5yg7-ipv6-control-plane-2nlcw, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-9b52m, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-4xj8f, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-yh5yg7-ipv6-control-plane-dd68g, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-flf6r, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-l7mnb, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-jcvjs, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-yh5yg7-ipv6-control-plane-2nlcw, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-yh5yg7-ipv6-control-plane-btmnx, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-yh5yg7-ipv6-control-plane-dd68g, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-yh5yg7-ipv6-control-plane-btmnx, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-yh5yg7-ipv6-control-plane-dd68g, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-qz54v, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-yh5yg7-ipv6-control-plane-btmnx, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-689ns, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-yh5yg7-ipv6-control-plane-dd68g, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-yh5yg7-ipv6-control-plane-2nlcw, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-g4rm2, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-yh5yg7-ipv6-control-plane-btmnx, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-22nvs, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-yh5yg7-ipv6-control-plane-2nlcw, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-82lx5, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-m9vlw, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-yh5yg7
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 16m22s on Ginkgo node 3 of 3

... skipping 10 lines ...
  with a single control plane node and an AzureMachinePool with 2 nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:315

INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" started at Sat, 14 May 2022 20:06:48 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-s49l83" for hosting the cluster
May 14 20:06:48.014: INFO: starting to create namespace for hosting the "capz-e2e-s49l83" test spec
2022/05/14 20:06:48 failed trying to get namespace (capz-e2e-s49l83):namespaces "capz-e2e-s49l83" not found
INFO: Creating namespace capz-e2e-s49l83
INFO: Creating event watcher for namespace "capz-e2e-s49l83"
May 14 20:06:48.056: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-s49l83-vmss
INFO: Creating the workload cluster with name "capz-e2e-s49l83-vmss" using the "machine-pool" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 130 lines ...
  With 3 control-plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:203

INFO: "With 3 control-plane nodes and 2 worker nodes" started at Sat, 14 May 2022 19:50:25 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-xv2je0" for hosting the cluster
May 14 19:50:25.918: INFO: starting to create namespace for hosting the "capz-e2e-xv2je0" test spec
2022/05/14 19:50:25 failed trying to get namespace (capz-e2e-xv2je0):namespaces "capz-e2e-xv2je0" not found
INFO: Creating namespace capz-e2e-xv2je0
INFO: Creating event watcher for namespace "capz-e2e-xv2je0"
May 14 19:50:25.999: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-xv2je0-ha
INFO: Creating the workload cluster with name "capz-e2e-xv2je0-ha" using the "(default)" template (Kubernetes v1.22.1, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 59 lines ...
STEP: waiting for job default/curl-to-elb-jobqkzkw1r1dg2 to be complete
May 14 20:00:21.528: INFO: waiting for job default/curl-to-elb-jobqkzkw1r1dg2 to be complete
May 14 20:00:31.600: INFO: job default/curl-to-elb-jobqkzkw1r1dg2 is complete, took 10.071371785s
STEP: connecting directly to the external LB service
May 14 20:00:31.600: INFO: starting attempts to connect directly to the external LB service
2022/05/14 20:00:31 [DEBUG] GET http://20.88.124.37
2022/05/14 20:01:01 [ERR] GET http://20.88.124.37 request failed: Get "http://20.88.124.37": dial tcp 20.88.124.37:80: i/o timeout
2022/05/14 20:01:01 [DEBUG] GET http://20.88.124.37: retrying in 1s (4 left)
May 14 20:01:18.013: INFO: successfully connected to the external LB service
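The [DEBUG]/[ERR] lines and the "retrying in 1s (4 left)" message above match the output of a retrying HTTP client such as hashicorp/go-retryablehttp; a minimal sketch, assuming that library (the address is the test's ephemeral LB IP, used only for illustration):

```go
package main

import (
	"log"

	retryablehttp "github.com/hashicorp/go-retryablehttp"
)

func main() {
	client := retryablehttp.NewClient()
	client.RetryMax = 4 // a retry budget matching the "(4 left)" message above
	resp, err := client.Get("http://20.88.124.37")
	if err != nil {
		log.Fatalf("all retries failed: %v", err) // e.g. repeated dial timeouts
	}
	defer resp.Body.Close()
	log.Printf("successfully connected: %s", resp.Status)
}
```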
STEP: deleting the test resources
May 14 20:01:18.013: INFO: starting to delete external LB service web77oofd-elb
May 14 20:01:18.109: INFO: starting to delete deployment web77oofd
May 14 20:01:18.146: INFO: starting to delete job curl-to-elb-jobqkzkw1r1dg2
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
May 14 20:01:18.233: INFO: starting to create dev deployment namespace
2022/05/14 20:01:18 failed trying to get namespace (development):namespaces "development" not found
2022/05/14 20:01:18 namespace development does not exist, creating...
STEP: Creating production namespace
May 14 20:01:18.323: INFO: starting to create prod deployment namespace
2022/05/14 20:01:18 failed trying to get namespace (production):namespaces "production" not found
2022/05/14 20:01:18 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
May 14 20:01:18.401: INFO: starting to create frontend-prod deployments
May 14 20:01:18.443: INFO: starting to create frontend-dev deployments
May 14 20:01:18.487: INFO: starting to create backend deployments
May 14 20:01:18.524: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
May 14 20:01:41.358: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.161.131 port 80: Connection timed out

STEP: Cleaning up after ourselves
May 14 20:03:51.209: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
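The backend-deny-ingress policy exercised above plausibly selects the app=webapp, role=backend pods and lists no ingress rules, which denies all inbound traffic and is why the curl times out. A sketch of that shape using the Kubernetes Go types (the labels are inferred from the step description, not taken from the actual manifest):

```go
package main

import (
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func backendDenyIngress() *networkingv1.NetworkPolicy {
	return &networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "backend-deny-ingress",
			Namespace: "development",
		},
		Spec: networkingv1.NetworkPolicySpec{
			// Select the pods the test targets.
			PodSelector: metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "webapp", "role": "backend"},
			},
			// Ingress listed as a policy type with no rule entries:
			// all inbound traffic to the selected pods is denied.
			PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
		},
	}
}

func main() {
	fmt.Println(backendDenyIngress().Name)
}
```

The later deny-egress and allow-by-label policies in this spec follow the same pattern, with PolicyTypeEgress and explicit rule entries respectively.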
STEP: Applying a network policy to deny egress access in development namespace
May 14 20:03:51.392: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.161.131 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.161.131 port 80: Connection timed out

STEP: Cleaning up after ourselves
May 14 20:08:13.074: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
May 14 20:08:13.266: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.161.132 port 80: Connection timed out

STEP: Cleaning up after ourselves
May 14 20:10:24.425: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
May 14 20:10:24.610: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.161.130 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.161.132 port 80: Connection timed out

STEP: Cleaning up after ourselves
May 14 20:14:46.572: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
May 14 20:14:46.758: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.161.131 port 80: Connection timed out

STEP: Cleaning up after ourselves
May 14 20:16:57.641: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
May 14 20:16:57.823: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.161.131 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: Dumping logs from the "capz-e2e-xv2je0-ha" workload cluster
STEP: Dumping workload cluster capz-e2e-xv2je0/capz-e2e-xv2je0-ha logs
May 14 20:19:09.126: INFO: INFO: Collecting logs for node capz-e2e-xv2je0-ha-control-plane-j54qq in cluster capz-e2e-xv2je0-ha in namespace capz-e2e-xv2je0

May 14 20:19:24.021: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-xv2je0-ha-control-plane-j54qq
... skipping 39 lines ...
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-xv2je0-ha-control-plane-vjcjn, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-xv2je0-ha-control-plane-dq79c, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-crq7q, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-proxy-m82nv, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-dzj8r, container kube-proxy
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-xv2je0-ha-control-plane-vjcjn, container etcd
STEP: Got error while iterating over activity logs for resource group capz-e2e-xv2je0-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000917819s
STEP: Dumping all the Cluster API resources in the "capz-e2e-xv2je0" namespace
STEP: Deleting all clusters in the capz-e2e-xv2je0 namespace
STEP: Deleting cluster capz-e2e-xv2je0-ha
INFO: Waiting for the Cluster capz-e2e-xv2je0/capz-e2e-xv2je0-ha to be deleted
STEP: Waiting for cluster capz-e2e-xv2je0-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-xv2je0-ha-control-plane-j54qq, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-xv2je0-ha-control-plane-vjcjn, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-xv2je0-ha-control-plane-j54qq, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-xv2je0-ha-control-plane-j54qq, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-z4l2k, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-ssptv, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-xv2je0-ha-control-plane-j54qq, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-xv2je0-ha-control-plane-vjcjn, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-jtqxf, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-x2zdv, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-xv2je0-ha-control-plane-vjcjn, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-xv2je0-ha-control-plane-vjcjn, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-v9j4k, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-qhwch, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-crq7q, container calico-kube-controllers: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-xv2je0
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 worker nodes" ran for 36m28s on Ginkgo node 2 of 3

... skipping 8 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454

INFO: "with a single control plane node and 1 node" started at Sat, 14 May 2022 20:26:54 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-coslcd" for hosting the cluster
May 14 20:26:54.045: INFO: starting to create namespace for hosting the "capz-e2e-coslcd" test spec
2022/05/14 20:26:54 failed trying to get namespace (capz-e2e-coslcd):namespaces "capz-e2e-coslcd" not found
INFO: Creating namespace capz-e2e-coslcd
INFO: Creating event watcher for namespace "capz-e2e-coslcd"
May 14 20:26:54.085: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-coslcd-aks
INFO: Creating the workload cluster with name "capz-e2e-coslcd-aks" using the "aks-multi-tenancy" template (Kubernetes v1.22.6, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 34 lines ...
STEP: Dumping logs from the "capz-e2e-coslcd-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-coslcd/capz-e2e-coslcd-aks logs
May 14 20:34:42.180: INFO: INFO: Collecting logs for node aks-agentpool1-28365680-vmss000000 in cluster capz-e2e-coslcd-aks in namespace capz-e2e-coslcd

May 14 20:36:52.225: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool0, cluster capz-e2e-coslcd/capz-e2e-coslcd-aks: [dialing public load balancer at capz-e2e-coslcd-aks-25a44157.hcp.eastus2.azmk8s.io: dial tcp 20.96.53.91:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
May 14 20:36:52.766: INFO: INFO: Collecting logs for node aks-agentpool1-28365680-vmss000000 in cluster capz-e2e-coslcd-aks in namespace capz-e2e-coslcd

May 14 20:39:03.296: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool1, cluster capz-e2e-coslcd/capz-e2e-coslcd-aks: [dialing public load balancer at capz-e2e-coslcd-aks-25a44157.hcp.eastus2.azmk8s.io: dial tcp 20.96.53.91:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
STEP: Dumping workload cluster capz-e2e-coslcd/capz-e2e-coslcd-aks kube-system pod logs
STEP: Fetching kube-system pod logs took 428.799282ms
STEP: Dumping workload cluster capz-e2e-coslcd/capz-e2e-coslcd-aks Azure activity log
STEP: Creating log watcher for controller kube-system/azure-ip-masq-agent-whqkr, container azure-ip-masq-agent
STEP: Creating log watcher for controller kube-system/csi-azurefile-node-zxnjh, container node-driver-registrar
STEP: Creating log watcher for controller kube-system/csi-azurefile-node-zxnjh, container azurefile
... skipping 42 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:141

INFO: "Creates a public management cluster in the same vnet" started at Sat, 14 May 2022 19:50:25 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-wivrd7" for hosting the cluster
May 14 19:50:25.910: INFO: starting to create namespace for hosting the "capz-e2e-wivrd7" test spec
2022/05/14 19:50:25 failed trying to get namespace (capz-e2e-wivrd7):namespaces "capz-e2e-wivrd7" not found
INFO: Creating namespace capz-e2e-wivrd7
INFO: Creating event watcher for namespace "capz-e2e-wivrd7"
May 14 19:50:25.955: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-wivrd7-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-rwspm, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-wivrd7-public-custom-vnet-control-plane-4dlzh, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-wivrd7-public-custom-vnet-control-plane-4dlzh, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-5hnqv, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-7ztlf, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-54xvw, container coredns
STEP: Got error while iterating over activity logs for resource group capz-e2e-wivrd7-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000872575s
STEP: Dumping all the Cluster API resources in the "capz-e2e-wivrd7" namespace
STEP: Deleting all clusters in the capz-e2e-wivrd7 namespace
STEP: Deleting cluster capz-e2e-wivrd7-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-wivrd7/capz-e2e-wivrd7-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-wivrd7-public-custom-vnet to be deleted
W0514 20:39:06.324714   24160 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I0514 20:39:37.864307   24160 trace.go:205] Trace[784646361]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (14-May-2022 20:39:07.863) (total time: 30001ms):
Trace[784646361]: [30.001217253s] [30.001217253s] END
E0514 20:39:37.864397   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp 20.22.33.163:6443: i/o timeout
I0514 20:40:10.920954   24160 trace.go:205] Trace[507218320]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (14-May-2022 20:39:40.919) (total time: 30001ms):
Trace[507218320]: [30.00102254s] [30.00102254s] END
E0514 20:40:10.921023   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp 20.22.33.163:6443: i/o timeout
I0514 20:40:45.366172   24160 trace.go:205] Trace[1550106395]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (14-May-2022 20:40:15.364) (total time: 30001ms):
Trace[1550106395]: [30.001589403s] [30.001589403s] END
E0514 20:40:45.366260   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp 20.22.33.163:6443: i/o timeout
I0514 20:41:22.396766   24160 trace.go:205] Trace[589534845]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (14-May-2022 20:40:52.395) (total time: 30001ms):
Trace[589534845]: [30.001150639s] [30.001150639s] END
E0514 20:41:22.396843   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp 20.22.33.163:6443: i/o timeout
I0514 20:42:17.758462   24160 trace.go:205] Trace[185374530]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (14-May-2022 20:41:47.757) (total time: 30001ms):
Trace[185374530]: [30.001240442s] [30.001240442s] END
E0514 20:42:17.758528   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp 20.22.33.163:6443: i/o timeout
I0514 20:43:18.285059   24160 trace.go:205] Trace[1106792788]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (14-May-2022 20:42:48.283) (total time: 30001ms):
Trace[1106792788]: [30.001614295s] [30.001614295s] END
E0514 20:43:18.285129   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp 20.22.33.163:6443: i/o timeout
E0514 20:44:05.074045   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-wivrd7
STEP: Running additional cleanup for the "create-workload-cluster" test spec
May 14 20:44:28.695: INFO: deleting an existing virtual network "custom-vnet"
May 14 20:44:39.269: INFO: deleting an existing route table "node-routetable"
May 14 20:44:41.576: INFO: deleting an existing network security group "node-nsg"
E0514 20:44:48.514842   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
May 14 20:44:51.896: INFO: deleting an existing network security group "control-plane-nsg"
May 14 20:45:02.361: INFO: verifying the existing resource group "capz-e2e-wivrd7-public-custom-vnet" is empty
May 14 20:45:02.430: INFO: deleting the existing resource group "capz-e2e-wivrd7-public-custom-vnet"
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E0514 20:45:43.449215   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 55m35s on Ginkgo node 1 of 3


• [SLOW TEST:3335.416 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a 1 control plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:419

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Sat, 14 May 2022 20:25:52 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-22ppl5" for hosting the cluster
May 14 20:25:52.790: INFO: starting to create namespace for hosting the "capz-e2e-22ppl5" test spec
2022/05/14 20:25:52 failed trying to get namespace (capz-e2e-22ppl5):namespaces "capz-e2e-22ppl5" not found
INFO: Creating namespace capz-e2e-22ppl5
INFO: Creating event watcher for namespace "capz-e2e-22ppl5"
May 14 20:25:52.827: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-22ppl5-oot
INFO: Creating the workload cluster with name "capz-e2e-22ppl5-oot" using the "external-cloud-provider" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
  with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543

INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Sat, 14 May 2022 20:46:01 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-lv43jj" for hosting the cluster
May 14 20:46:01.330: INFO: starting to create namespace for hosting the "capz-e2e-lv43jj" test spec
2022/05/14 20:46:01 failed trying to get namespace (capz-e2e-lv43jj):namespaces "capz-e2e-lv43jj" not found
INFO: Creating namespace capz-e2e-lv43jj
INFO: Creating event watcher for namespace "capz-e2e-lv43jj"
May 14 20:46:01.367: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-lv43jj-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-lv43jj-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 12 lines ...
kubeadmconfig.bootstrap.cluster.x-k8s.io/capz-e2e-lv43jj-win-vmss-mp-win created
clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-lv43jj-win-vmss-flannel created
configmap/cni-capz-e2e-lv43jj-win-vmss-flannel created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E0514 20:46:24.391353   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by capz-e2e-lv43jj/capz-e2e-lv43jj-win-vmss-control-plane to be provisioned
STEP: Waiting for one control plane node to exist
E0514 20:47:04.225836   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0514 20:47:45.391781   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0514 20:48:29.446384   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane capz-e2e-lv43jj/capz-e2e-lv43jj-win-vmss-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
E0514 20:49:12.338268   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for the machine deployments to be provisioned
INFO: Waiting for the machine pools to be provisioned
STEP: Waiting for the machine pool workload nodes to exist
E0514 20:49:42.906823   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0514 20:50:33.521299   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Waiting for the machine pool workload nodes to exist
E0514 20:51:14.242415   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0514 20:52:09.580423   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0514 20:52:55.077742   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/webwwvxih to be available
May 14 20:53:22.755: INFO: starting to wait for deployment to become available
E0514 20:53:26.682865   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
May 14 20:53:42.867: INFO: Deployment default/webwwvxih is now available, took 20.112213204s
STEP: creating an internal Load Balancer service
May 14 20:53:42.867: INFO: starting to create an internal Load Balancer service
STEP: waiting for service default/webwwvxih-ilb to be available
May 14 20:53:42.925: INFO: waiting for service default/webwwvxih-ilb to be available
E0514 20:54:05.644460   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
May 14 20:54:23.097: INFO: service default/webwwvxih-ilb is available, took 40.17117224s
STEP: connecting to the internal LB service from a curl pod
May 14 20:54:23.130: INFO: starting to create a curl to ilb job
STEP: waiting for job default/curl-to-ilb-jobhxlxw to be complete
May 14 20:54:23.179: INFO: waiting for job default/curl-to-ilb-jobhxlxw to be complete
May 14 20:54:33.258: INFO: job default/curl-to-ilb-jobhxlxw is complete, took 10.078597292s
STEP: deleting the ilb test resources
May 14 20:54:33.258: INFO: deleting the ilb service: webwwvxih-ilb
May 14 20:54:33.313: INFO: deleting the ilb job: curl-to-ilb-jobhxlxw
STEP: creating an external Load Balancer service
May 14 20:54:33.347: INFO: starting to create an external Load Balancer service
STEP: waiting for service default/webwwvxih-elb to be available
May 14 20:54:33.404: INFO: waiting for service default/webwwvxih-elb to be available
E0514 20:54:53.451263   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0514 20:55:49.721814   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
May 14 20:55:53.712: INFO: service default/webwwvxih-elb is available, took 1m20.307905951s
STEP: connecting to the external LB service from a curl pod
May 14 20:55:53.745: INFO: starting to create curl-to-elb job
STEP: waiting for job default/curl-to-elb-jobg09x9tibcac to be complete
May 14 20:55:53.781: INFO: waiting for job default/curl-to-elb-jobg09x9tibcac to be complete
May 14 20:56:03.847: INFO: job default/curl-to-elb-jobg09x9tibcac is complete, took 10.066077141s
STEP: connecting directly to the external LB service
May 14 20:56:03.847: INFO: starting attempts to connect directly to the external LB service
2022/05/14 20:56:03 [DEBUG] GET http://20.22.8.213
E0514 20:56:27.561815   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
2022/05/14 20:56:33 [ERR] GET http://20.22.8.213 request failed: Get "http://20.22.8.213": dial tcp 20.22.8.213:80: i/o timeout
2022/05/14 20:56:33 [DEBUG] GET http://20.22.8.213: retrying in 1s (4 left)
May 14 20:56:50.367: INFO: successfully connected to the external LB service
STEP: deleting the test resources
May 14 20:56:50.367: INFO: starting to delete external LB service webwwvxih-elb
May 14 20:56:50.428: INFO: starting to delete deployment webwwvxih
May 14 20:56:50.463: INFO: starting to delete job curl-to-elb-jobg09x9tibcac
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web-windowsd9if4y to be available
May 14 20:56:50.591: INFO: starting to wait for deployment to become available
E0514 20:57:21.966751   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0514 20:58:05.635904   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
May 14 20:58:30.995: INFO: Deployment default/web-windowsd9if4y is now available, took 1m40.404717189s
STEP: creating an internal Load Balancer service
May 14 20:58:30.995: INFO: starting to create an internal Load Balancer service
STEP: waiting for service default/web-windowsd9if4y-ilb to be available
May 14 20:58:31.045: INFO: waiting for service default/web-windowsd9if4y-ilb to be available
E0514 20:58:53.345852   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
May 14 20:59:11.214: INFO: service default/web-windowsd9if4y-ilb is available, took 40.168668229s
STEP: connecting to the internal LB service from a curl pod
May 14 20:59:11.246: INFO: starting to create a curl to ilb job
STEP: waiting for job default/curl-to-ilb-jobhsr4h to be complete
May 14 20:59:11.282: INFO: waiting for job default/curl-to-ilb-jobhsr4h to be complete
May 14 20:59:21.358: INFO: job default/curl-to-ilb-jobhsr4h is complete, took 10.0763273s
STEP: deleting the ilb test resources
May 14 20:59:21.358: INFO: deleting the ilb service: web-windowsd9if4y-ilb
May 14 20:59:21.414: INFO: deleting the ilb job: curl-to-ilb-jobhsr4h
STEP: creating an external Load Balancer service
May 14 20:59:21.448: INFO: starting to create an external Load Balancer service
STEP: waiting for service default/web-windowsd9if4y-elb to be available
May 14 20:59:21.506: INFO: waiting for service default/web-windowsd9if4y-elb to be available
E0514 20:59:29.024230   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0514 20:59:59.214066   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
May 14 21:00:31.775: INFO: service default/web-windowsd9if4y-elb is available, took 1m10.26946725s
STEP: connecting to the external LB service from a curl pod
May 14 21:00:31.808: INFO: starting to create curl-to-elb job
STEP: waiting for job default/curl-to-elb-jobm46eepxtph2 to be complete
May 14 21:00:31.844: INFO: waiting for job default/curl-to-elb-jobm46eepxtph2 to be complete
May 14 21:00:41.910: INFO: job default/curl-to-elb-jobm46eepxtph2 is complete, took 10.066454741s
STEP: connecting directly to the external LB service
May 14 21:00:41.910: INFO: starting attempts to connect directly to the external LB service
2022/05/14 21:00:41 [DEBUG] GET http://20.22.9.209
E0514 21:00:57.852831   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
2022/05/14 21:01:11 [ERR] GET http://20.22.9.209 request failed: Get "http://20.22.9.209": dial tcp 20.22.9.209:80: i/o timeout
2022/05/14 21:01:11 [DEBUG] GET http://20.22.9.209: retrying in 1s (4 left)
May 14 21:01:13.993: INFO: successfully connected to the external LB service
STEP: deleting the test resources
May 14 21:01:13.993: INFO: starting to delete external LB service web-windowsd9if4y-elb
May 14 21:01:14.054: INFO: starting to delete deployment web-windowsd9if4y
May 14 21:01:14.088: INFO: starting to delete job curl-to-elb-jobm46eepxtph2
... skipping 6 lines ...
May 14 21:01:25.583: INFO: INFO: Collecting logs for node capz-e2e-lv43jj-win-vmss-mp-0000000 in cluster capz-e2e-lv43jj-win-vmss in namespace capz-e2e-lv43jj

May 14 21:01:37.137: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-e2e-lv43jj-win-vmss-mp-0

May 14 21:01:37.479: INFO: INFO: Collecting logs for node win-p-win000000 in cluster capz-e2e-lv43jj-win-vmss in namespace capz-e2e-lv43jj

E0514 21:01:50.865743   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0514 21:02:31.804301   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
May 14 21:02:34.938: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set win-p-win

STEP: Dumping workload cluster capz-e2e-lv43jj/capz-e2e-lv43jj-win-vmss kube-system pod logs
STEP: Fetching kube-system pod logs took 359.586655ms
STEP: Dumping workload cluster capz-e2e-lv43jj/capz-e2e-lv43jj-win-vmss Azure activity log
STEP: Creating log watcher for controller kube-system/kube-proxy-g8h4m, container kube-proxy
... skipping 11 lines ...
STEP: Fetching activity logs took 983.744388ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-lv43jj" namespace
STEP: Deleting all clusters in the capz-e2e-lv43jj namespace
STEP: Deleting cluster capz-e2e-lv43jj-win-vmss
INFO: Waiting for the Cluster capz-e2e-lv43jj/capz-e2e-lv43jj-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-lv43jj-win-vmss to be deleted
E0514 21:03:21.125455   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0514 21:03:51.751375   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0514 21:04:34.072735   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-4lkvb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-x7vrd, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-g8h4m, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-q4tcz, container kube-flannel: http2: client connection lost
E0514 21:05:15.230251   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 14 lines ...
E0514 21:15:38.761283   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
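For context on the repeating watch errors above: they come from the suite's namespace event watcher, whose client-go reflector keeps re-listing Events from the workload cluster's API server after the cluster, and with it its DNS name, has been deleted. Below is a minimal, illustrative sketch of such a list-watch loop (the kubeconfig path is a placeholder, and this is not the suite's actual watcher code); a reflector built this way emits exactly this kind of "Failed to watch *v1.Event" line once the host stops resolving.

package main

import (
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path for the (now-deleted) workload cluster.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/workload-kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// List/watch Events in the test namespace, as the suite's event watcher does.
	lw := cache.NewListWatchFromClient(
		clientset.CoreV1().RESTClient(), "events", "capz-e2e-wivrd7", fields.Everything())

	// The reflector retries its list/watch indefinitely; every failed attempt is
	// logged by klog from reflector.go:138, which is the repeating E0514 line above.
	store := cache.NewStore(cache.MetaNamespaceKeyFunc)
	r := cache.NewReflector(lw, &v1.Event{}, store, 30*time.Second)

	stop := make(chan struct{})
	r.Run(stop) // blocks; logs "Failed to watch *v1.Event" each time DNS resolution fails
}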
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-lv43jj
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E0514 21:16:31.474321   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0514 21:17:09.044481   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 31m18s on Ginkgo node 1 of 3


• [SLOW TEST:1877.732 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Sat, 14 May 2022 20:44:57 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-9pbmbj" for hosting the cluster
May 14 20:44:57.876: INFO: starting to create namespace for hosting the "capz-e2e-9pbmbj" test spec
2022/05/14 20:44:57 failed trying to get namespace (capz-e2e-9pbmbj): namespaces "capz-e2e-9pbmbj" not found
INFO: Creating namespace capz-e2e-9pbmbj
INFO: Creating event watcher for namespace "capz-e2e-9pbmbj"
May 14 20:44:57.917: INFO: Creating cluster identity secret
INFO: Cluster name is capz-e2e-9pbmbj-win-ha
INFO: Creating the workload cluster with name "capz-e2e-9pbmbj-win-ha" using the "windows" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 151 lines ...
STEP: Fetching activity logs took 1.008229121s
STEP: Dumping all the Cluster API resources in the "capz-e2e-9pbmbj" namespace
STEP: Deleting all clusters in the capz-e2e-9pbmbj namespace
STEP: Deleting cluster capz-e2e-9pbmbj-win-ha
INFO: Waiting for the Cluster capz-e2e-9pbmbj/capz-e2e-9pbmbj-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-9pbmbj-win-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-6mxxh, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-h47mw, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-9pbmbj-win-ha-control-plane-xgj76, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-9pbmbj-win-ha-control-plane-sfb82, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-pg9j4, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-9pbmbj-win-ha-control-plane-sfb82, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-9pbmbj-win-ha-control-plane-xgj76, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-9pbmbj-win-ha-control-plane-sfb82, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-9pbmbj-win-ha-control-plane-xgj76, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-6brcl, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-9pbmbj-win-ha-control-plane-xgj76, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-n8ltw, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-6rrrw, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-9pbmbj-win-ha-control-plane-sfb82, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-zjf86, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-np84z, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-9pbmbj
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 39m51s on Ginkgo node 2 of 3

... skipping 3 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating a Windows Enabled cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:494
    With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496
------------------------------
E0514 21:17:46.698774   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 7 lines ...
E0514 21:24:41.618201   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Workload cluster creation Creating a cluster that uses the external cloud provider [It] with a 1 control plane nodes and 2 worker nodes 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.8-0.20220215165403-0234afe87ffe/framework/controlplane_helpers.go:145
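For context, the wait that failed at controlplane_helpers.go:145 is a Gomega Eventually-style poll in the cluster-api test framework: it re-checks a boolean condition until a timeout and, if the condition never turns true, fails with a "Timed out ... Expected <bool>: false to be true" message like the one recorded for this spec. A minimal sketch of that pattern follows, with a hypothetical stand-in condition rather than the framework's real control-plane check.

package e2e

import (
	"testing"
	"time"

	. "github.com/onsi/gomega"
)

// controlPlaneReady is a hypothetical stand-in; the real helper checks that the
// expected number of control-plane machines have come up and joined the cluster.
func controlPlaneReady() bool {
	return false // simulates a control plane that never becomes ready
}

func TestWaitForControlPlane(t *testing.T) {
	g := NewWithT(t)
	// Poll every 10s for up to 20 minutes (both intervals are illustrative).
	// On timeout this fails with "Timed out after <N>s. Expected <bool>: false
	// to be true", the same shape of failure summarized above.
	g.Eventually(controlPlaneReady, 20*time.Minute, 10*time.Second).Should(BeTrue())
}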

Ran 8 of 22 Specs in 5785.731 seconds
FAIL! -- 7 Passed | 1 Failed | 0 Pending | 14 Skipped


Ginkgo ran 1 suite in 1h37m49.764055601s
Test Suite Failed
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...