Result: FAILURE
Tests: 1 failed / 8 succeeded
Started: 2021-11-11 18:31
Elapsed: 2h12m
Revision: release-0.5

Test Failures


capz-e2e Workload cluster creation Creating a Windows enabled VMSS cluster with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node 1h0m

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\sWindows\senabled\sVMSS\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\san\sLinux\sAzureMachinePool\swith\s1\snodes\sand\sWindows\sAzureMachinePool\swith\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:115
Timed out after 1800.001s.
Expected
    <bool>: false
to be true
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.3/framework/cluster_helpers.go:165
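This "Expected <bool>: false to be true" message is the generic failure Gomega prints when a polled condition never becomes true before its timeout: the framework helper at cluster_helpers.go:165 waits on a boolean condition with Eventually, and after 1800s (30 minutes) it was still false. A minimal sketch of that pattern follows; the helper name, the specific condition (waiting for the Provisioned phase), and the intervals are illustrative assumptions, not the framework's exact code.

package e2e

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	"k8s.io/apimachinery/pkg/types"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1alpha4"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// waitForClusterProvisioned is an illustrative stand-in for the framework helper:
// it polls a boolean condition until it is true or the timeout expires. If the
// condition is still false at the deadline, Gomega fails with
// "Timed out after ...s. Expected <bool>: false to be true".
func waitForClusterProvisioned(ctx context.Context, c client.Client, key types.NamespacedName) {
	Eventually(func() bool {
		cluster := &clusterv1.Cluster{}
		if err := c.Get(ctx, key, cluster); err != nil {
			return false
		}
		// Condition chosen for illustration: the Cluster has reached "Provisioned".
		return cluster.Status.Phase == string(clusterv1.ClusterPhaseProvisioned)
	}, 30*time.Minute, 10*time.Second).Should(BeTrue())
}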
				
Full stdout/stderr is in junit.e2e_suite.3.xml



Passed tests: 8 (not shown)

Skipped tests: 13 (not shown)

Error lines from build-log.txt

... skipping 433 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:269

INFO: "With ipv6 worker node" started at Thu, 11 Nov 2021 18:38:15 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-2q5c71" for hosting the cluster
Nov 11 18:38:15.839: INFO: starting to create namespace for hosting the "capz-e2e-2q5c71" test spec
2021/11/11 18:38:15 failed trying to get namespace (capz-e2e-2q5c71):namespaces "capz-e2e-2q5c71" not found
INFO: Creating namespace capz-e2e-2q5c71
INFO: Creating event watcher for namespace "capz-e2e-2q5c71"
Nov 11 18:38:15.916: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-2q5c71-ipv6
INFO: Creating the workload cluster with name "capz-e2e-2q5c71-ipv6" using the "ipv6" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
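The "%!(EXTRA string=cluster-identity-secret)" fragment in the excerpt above is not log corruption: it is Go's fmt package reporting that a format call received more arguments than its format string had verbs. A minimal sketch of how that output arises (the exact call in the test code is an assumption, not a quote from it):

package main

import "fmt"

func main() {
	// One extra argument and no matching verb: fmt appends
	// "%!(EXTRA string=...)" to the formatted output, which is the
	// fragment that shows up in the build log.
	fmt.Printf("INFO: Creating cluster identity secret\n", "cluster-identity-secret")
	// Prints:
	// INFO: Creating cluster identity secret
	// %!(EXTRA string=cluster-identity-secret)
}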
STEP: Fetching activity logs took 622.689866ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-2q5c71" namespace
STEP: Deleting all clusters in the capz-e2e-2q5c71 namespace
STEP: Deleting cluster capz-e2e-2q5c71-ipv6
INFO: Waiting for the Cluster capz-e2e-2q5c71/capz-e2e-2q5c71-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-2q5c71-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-2q5c71-ipv6-control-plane-lj76q, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-2q5c71-ipv6-control-plane-lj76q, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-6xbzz, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-ff96l, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-6b6jp, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-b7kzd, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-2q5c71-ipv6-control-plane-nncnt, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-2q5c71-ipv6-control-plane-nncnt, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-2q5c71-ipv6-control-plane-lj76q, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-2q5c71-ipv6-control-plane-5tr85, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-szzfz, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-2q5c71-ipv6-control-plane-lj76q, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-4hpwf, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-2q5c71-ipv6-control-plane-5tr85, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vtm8c, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-p9xc8, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-2q5c71-ipv6-control-plane-nncnt, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-2q5c71-ipv6-control-plane-5tr85, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-2q5c71-ipv6-control-plane-5tr85, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-2q5c71-ipv6-control-plane-nncnt, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-ww79d, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-cwqtg, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-b8jgk, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-2q5c71
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 15m58s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a single control plane node and an AzureMachinePool with 2 nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:315

INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" started at Thu, 11 Nov 2021 18:54:14 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-dcyg93" for hosting the cluster
Nov 11 18:54:14.002: INFO: starting to create namespace for hosting the "capz-e2e-dcyg93" test spec
2021/11/11 18:54:14 failed trying to get namespace (capz-e2e-dcyg93):namespaces "capz-e2e-dcyg93" not found
INFO: Creating namespace capz-e2e-dcyg93
INFO: Creating event watcher for namespace "capz-e2e-dcyg93"
Nov 11 18:54:14.042: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-dcyg93-vmss
INFO: Creating the workload cluster with name "capz-e2e-dcyg93-vmss" using the "machine-pool" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 106 lines ...
STEP: Fetching activity logs took 667.66151ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-dcyg93" namespace
STEP: Deleting all clusters in the capz-e2e-dcyg93 namespace
STEP: Deleting cluster capz-e2e-dcyg93-vmss
INFO: Waiting for the Cluster capz-e2e-dcyg93/capz-e2e-dcyg93-vmss to be deleted
STEP: Waiting for cluster capz-e2e-dcyg93-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-dcyg93-vmss-control-plane-btcl8, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-dcyg93-vmss-control-plane-btcl8, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-qzlqd, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-wmm4x, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-q9r4s, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-x52rm, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-dcyg93-vmss-control-plane-btcl8, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-4cg76, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-dcyg93-vmss-control-plane-btcl8, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-8tp82, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-br6kb, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-dcyg93
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" ran for 17m24s on Ginkgo node 2 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:203

INFO: "With 3 control-plane nodes and 2 worker nodes" started at Thu, 11 Nov 2021 18:38:15 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-ghuy3i" for hosting the cluster
Nov 11 18:38:15.838: INFO: starting to create namespace for hosting the "capz-e2e-ghuy3i" test spec
2021/11/11 18:38:15 failed trying to get namespace (capz-e2e-ghuy3i):namespaces "capz-e2e-ghuy3i" not found
INFO: Creating namespace capz-e2e-ghuy3i
INFO: Creating event watcher for namespace "capz-e2e-ghuy3i"
Nov 11 18:38:15.905: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-ghuy3i-ha
INFO: Creating the workload cluster with name "capz-e2e-ghuy3i-ha" using the "(default)" template (Kubernetes v1.22.1, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 57 lines ...
STEP: waiting for job default/curl-to-elb-jobn5chl9qtwv9 to be complete
Nov 11 18:49:48.924: INFO: waiting for job default/curl-to-elb-jobn5chl9qtwv9 to be complete
Nov 11 18:49:58.972: INFO: job default/curl-to-elb-jobn5chl9qtwv9 is complete, took 10.048391849s
STEP: connecting directly to the external LB service
Nov 11 18:49:58.972: INFO: starting attempts to connect directly to the external LB service
2021/11/11 18:49:58 [DEBUG] GET http://23.96.245.50
2021/11/11 18:50:28 [ERR] GET http://23.96.245.50 request failed: Get "http://23.96.245.50": dial tcp 23.96.245.50:80: i/o timeout
2021/11/11 18:50:28 [DEBUG] GET http://23.96.245.50: retrying in 1s (4 left)
2021/11/11 18:50:59 [ERR] GET http://23.96.245.50 request failed: Get "http://23.96.245.50": dial tcp 23.96.245.50:80: i/o timeout
2021/11/11 18:50:59 [DEBUG] GET http://23.96.245.50: retrying in 2s (3 left)
Nov 11 18:51:02.004: INFO: successfully connected to the external LB service
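The "[DEBUG] GET ..." / "[ERR] GET ... retrying in 1s (4 left)" lines above come from a retrying HTTP client probing the external load balancer until it answers. A minimal sketch of such a probe, assuming a retry client along the lines of hashicorp/go-retryablehttp (the library choice, retry settings, and target URL below are illustrative assumptions, not taken from the test code):

package main

import (
	"log"
	"time"

	retryablehttp "github.com/hashicorp/go-retryablehttp"
)

func main() {
	// Each failed attempt shows up in the log as "[ERR] GET <url> request failed: ..."
	// followed by "[DEBUG] GET <url>: retrying in Ns (M left)", until the LB
	// starts answering or the retries are exhausted.
	client := retryablehttp.NewClient()
	client.RetryMax = 5
	client.RetryWaitMin = 1 * time.Second

	resp, err := client.Get("http://23.96.245.50") // external LB IP from the log
	if err != nil {
		log.Fatalf("all retries failed: %v", err)
	}
	defer resp.Body.Close()
	log.Printf("successfully connected: %s", resp.Status)
}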
STEP: deleting the test resources
Nov 11 18:51:02.004: INFO: starting to delete external LB service webs5rk63-elb
Nov 11 18:51:02.070: INFO: starting to delete deployment webs5rk63
Nov 11 18:51:02.102: INFO: starting to delete job curl-to-elb-jobn5chl9qtwv9
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Nov 11 18:51:02.166: INFO: starting to create dev deployment namespace
2021/11/11 18:51:02 failed trying to get namespace (development):namespaces "development" not found
2021/11/11 18:51:02 namespace development does not exist, creating...
STEP: Creating production namespace
Nov 11 18:51:02.239: INFO: starting to create prod deployment namespace
2021/11/11 18:51:02 failed trying to get namespace (production):namespaces "production" not found
2021/11/11 18:51:02 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
Nov 11 18:51:02.282: INFO: starting to create frontend-prod deployments
Nov 11 18:51:02.312: INFO: starting to create frontend-dev deployments
Nov 11 18:51:02.349: INFO: starting to create backend deployments
Nov 11 18:51:02.388: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Nov 11 18:51:24.583: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.96.132 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 11 18:53:34.851: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
STEP: Applying a network policy to deny egress access in development namespace
Nov 11 18:53:34.960: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.96.132 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.96.132 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 11 18:57:56.996: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Nov 11 18:57:57.119: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.54.131 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 11 19:00:09.078: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
Nov 11 19:00:09.202: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.54.129 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.54.131 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 11 19:04:31.222: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Nov 11 19:04:31.374: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.96.132 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 11 19:06:41.284: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
Nov 11 19:06:41.396: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.96.132 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: Dumping logs from the "capz-e2e-ghuy3i-ha" workload cluster
STEP: Dumping workload cluster capz-e2e-ghuy3i/capz-e2e-ghuy3i-ha logs
Nov 11 19:08:52.710: INFO: INFO: Collecting logs for node capz-e2e-ghuy3i-ha-control-plane-d8wtl in cluster capz-e2e-ghuy3i-ha in namespace capz-e2e-ghuy3i

Nov 11 19:09:03.173: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-ghuy3i-ha-control-plane-d8wtl
... skipping 39 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-j7mcb, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-s8xtc, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-75gc8, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-ghuy3i-ha-control-plane-4tb4x, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-vgkn8, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-thr8s, container calico-node
STEP: Got error while iterating over activity logs for resource group capz-e2e-ghuy3i-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001012604s
STEP: Dumping all the Cluster API resources in the "capz-e2e-ghuy3i" namespace
STEP: Deleting all clusters in the capz-e2e-ghuy3i namespace
STEP: Deleting cluster capz-e2e-ghuy3i-ha
INFO: Waiting for the Cluster capz-e2e-ghuy3i/capz-e2e-ghuy3i-ha to be deleted
STEP: Waiting for cluster capz-e2e-ghuy3i-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-ghuy3i-ha-control-plane-8kjw8, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-rjcqv, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-ghuy3i-ha-control-plane-4tb4x, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-jqpz6, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-ghuy3i-ha-control-plane-4tb4x, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-lscqx, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-75gc8, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-j7mcb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-xkmlf, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-ghuy3i-ha-control-plane-4tb4x, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-ghuy3i-ha-control-plane-8kjw8, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-ghuy3i-ha-control-plane-d8wtl, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-w2gj5, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-ghuy3i-ha-control-plane-8kjw8, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-ghuy3i-ha-control-plane-d8wtl, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-rtxqz, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vgkn8, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-97glb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-b8znv, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-ghuy3i-ha-control-plane-d8wtl, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-s8xtc, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-ghuy3i-ha-control-plane-4tb4x, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-thr8s, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-ghuy3i-ha-control-plane-8kjw8, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-ghuy3i-ha-control-plane-d8wtl, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-ghuy3i
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 worker nodes" ran for 41m8s on Ginkgo node 3 of 3

... skipping 8 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:141

INFO: "Creates a public management cluster in the same vnet" started at Thu, 11 Nov 2021 18:38:15 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-cicsfw" for hosting the cluster
Nov 11 18:38:15.838: INFO: starting to create namespace for hosting the "capz-e2e-cicsfw" test spec
2021/11/11 18:38:15 failed trying to get namespace (capz-e2e-cicsfw):namespaces "capz-e2e-cicsfw" not found
INFO: Creating namespace capz-e2e-cicsfw
INFO: Creating event watcher for namespace "capz-e2e-cicsfw"
Nov 11 18:38:15.892: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-cicsfw-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-qcmp7, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-cicsfw-public-custom-vnet-control-plane-8sctp, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-bzw7q, container calico-node
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-cicsfw-public-custom-vnet-control-plane-8sctp, container etcd
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-cicsfw-public-custom-vnet-control-plane-8sctp, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-5khxb, container kube-proxy
STEP: Got error while iterating over activity logs for resource group capz-e2e-cicsfw-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001242147s
STEP: Dumping all the Cluster API resources in the "capz-e2e-cicsfw" namespace
STEP: Deleting all clusters in the capz-e2e-cicsfw namespace
STEP: Deleting cluster capz-e2e-cicsfw-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-cicsfw/capz-e2e-cicsfw-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-cicsfw-public-custom-vnet to be deleted
W1111 19:25:08.246310   24174 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I1111 19:25:39.069447   24174 trace.go:205] Trace[1051914922]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (11-Nov-2021 19:25:09.067) (total time: 30001ms):
Trace[1051914922]: [30.001472474s] [30.001472474s] END
E1111 19:25:39.069511   24174 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-cicsfw/events?resourceVersion=8876": dial tcp 52.159.82.227:6443: i/o timeout
I1111 19:26:11.991134   24174 trace.go:205] Trace[293740829]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (11-Nov-2021 19:25:41.990) (total time: 30000ms):
Trace[293740829]: [30.000566528s] [30.000566528s] END
E1111 19:26:11.991201   24174 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-cicsfw/events?resourceVersion=8876": dial tcp 52.159.82.227:6443: i/o timeout
I1111 19:26:47.609222   24174 trace.go:205] Trace[488695635]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (11-Nov-2021 19:26:17.608) (total time: 30000ms):
Trace[488695635]: [30.000666028s] [30.000666028s] END
E1111 19:26:47.609281   24174 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-cicsfw/events?resourceVersion=8876": dial tcp 52.159.82.227:6443: i/o timeout
I1111 19:27:24.801490   24174 trace.go:205] Trace[172132034]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (11-Nov-2021 19:26:54.799) (total time: 30001ms):
Trace[172132034]: [30.001483454s] [30.001483454s] END
E1111 19:27:24.801550   24174 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-cicsfw/events?resourceVersion=8876": dial tcp 52.159.82.227:6443: i/o timeout
I1111 19:28:10.825184   24174 trace.go:205] Trace[2108129464]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (11-Nov-2021 19:27:40.824) (total time: 30000ms):
Trace[2108129464]: [30.000818349s] [30.000818349s] END
E1111 19:28:10.825250   24174 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-cicsfw/events?resourceVersion=8876": dial tcp 52.159.82.227:6443: i/o timeout
I1111 19:29:26.509491   24174 trace.go:205] Trace[1135115496]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (11-Nov-2021 19:28:56.508) (total time: 30001ms):
Trace[1135115496]: [30.001317619s] [30.001317619s] END
E1111 19:29:26.509553   24174 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-cicsfw/events?resourceVersion=8876": dial tcp 52.159.82.227:6443: i/o timeout
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-cicsfw
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Nov 11 19:30:21.163: INFO: deleting an existing virtual network "custom-vnet"
I1111 19:30:26.639340   24174 trace.go:205] Trace[287545792]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (11-Nov-2021 19:29:56.638) (total time: 30001ms):
Trace[287545792]: [30.001069384s] [30.001069384s] END
E1111 19:30:26.639396   24174 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-cicsfw/events?resourceVersion=8876": dial tcp 52.159.82.227:6443: i/o timeout
Nov 11 19:30:31.462: INFO: deleting an existing route table "node-routetable"
Nov 11 19:30:41.646: INFO: deleting an existing network security group "node-nsg"
Nov 11 19:30:51.857: INFO: deleting an existing network security group "control-plane-nsg"
Nov 11 19:31:02.014: INFO: verifying the existing resource group "capz-e2e-cicsfw-public-custom-vnet" is empty
Nov 11 19:31:02.047: INFO: deleting the existing resource group "capz-e2e-cicsfw-public-custom-vnet"
E1111 19:31:16.101062   24174 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-cicsfw/events?resourceVersion=8876": dial tcp: lookup capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1111 19:31:51.949904   24174 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-cicsfw/events?resourceVersion=8876": dial tcp: lookup capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 54m17s on Ginkgo node 1 of 3


• [SLOW TEST:3257.207 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:377

INFO: "with a single control plane node and 1 node" started at Thu, 11 Nov 2021 19:11:37 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-lonlpj" for hosting the cluster
Nov 11 19:11:37.800: INFO: starting to create namespace for hosting the "capz-e2e-lonlpj" test spec
2021/11/11 19:11:37 failed trying to get namespace (capz-e2e-lonlpj):namespaces "capz-e2e-lonlpj" not found
INFO: Creating namespace capz-e2e-lonlpj
INFO: Creating event watcher for namespace "capz-e2e-lonlpj"
Nov 11 19:11:37.839: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-lonlpj-gpu
INFO: Creating the workload cluster with name "capz-e2e-lonlpj-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 80 lines ...
  with a 1 control plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:419

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Thu, 11 Nov 2021 19:19:24 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-d52xrc" for hosting the cluster
Nov 11 19:19:24.080: INFO: starting to create namespace for hosting the "capz-e2e-d52xrc" test spec
2021/11/11 19:19:24 failed trying to get namespace (capz-e2e-d52xrc):namespaces "capz-e2e-d52xrc" not found
INFO: Creating namespace capz-e2e-d52xrc
INFO: Creating event watcher for namespace "capz-e2e-d52xrc"
Nov 11 19:19:24.120: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-d52xrc-oot
INFO: Creating the workload cluster with name "capz-e2e-d52xrc-oot" using the "external-cloud-provider" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 53 lines ...
STEP: waiting for job default/curl-to-elb-job6a0kj63yv83 to be complete
Nov 11 19:28:26.410: INFO: waiting for job default/curl-to-elb-job6a0kj63yv83 to be complete
Nov 11 19:28:36.449: INFO: job default/curl-to-elb-job6a0kj63yv83 is complete, took 10.03900059s
STEP: connecting directly to the external LB service
Nov 11 19:28:36.449: INFO: starting attempts to connect directly to the external LB service
2021/11/11 19:28:36 [DEBUG] GET http://65.52.205.54
2021/11/11 19:29:06 [ERR] GET http://65.52.205.54 request failed: Get "http://65.52.205.54": dial tcp 65.52.205.54:80: i/o timeout
2021/11/11 19:29:06 [DEBUG] GET http://65.52.205.54: retrying in 1s (4 left)
Nov 11 19:29:10.495: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 11 19:29:10.495: INFO: starting to delete external LB service webqyelpj-elb
Nov 11 19:29:10.540: INFO: starting to delete deployment webqyelpj
Nov 11 19:29:10.570: INFO: starting to delete job curl-to-elb-job6a0kj63yv83
... skipping 34 lines ...
STEP: Fetching activity logs took 976.053778ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-d52xrc" namespace
STEP: Deleting all clusters in the capz-e2e-d52xrc namespace
STEP: Deleting cluster capz-e2e-d52xrc-oot
INFO: Waiting for the Cluster capz-e2e-d52xrc/capz-e2e-d52xrc-oot to be deleted
STEP: Waiting for cluster capz-e2e-d52xrc-oot to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-dxcpz, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-n2859, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-2rd5z, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-czntl, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-n89cq, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-9b4ch, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-d52xrc
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 23m28s on Ginkgo node 3 of 3

... skipping 10 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454

INFO: "with a single control plane node and 1 node" started at Thu, 11 Nov 2021 19:32:33 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-o4yttu" for hosting the cluster
Nov 11 19:32:33.048: INFO: starting to create namespace for hosting the "capz-e2e-o4yttu" test spec
2021/11/11 19:32:33 failed trying to get namespace (capz-e2e-o4yttu):namespaces "capz-e2e-o4yttu" not found
INFO: Creating namespace capz-e2e-o4yttu
INFO: Creating event watcher for namespace "capz-e2e-o4yttu"
Nov 11 19:32:33.075: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-o4yttu-aks
INFO: Creating the workload cluster with name "capz-e2e-o4yttu-aks" using the "aks-multi-tenancy" template (Kubernetes v1.19.13, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 7 lines ...
machinepool.cluster.x-k8s.io/agentpool1 created
azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1111 19:32:39.460992   24174 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-cicsfw/events?resourceVersion=8876": dial tcp: lookup capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1111 19:33:21.466089   24174 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-cicsfw/events?resourceVersion=8876": dial tcp: lookup capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1111 19:34:00.633587   24174 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-cicsfw/events?resourceVersion=8876": dial tcp: lookup capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1111 19:34:46.568910   24174 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-cicsfw/events?resourceVersion=8876": dial tcp: lookup capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1111 19:35:41.556345   24174 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-cicsfw/events?resourceVersion=8876": dial tcp: lookup capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1111 19:36:22.442921   24174 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-cicsfw/events?resourceVersion=8876": dial tcp: lookup capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1111 19:36:53.696096   24174 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-cicsfw/events?resourceVersion=8876": dial tcp: lookup capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
Nov 11 19:37:04.686: INFO: Waiting for the first control plane machine managed by capz-e2e-o4yttu/capz-e2e-o4yttu-aks to be provisioned
STEP: Waiting for atleast one control plane node to exist
INFO: Waiting for control plane to be ready
Nov 11 19:37:04.728: INFO: Waiting for the first control plane machine managed by capz-e2e-o4yttu/capz-e2e-o4yttu-aks to be provisioned
STEP: Waiting for all control plane nodes to exist
... skipping 13 lines ...
STEP: time sync OK for host aks-agentpool1-26009618-vmss000000
STEP: time sync OK for host aks-agentpool1-26009618-vmss000000
STEP: Dumping logs from the "capz-e2e-o4yttu-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-o4yttu/capz-e2e-o4yttu-aks logs
Nov 11 19:37:20.935: INFO: INFO: Collecting logs for node aks-agentpool1-26009618-vmss000000 in cluster capz-e2e-o4yttu-aks in namespace capz-e2e-o4yttu

E1111 19:37:27.712159   24174 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-cicsfw/events?resourceVersion=8876": dial tcp: lookup capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1111 19:38:25.636621   24174 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-cicsfw/events?resourceVersion=8876": dial tcp: lookup capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1111 19:39:08.972207   24174 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-cicsfw/events?resourceVersion=8876": dial tcp: lookup capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 11 19:39:31.684: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool0, cluster capz-e2e-o4yttu/capz-e2e-o4yttu-aks: [dialing public load balancer at capz-e2e-o4yttu-aks-41698228.hcp.northcentralus.azmk8s.io: dial tcp 52.162.3.198:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
Nov 11 19:39:32.125: INFO: INFO: Collecting logs for node aks-agentpool1-26009618-vmss000000 in cluster capz-e2e-o4yttu-aks in namespace capz-e2e-o4yttu

E1111 19:39:57.623034   24174 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-cicsfw/events?resourceVersion=8876": dial tcp: lookup capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1111 19:40:50.392530   24174 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-cicsfw/events?resourceVersion=8876": dial tcp: lookup capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1111 19:41:26.399803   24174 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-cicsfw/events?resourceVersion=8876": dial tcp: lookup capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 11 19:41:42.760: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool1, cluster capz-e2e-o4yttu/capz-e2e-o4yttu-aks: [dialing public load balancer at capz-e2e-o4yttu-aks-41698228.hcp.northcentralus.azmk8s.io: dial tcp 52.162.3.198:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
STEP: Dumping workload cluster capz-e2e-o4yttu/capz-e2e-o4yttu-aks kube-system pod logs
STEP: Fetching kube-system pod logs took 376.413894ms
STEP: Dumping workload cluster capz-e2e-o4yttu/capz-e2e-o4yttu-aks Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-vgbrp, container calico-node
STEP: Creating log watcher for controller kube-system/calico-typha-horizontal-autoscaler-599c7bb664-x4xq9, container autoscaler
STEP: Creating log watcher for controller kube-system/coredns-84d976c568-mm28f, container coredns
... skipping 8 lines ...
STEP: Fetching activity logs took 491.02343ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-o4yttu" namespace
STEP: Deleting all clusters in the capz-e2e-o4yttu namespace
STEP: Deleting cluster capz-e2e-o4yttu-aks
INFO: Waiting for the Cluster capz-e2e-o4yttu/capz-e2e-o4yttu-aks to be deleted
STEP: Waiting for cluster capz-e2e-o4yttu-aks to be deleted
E1111 19:42:24.192962   24174 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-cicsfw/events?resourceVersion=8876": dial tcp: lookup capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1111 19:43:21.291127   24174 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-cicsfw/events?resourceVersion=8876": dial tcp: lookup capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1111 19:44:07.200602   24174 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-cicsfw/events?resourceVersion=8876": dial tcp: lookup capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1111 19:44:47.476147   24174 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-cicsfw/events?resourceVersion=8876": dial tcp: lookup capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1111 19:45:45.801347   24174 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-cicsfw/events?resourceVersion=8876": dial tcp: lookup capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1111 19:46:29.723258   24174 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-cicsfw/events?resourceVersion=8876": dial tcp: lookup capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1111 19:47:04.122071   24174 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-cicsfw/events?resourceVersion=8876": dial tcp: lookup capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1111 19:47:41.422699   24174 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-cicsfw/events?resourceVersion=8876": dial tcp: lookup capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1111 19:48:34.221893   24174 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-cicsfw/events?resourceVersion=8876": dial tcp: lookup capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-o4yttu
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1111 19:49:20.708201   24174 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-cicsfw/events?resourceVersion=8876": dial tcp: lookup capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1111 19:49:53.721218   24174 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-cicsfw/events?resourceVersion=8876": dial tcp: lookup capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "with a single control plane node and 1 node" ran for 18m3s on Ginkgo node 1 of 3


• [SLOW TEST:1083.374 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 8 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Thu, 11 Nov 2021 19:35:44 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-v882wb" for hosting the cluster
Nov 11 19:35:44.837: INFO: starting to create namespace for hosting the "capz-e2e-v882wb" test spec
2021/11/11 19:35:44 failed trying to get namespace (capz-e2e-v882wb):namespaces "capz-e2e-v882wb" not found
INFO: Creating namespace capz-e2e-v882wb
INFO: Creating event watcher for namespace "capz-e2e-v882wb"
Nov 11 19:35:44.869: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-v882wb-win-ha
INFO: Creating the workload cluster with name "capz-e2e-v882wb-win-ha" using the "windows" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 55 lines ...
STEP: waiting for job default/curl-to-elb-jobk4rvol6mt73 to be complete
Nov 11 19:45:06.965: INFO: waiting for job default/curl-to-elb-jobk4rvol6mt73 to be complete
Nov 11 19:45:16.999: INFO: job default/curl-to-elb-jobk4rvol6mt73 is complete, took 10.034199107s
STEP: connecting directly to the external LB service
Nov 11 19:45:16.999: INFO: starting attempts to connect directly to the external LB service
2021/11/11 19:45:16 [DEBUG] GET http://52.159.104.28
2021/11/11 19:45:47 [ERR] GET http://52.159.104.28 request failed: Get "http://52.159.104.28": dial tcp 52.159.104.28:80: i/o timeout
2021/11/11 19:45:47 [DEBUG] GET http://52.159.104.28: retrying in 1s (4 left)
Nov 11 19:45:51.068: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 11 19:45:51.068: INFO: starting to delete external LB service webvrfhfm-elb
Nov 11 19:45:51.136: INFO: starting to delete deployment webvrfhfm
Nov 11 19:45:51.157: INFO: starting to delete job curl-to-elb-jobk4rvol6mt73
... skipping 79 lines ...
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-v882wb-win-ha-control-plane-jj696, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-v882wb-win-ha-control-plane-t4j2z, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-v882wb-win-ha-control-plane-v7cfh, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-v882wb-win-ha-control-plane-jj696, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-v882wb-win-ha-control-plane-jj696, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-v882wb-win-ha-control-plane-v7cfh, container kube-apiserver
STEP: Got error while iterating over activity logs for resource group capz-e2e-v882wb-win-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000920776s
STEP: Dumping all the Cluster API resources in the "capz-e2e-v882wb" namespace
STEP: Deleting all clusters in the capz-e2e-v882wb namespace
STEP: Deleting cluster capz-e2e-v882wb-win-ha
INFO: Waiting for the Cluster capz-e2e-v882wb/capz-e2e-v882wb-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-v882wb-win-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-v882wb-win-ha-control-plane-t4j2z, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-dwf4r, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-v882wb-win-ha-control-plane-t4j2z, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-fkskq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-v882wb-win-ha-control-plane-jj696, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-v882wb-win-ha-control-plane-t4j2z, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-9spjx, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-d6tvb, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-v882wb-win-ha-control-plane-jj696, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-v882wb-win-ha-control-plane-jj696, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-v882wb-win-ha-control-plane-t4j2z, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-5qdg5, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-q8rjc, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-v882wb-win-ha-control-plane-jj696, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-wxqdw, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-r9xzr, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-j2trr, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-7p2v7, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-v882wb
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 33m2s on Ginkgo node 2 of 3

... skipping 3 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating a Windows Enabled cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:494
    With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496
------------------------------
{"component":"entrypoint","file":"prow/entrypoint/run.go:165","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","severity":"error","time":"2021-11-11T20:31:39Z"}
++ early_exit_handler
++ '[' -n 166 ']'
++ kill -TERM 166
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 19 lines ...
  with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543

INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Thu, 11 Nov 2021 19:42:51 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-w71k0o" for hosting the cluster
Nov 11 19:42:51.972: INFO: starting to create namespace for hosting the "capz-e2e-w71k0o" test spec
2021/11/11 19:42:51 failed trying to get namespace (capz-e2e-w71k0o):namespaces "capz-e2e-w71k0o" not found
INFO: Creating namespace capz-e2e-w71k0o
INFO: Creating event watcher for namespace "capz-e2e-w71k0o"
Nov 11 19:42:52.047: INFO: Creating cluster identity secret "cluster-identity-secret"
INFO: Cluster name is capz-e2e-w71k0o-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-w71k0o-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 89 lines ...
STEP: waiting for job default/curl-to-elb-jobtgs2k2vp66m to be complete
Nov 11 20:09:33.071: INFO: waiting for job default/curl-to-elb-jobtgs2k2vp66m to be complete
Nov 11 20:09:43.104: INFO: job default/curl-to-elb-jobtgs2k2vp66m is complete, took 10.032775358s
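(The job-completion wait above can be expressed as a Gomega Eventually poll on Job.Status.Succeeded. A sketch assuming a kubernetes.Interface client and a Ginkgo/Gomega test context, with illustrative 5m/10s timeouts:)

// Sketch: wait for a Job such as default/curl-to-elb-job... to complete,
// mirroring the "waiting for job ... to be complete" lines above.
// Intended to run inside a Ginkgo/Gomega test; timeouts are illustrative.
package sketch

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func waitForJobComplete(ctx context.Context, cs kubernetes.Interface, namespace, name string) {
	Eventually(func() bool {
		job, err := cs.BatchV1().Jobs(namespace).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false
		}
		return job.Status.Succeeded > 0 // at least one pod finished successfully
	}, 5*time.Minute, 10*time.Second).Should(BeTrue())
}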
STEP: connecting directly to the external LB service
Nov 11 20:09:43.104: INFO: starting attempts to connect directly to the external LB service
2021/11/11 20:09:43 [DEBUG] GET http://52.159.104.163
2021/11/11 20:10:13 [ERR] GET http://52.159.104.163 request failed: Get "http://52.159.104.163": dial tcp 52.159.104.163:80: i/o timeout
2021/11/11 20:10:13 [DEBUG] GET http://52.159.104.163: retrying in 1s (4 left)
Nov 11 20:10:14.133: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 11 20:10:14.133: INFO: starting to delete external LB service web-windows0tam1d-elb
Nov 11 20:10:14.177: INFO: starting to delete deployment web-windows0tam1d
Nov 11 20:10:14.192: INFO: starting to delete job curl-to-elb-jobtgs2k2vp66m
... skipping 23 lines ...
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-w71k0o-win-vmss-control-plane-h29h5, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-n4dlv, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-xzd6p, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-hbmcs, container coredns
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-w71k0o-win-vmss-control-plane-h29h5, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-wkgzt, container kube-proxy
STEP: Got error while iterating over activity logs for resource group capz-e2e-w71k0o-win-vmss: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000671155s
STEP: Dumping all the Cluster API resources in the "capz-e2e-w71k0o" namespace
STEP: Deleting all clusters in the capz-e2e-w71k0o namespace
STEP: Deleting cluster capz-e2e-w71k0o-win-vmss
INFO: Waiting for the Cluster capz-e2e-w71k0o/capz-e2e-w71k0o-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-w71k0o-win-vmss to be deleted
W1111 20:31:39.548113   24171 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: watch of *v1.Event ended with: very short watch: pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Unexpected watch close - watch lasted less than a second and no items received
E1111 20:31:40.973615   24171 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://127.0.0.1:39207/api/v1/namespaces/capz-e2e-w71k0o/events?resourceVersion=65371": dial tcp 127.0.0.1:39207: connect: connection refused
E1111 20:31:43.413467   24171 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://127.0.0.1:39207/api/v1/namespaces/capz-e2e-w71k0o/events?resourceVersion=65371": dial tcp 127.0.0.1:39207: connect: connection refused
... skipping 15 lines ...
E1111 20:42:04.189740   24171 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://127.0.0.1:39207/api/v1/namespaces/capz-e2e-w71k0o/events?resourceVersion=65371": dial tcp 127.0.0.1:39207: connect: connection refused
STEP: Redacting sensitive information from logs


• Failure in Spec Teardown (AfterEach) [3650.387 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 46 lines ...
    	/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/e2e_suite_test.go:256 +0x1da
    testing.tRunner(0xc000487800, 0x23174f8)
    	/usr/local/go/src/testing/testing.go:1193 +0xef
    created by testing.(*T).Run
    	/usr/local/go/src/testing/testing.go:1238 +0x2b3
------------------------------
E1111 19:50:39.269484   24174 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-cicsfw/events?resourceVersion=8876": dial tcp: lookup capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1111 19:51:25.223935   24174 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-cicsfw/events?resourceVersion=8876": dial tcp: lookup capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 68 lines ...
E1111 20:43:31.864185   24174 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-cicsfw/events?resourceVersion=8876": dial tcp: lookup capz-e2e-cicsfw-public-custom-vnet-1ce628d5.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Tearing down the management cluster
INFO: Deleting the kind cluster "capz-e2e" failed. You may need to remove this by hand.



Summarizing 1 Failure:

[Fail] Workload cluster creation [AfterEach] Creating a Windows enabled VMSS cluster with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.3/framework/cluster_helpers.go:165
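(cluster_helpers.go:165 is an Eventually poll that only returns true once the Cluster object is gone, which is what timed out here: "Expected <bool>: false to be true" after 1800s. An approximate sketch of that wait, with assumed client wiring and timeouts rather than the framework's exact code:)

// Sketch of the kind of wait that timed out: poll until the Cluster resource
// is NotFound, failing when the deletion budget is exhausted. Intended to run
// inside a Ginkgo/Gomega test; the 30m/10s values are assumptions.
package sketch

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/types"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1alpha4"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func waitForClusterDeleted(ctx context.Context, c client.Client, namespace, name string) {
	Eventually(func() bool {
		cluster := &clusterv1.Cluster{}
		err := c.Get(ctx, types.NamespacedName{Namespace: namespace, Name: name}, cluster)
		return apierrors.IsNotFound(err) // true only once the Cluster has really gone away
	}, 30*time.Minute, 10*time.Second).Should(BeTrue())
}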

Ran 9 of 22 Specs in 7642.189 seconds
FAIL! -- 8 Passed | 1 Failed | 0 Pending | 13 Skipped


Ginkgo ran 1 suite in 2h8m43.783712103s
Test Suite Failed
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
{"component":"entrypoint","file":"prow/entrypoint/run.go:252","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process gracefully exited before 15m0s grace period","severity":"error","time":"2021-11-11T20:43:42Z"}