Result: FAILURE
Tests: 1 failed / 8 succeeded
Started: 2022-05-09 19:40
Elapsed: 1h50m
Revision: release-0.5

Test Failures


capz-e2e Workload cluster creation Creating a GPU-enabled cluster with a single control plane node and 1 node (33m39s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\sGPU\-enabled\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:377
Timed out after 1200.001s.
Expected
    <bool>: false
to be true
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:76
				
stdout/stderr from junit.e2e_suite.3.xml
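The failure output above is the standard Gomega timeout for an Eventually(...).Should(BeTrue()) assertion: a boolean condition polled at azure_gpu.go:76 never became true within the 1200s (20-minute) window. Below is a minimal sketch of that pattern, assuming a hypothetical gpuJobSucceeded helper and job name standing in for whatever condition the real GPU test polls; it is not the actual azure_gpu.go code.

package e2e

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// gpuJobSucceeded is a hypothetical helper: it reports whether the named Job
// in the workload cluster has reached the Complete condition. The real check
// may poll something else (e.g. a CUDA test workload or device-plugin readiness).
func gpuJobSucceeded(ctx context.Context, cs kubernetes.Interface, namespace, name string) bool {
	job, err := cs.BatchV1().Jobs(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false
	}
	for _, c := range job.Status.Conditions {
		if c.Type == batchv1.JobComplete && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

// waitForGPUWorkload polls the condition for up to 20 minutes (the 1200s seen
// in the failure). If the condition never holds, Gomega prints:
//   Timed out after 1200.001s.
//   Expected
//       <bool>: false
//   to be true
func waitForGPUWorkload(ctx context.Context, cs kubernetes.Interface) {
	Eventually(func() bool {
		return gpuJobSucceeded(ctx, cs, "default", "gpu-test-job")
	}, 20*time.Minute, 10*time.Second).Should(BeTrue())
}

The "<bool>: false" output only says the polled condition never held; the underlying cause is more likely visible in the dumped workload-cluster pod logs for the GPU-related components than in the assertion itself.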



8 Passed Tests

13 Skipped Tests

Error lines from build-log.txt

... skipping 430 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:269

INFO: "With ipv6 worker node" started at Mon, 09 May 2022 19:47:33 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-srs5np" for hosting the cluster
May  9 19:47:33.848: INFO: starting to create namespace for hosting the "capz-e2e-srs5np" test spec
2022/05/09 19:47:33 failed trying to get namespace (capz-e2e-srs5np):namespaces "capz-e2e-srs5np" not found
INFO: Creating namespace capz-e2e-srs5np
INFO: Creating event watcher for namespace "capz-e2e-srs5np"
May  9 19:47:33.935: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-srs5np-ipv6
INFO: Creating the workload cluster with name "capz-e2e-srs5np-ipv6" using the "ipv6" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
STEP: Fetching activity logs took 556.267943ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-srs5np" namespace
STEP: Deleting all clusters in the capz-e2e-srs5np namespace
STEP: Deleting cluster capz-e2e-srs5np-ipv6
INFO: Waiting for the Cluster capz-e2e-srs5np/capz-e2e-srs5np-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-srs5np-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-srs5np-ipv6-control-plane-xvssq, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-2f9cf, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-t2mhn, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-srs5np-ipv6-control-plane-zxsqn, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-srs5np-ipv6-control-plane-zxsqn, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-7vp9m, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-srs5np-ipv6-control-plane-zxsqn, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-lp8j8, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-4d82f, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-srs5np-ipv6-control-plane-xvssq, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-2sv56, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-srs5np-ipv6-control-plane-xvssq, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-srs5np-ipv6-control-plane-zxsqn, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-srs5np-ipv6-control-plane-xvssq, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-6bzq4, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5pc5s, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-8nrlw, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-srs5np
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 15m58s on Ginkgo node 3 of 3

... skipping 10 lines ...
  with a single control plane node and an AzureMachinePool with 2 nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:315

INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" started at Mon, 09 May 2022 20:03:31 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-jorrez" for hosting the cluster
May  9 20:03:31.964: INFO: starting to create namespace for hosting the "capz-e2e-jorrez" test spec
2022/05/09 20:03:31 failed trying to get namespace (capz-e2e-jorrez):namespaces "capz-e2e-jorrez" not found
INFO: Creating namespace capz-e2e-jorrez
INFO: Creating event watcher for namespace "capz-e2e-jorrez"
May  9 20:03:32.003: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-jorrez-vmss
INFO: Creating the workload cluster with name "capz-e2e-jorrez-vmss" using the "machine-pool" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 106 lines ...
STEP: Fetching activity logs took 599.85433ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-jorrez" namespace
STEP: Deleting all clusters in the capz-e2e-jorrez namespace
STEP: Deleting cluster capz-e2e-jorrez-vmss
INFO: Waiting for the Cluster capz-e2e-jorrez/capz-e2e-jorrez-vmss to be deleted
STEP: Waiting for cluster capz-e2e-jorrez-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-br6hf, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-jorrez-vmss-control-plane-dplzb, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-x7zsn, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-8dz44, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-jorrez-vmss-control-plane-dplzb, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-t7jbz, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-26xbp, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-jorrez-vmss-control-plane-dplzb, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-jorrez-vmss-control-plane-dplzb, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-jorrez
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" ran for 17m52s on Ginkgo node 3 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:203

INFO: "With 3 control-plane nodes and 2 worker nodes" started at Mon, 09 May 2022 19:47:33 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-tm06rn" for hosting the cluster
May  9 19:47:33.843: INFO: starting to create namespace for hosting the "capz-e2e-tm06rn" test spec
2022/05/09 19:47:33 failed trying to get namespace (capz-e2e-tm06rn):namespaces "capz-e2e-tm06rn" not found
INFO: Creating namespace capz-e2e-tm06rn
INFO: Creating event watcher for namespace "capz-e2e-tm06rn"
May  9 19:47:33.920: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-tm06rn-ha
INFO: Creating the workload cluster with name "capz-e2e-tm06rn-ha" using the "(default)" template (Kubernetes v1.22.1, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 59 lines ...
STEP: waiting for job default/curl-to-elb-jobe6wzwbf7f6u to be complete
May  9 19:58:06.033: INFO: waiting for job default/curl-to-elb-jobe6wzwbf7f6u to be complete
May  9 19:58:16.119: INFO: job default/curl-to-elb-jobe6wzwbf7f6u is complete, took 10.086608s
STEP: connecting directly to the external LB service
May  9 19:58:16.119: INFO: starting attempts to connect directly to the external LB service
2022/05/09 19:58:16 [DEBUG] GET http://20.237.64.214
2022/05/09 19:58:46 [ERR] GET http://20.237.64.214 request failed: Get "http://20.237.64.214": dial tcp 20.237.64.214:80: i/o timeout
2022/05/09 19:58:46 [DEBUG] GET http://20.237.64.214: retrying in 1s (4 left)
May  9 19:58:47.179: INFO: successfully connected to the external LB service
STEP: deleting the test resources
May  9 19:58:47.179: INFO: starting to delete external LB service weba4kr7p-elb
May  9 19:58:47.284: INFO: starting to delete deployment weba4kr7p
May  9 19:58:47.319: INFO: starting to delete job curl-to-elb-jobe6wzwbf7f6u
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
May  9 19:58:47.421: INFO: starting to create dev deployment namespace
2022/05/09 19:58:47 failed trying to get namespace (development):namespaces "development" not found
2022/05/09 19:58:47 namespace development does not exist, creating...
STEP: Creating production namespace
May  9 19:58:47.493: INFO: starting to create prod deployment namespace
2022/05/09 19:58:47 failed trying to get namespace (production):namespaces "production" not found
2022/05/09 19:58:47 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
May  9 19:58:47.565: INFO: starting to create frontend-prod deployments
May  9 19:58:47.602: INFO: starting to create frontend-dev deployments
May  9 19:58:47.644: INFO: starting to create backend deployments
May  9 19:58:47.704: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
May  9 19:59:10.535: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.205.132 port 80: Connection timed out

STEP: Cleaning up after ourselves
May  9 20:01:21.328: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
STEP: Applying a network policy to deny egress access in development namespace
May  9 20:01:21.518: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.205.132 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.205.132 port 80: Connection timed out

STEP: Cleaning up after ourselves
May  9 20:05:43.471: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
May  9 20:05:43.639: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.205.133 port 80: Connection timed out

STEP: Cleaning up after ourselves
May  9 20:07:54.544: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
May  9 20:07:54.710: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.205.130 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.205.133 port 80: Connection timed out

STEP: Cleaning up after ourselves
May  9 20:12:16.689: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
May  9 20:12:16.844: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.205.132 port 80: Connection timed out

STEP: Cleaning up after ourselves
May  9 20:14:27.761: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
May  9 20:14:27.950: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.205.132 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: Dumping logs from the "capz-e2e-tm06rn-ha" workload cluster
STEP: Dumping workload cluster capz-e2e-tm06rn/capz-e2e-tm06rn-ha logs
May  9 20:16:39.241: INFO: INFO: Collecting logs for node capz-e2e-tm06rn-ha-control-plane-5krh7 in cluster capz-e2e-tm06rn-ha in namespace capz-e2e-tm06rn

May  9 20:16:50.697: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-tm06rn-ha-control-plane-5krh7
... skipping 39 lines ...
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-tm06rn-ha-control-plane-sd5dh, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-ldsfv, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-jw85x, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-q6xrh, container calico-node
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-tm06rn-ha-control-plane-sd5dh, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-tm06rn-ha-control-plane-5krh7, container kube-controller-manager
STEP: Got error while iterating over activity logs for resource group capz-e2e-tm06rn-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000471871s
STEP: Dumping all the Cluster API resources in the "capz-e2e-tm06rn" namespace
STEP: Deleting all clusters in the capz-e2e-tm06rn namespace
STEP: Deleting cluster capz-e2e-tm06rn-ha
INFO: Waiting for the Cluster capz-e2e-tm06rn/capz-e2e-tm06rn-ha to be deleted
STEP: Waiting for cluster capz-e2e-tm06rn-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-tm06rn-ha-control-plane-sd5dh, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-tm06rn-ha-control-plane-5krh7, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-tm06rn-ha-control-plane-sd5dh, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-vcc22, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-mtwsw, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-9gj56, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-tm06rn-ha-control-plane-5krh7, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-ldsfv, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-tm06rn-ha-control-plane-5krh7, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-n6tqm, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-tm06rn-ha-control-plane-sd5dh, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-v68wb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-q6xrh, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-jw85x, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-tm06rn-ha-control-plane-5krh7, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-n4tz4, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5mtvg, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-tm06rn-ha-control-plane-sd5dh, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-jz74b, container calico-kube-controllers: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-tm06rn
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 worker nodes" ran for 42m41s on Ginkgo node 2 of 3

... skipping 8 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:141

INFO: "Creates a public management cluster in the same vnet" started at Mon, 09 May 2022 19:47:33 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-5ns8he" for hosting the cluster
May  9 19:47:33.829: INFO: starting to create namespace for hosting the "capz-e2e-5ns8he" test spec
2022/05/09 19:47:33 failed trying to get namespace (capz-e2e-5ns8he):namespaces "capz-e2e-5ns8he" not found
INFO: Creating namespace capz-e2e-5ns8he
INFO: Creating event watcher for namespace "capz-e2e-5ns8he"
May  9 19:47:33.866: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-5ns8he-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-5ns8he-public-custom-vnet-control-plane-fq48q, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-7j7vg, container coredns
STEP: Dumping workload cluster capz-e2e-5ns8he/capz-e2e-5ns8he-public-custom-vnet Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-q5rhs, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-prxvv, container coredns
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-5ns8he-public-custom-vnet-control-plane-fq48q, container etcd
STEP: Got error while iterating over activity logs for resource group capz-e2e-5ns8he-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000822919s
STEP: Dumping all the Cluster API resources in the "capz-e2e-5ns8he" namespace
STEP: Deleting all clusters in the capz-e2e-5ns8he namespace
STEP: Deleting cluster capz-e2e-5ns8he-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-5ns8he/capz-e2e-5ns8he-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-5ns8he-public-custom-vnet to be deleted
W0509 20:44:02.896261   24186 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I0509 20:44:33.828235   24186 trace.go:205] Trace[1136027710]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (09-May-2022 20:44:03.827) (total time: 30000ms):
Trace[1136027710]: [30.000879616s] [30.000879616s] END
E0509 20:44:33.828324   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp 20.121.175.230:6443: i/o timeout
I0509 20:45:07.016286   24186 trace.go:205] Trace[295462828]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (09-May-2022 20:44:37.015) (total time: 30001ms):
Trace[295462828]: [30.001229712s] [30.001229712s] END
E0509 20:45:07.016352   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp 20.121.175.230:6443: i/o timeout
I0509 20:45:42.984508   24186 trace.go:205] Trace[161420519]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (09-May-2022 20:45:12.983) (total time: 30000ms):
Trace[161420519]: [30.000944198s] [30.000944198s] END
E0509 20:45:42.984574   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp 20.121.175.230:6443: i/o timeout
I0509 20:46:21.796841   24186 trace.go:205] Trace[1901722852]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (09-May-2022 20:45:51.795) (total time: 30001ms):
Trace[1901722852]: [30.001083559s] [30.001083559s] END
E0509 20:46:21.796893   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp 20.121.175.230:6443: i/o timeout
I0509 20:47:14.504052   24186 trace.go:205] Trace[756485230]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (09-May-2022 20:46:44.503) (total time: 30000ms):
Trace[756485230]: [30.000815164s] [30.000815164s] END
E0509 20:47:14.504117   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp 20.121.175.230:6443: i/o timeout
I0509 20:48:31.268648   24186 trace.go:205] Trace[921160481]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (09-May-2022 20:48:01.268) (total time: 30000ms):
Trace[921160481]: [30.000558422s] [30.000558422s] END
E0509 20:48:31.268714   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp 20.121.175.230:6443: i/o timeout
E0509 20:49:20.355011   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-5ns8he
STEP: Running additional cleanup for the "create-workload-cluster" test spec
May  9 20:49:24.417: INFO: deleting an existing virtual network "custom-vnet"
May  9 20:49:35.123: INFO: deleting an existing route table "node-routetable"
May  9 20:49:37.568: INFO: deleting an existing network security group "node-nsg"
May  9 20:49:47.870: INFO: deleting an existing network security group "control-plane-nsg"
E0509 20:49:50.808530   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
May  9 20:49:58.160: INFO: verifying the existing resource group "capz-e2e-5ns8he-public-custom-vnet" is empty
May  9 20:49:58.226: INFO: deleting the existing resource group "capz-e2e-5ns8he-public-custom-vnet"
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E0509 20:50:29.605652   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 20:51:07.193337   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 1h3m50s on Ginkgo node 1 of 3


• [SLOW TEST:3829.985 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a 1 control plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:419

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Mon, 09 May 2022 20:30:14 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-jfplx0" for hosting the cluster
May  9 20:30:14.549: INFO: starting to create namespace for hosting the "capz-e2e-jfplx0" test spec
2022/05/09 20:30:14 failed trying to get namespace (capz-e2e-jfplx0):namespaces "capz-e2e-jfplx0" not found
INFO: Creating namespace capz-e2e-jfplx0
INFO: Creating event watcher for namespace "capz-e2e-jfplx0"
May  9 20:30:14.587: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-jfplx0-oot
INFO: Creating the workload cluster with name "capz-e2e-jfplx0-oot" using the "external-cloud-provider" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 53 lines ...
STEP: waiting for job default/curl-to-elb-jobqf52yk32bwy to be complete
May  9 20:39:18.810: INFO: waiting for job default/curl-to-elb-jobqf52yk32bwy to be complete
May  9 20:39:28.878: INFO: job default/curl-to-elb-jobqf52yk32bwy is complete, took 10.068408398s
STEP: connecting directly to the external LB service
May  9 20:39:28.878: INFO: starting attempts to connect directly to the external LB service
2022/05/09 20:39:28 [DEBUG] GET http://20.81.101.174
2022/05/09 20:39:58 [ERR] GET http://20.81.101.174 request failed: Get "http://20.81.101.174": dial tcp 20.81.101.174:80: i/o timeout
2022/05/09 20:39:58 [DEBUG] GET http://20.81.101.174: retrying in 1s (4 left)
May  9 20:39:59.939: INFO: successfully connected to the external LB service
STEP: deleting the test resources
May  9 20:39:59.939: INFO: starting to delete external LB service webhz2w0h-elb
May  9 20:39:59.989: INFO: starting to delete deployment webhz2w0h
May  9 20:40:00.023: INFO: starting to delete job curl-to-elb-jobqf52yk32bwy
... skipping 34 lines ...
STEP: Fetching activity logs took 614.229764ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-jfplx0" namespace
STEP: Deleting all clusters in the capz-e2e-jfplx0 namespace
STEP: Deleting cluster capz-e2e-jfplx0-oot
INFO: Waiting for the Cluster capz-e2e-jfplx0/capz-e2e-jfplx0-oot to be deleted
STEP: Waiting for cluster capz-e2e-jfplx0-oot to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-jfplx0-oot-control-plane-hlcvk, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-xd6ws, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-5ksjc, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-fwhhr, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-7qx4t, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-4rbk7, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-r7b5s, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-8df7p, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-jfplx0-oot-control-plane-hlcvk, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-jfplx0-oot-control-plane-hlcvk, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-sfvtg, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-controller-manager, container cloud-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-jfplx0-oot-control-plane-hlcvk, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-5k9rf, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-jfplx0
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 22m41s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:377

INFO: "with a single control plane node and 1 node" started at Mon, 09 May 2022 20:21:23 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-yb7xjy" for hosting the cluster
May  9 20:21:23.572: INFO: starting to create namespace for hosting the "capz-e2e-yb7xjy" test spec
2022/05/09 20:21:23 failed trying to get namespace (capz-e2e-yb7xjy):namespaces "capz-e2e-yb7xjy" not found
INFO: Creating namespace capz-e2e-yb7xjy
INFO: Creating event watcher for namespace "capz-e2e-yb7xjy"
May  9 20:21:23.626: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-yb7xjy-gpu
INFO: Creating the workload cluster with name "capz-e2e-yb7xjy-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 53 lines ...
STEP: Fetching activity logs took 977.617617ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-yb7xjy" namespace
STEP: Deleting all clusters in the capz-e2e-yb7xjy namespace
STEP: Deleting cluster capz-e2e-yb7xjy-gpu
INFO: Waiting for the Cluster capz-e2e-yb7xjy/capz-e2e-yb7xjy-gpu to be deleted
STEP: Waiting for cluster capz-e2e-yb7xjy-gpu to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-d8xn7, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-25hpt, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-yb7xjy-gpu-control-plane-62wxf, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-yb7xjy-gpu-control-plane-62wxf, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-yb7xjy-gpu-control-plane-62wxf, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-yb7xjy-gpu-control-plane-62wxf, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-kpxrz, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-ln28x, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-srlh7, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-yb7xjy
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and 1 node" ran for 33m39s on Ginkgo node 3 of 3

... skipping 57 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454

INFO: "with a single control plane node and 1 node" started at Mon, 09 May 2022 20:51:23 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-1217y1" for hosting the cluster
May  9 20:51:23.818: INFO: starting to create namespace for hosting the "capz-e2e-1217y1" test spec
2022/05/09 20:51:23 failed trying to get namespace (capz-e2e-1217y1):namespaces "capz-e2e-1217y1" not found
INFO: Creating namespace capz-e2e-1217y1
INFO: Creating event watcher for namespace "capz-e2e-1217y1"
May  9 20:51:23.850: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-1217y1-aks
INFO: Creating the workload cluster with name "capz-e2e-1217y1-aks" using the "aks-multi-tenancy" template (Kubernetes v1.22.6, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 7 lines ...
machinepool.cluster.x-k8s.io/agentpool1 created
azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E0509 20:51:51.359849   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 20:52:46.322293   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 20:53:22.829233   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 20:53:55.716839   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 20:54:51.708812   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 20:55:32.322074   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 20:56:04.723146   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 20:56:45.086486   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 20:57:20.078026   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 20:58:15.042008   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 20:58:47.374543   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
May  9 20:58:55.795: INFO: Waiting for the first control plane machine managed by capz-e2e-1217y1/capz-e2e-1217y1-aks to be provisioned
STEP: Waiting for atleast one control plane node to exist
E0509 20:59:44.376678   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 21:00:15.441863   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 21:00:59.808775   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be ready
May  9 21:01:26.653: INFO: Waiting for the first control plane machine managed by capz-e2e-1217y1/capz-e2e-1217y1-aks to be provisioned
STEP: Waiting for all control plane nodes to exist
INFO: Waiting for the machine deployments to be provisioned
INFO: Waiting for the machine pools to be provisioned
STEP: Waiting for the machine pool workload nodes to exist
... skipping 10 lines ...
STEP: time sync OK for host aks-agentpool1-89163024-vmss000000
STEP: time sync OK for host aks-agentpool1-89163024-vmss000000
STEP: Dumping logs from the "capz-e2e-1217y1-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-1217y1/capz-e2e-1217y1-aks logs
May  9 21:01:33.169: INFO: INFO: Collecting logs for node aks-agentpool1-89163024-vmss000000 in cluster capz-e2e-1217y1-aks in namespace capz-e2e-1217y1

E0509 21:01:56.118341   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 21:02:34.508259   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 21:03:05.086801   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
May  9 21:03:44.012: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool0, cluster capz-e2e-1217y1/capz-e2e-1217y1-aks: [dialing public load balancer at capz-e2e-1217y1-aks-eaddc9ce.hcp.eastus.azmk8s.io: dial tcp 52.224.150.23:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
May  9 21:03:44.499: INFO: INFO: Collecting logs for node aks-agentpool1-89163024-vmss000000 in cluster capz-e2e-1217y1-aks in namespace capz-e2e-1217y1

E0509 21:03:55.957387   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 21:04:40.938799   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 21:05:14.382972   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 21:05:51.714814   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
May  9 21:05:55.084: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool1, cluster capz-e2e-1217y1/capz-e2e-1217y1-aks: [dialing public load balancer at capz-e2e-1217y1-aks-eaddc9ce.hcp.eastus.azmk8s.io: dial tcp 52.224.150.23:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
STEP: Dumping workload cluster capz-e2e-1217y1/capz-e2e-1217y1-aks kube-system pod logs
STEP: Fetching kube-system pod logs took 446.360519ms
STEP: Creating log watcher for controller kube-system/cloud-node-manager-4xbmp, container cloud-node-manager
STEP: Creating log watcher for controller kube-system/csi-azurefile-node-wpcr6, container node-driver-registrar
STEP: Creating log watcher for controller kube-system/csi-azurefile-node-wpcr6, container liveness-probe
STEP: Creating log watcher for controller kube-system/csi-azurefile-node-wpcr6, container azurefile
... skipping 20 lines ...
STEP: Fetching activity logs took 513.864323ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-1217y1" namespace
STEP: Deleting all clusters in the capz-e2e-1217y1 namespace
STEP: Deleting cluster capz-e2e-1217y1-aks
INFO: Waiting for the Cluster capz-e2e-1217y1/capz-e2e-1217y1-aks to be deleted
STEP: Waiting for cluster capz-e2e-1217y1-aks to be deleted
E0509 21:06:28.369785   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 21:07:02.170887   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 21:07:58.132707   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 21:08:37.337317   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 21:09:11.435657   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 21:09:57.041420   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 21:10:56.545402   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 21:11:36.922723   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 21:12:07.342642   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 21:12:42.104068   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 21:13:17.299495   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-1217y1
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E0509 21:14:06.535494   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 21:14:56.509658   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "with a single control plane node and 1 node" ran for 23m42s on Ginkgo node 1 of 3


• [SLOW TEST:1421.879 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 8 lines ...
  with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543

INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Mon, 09 May 2022 20:55:03 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-hu4zj5" for hosting the cluster
May  9 20:55:03.072: INFO: starting to create namespace for hosting the "capz-e2e-hu4zj5" test spec
2022/05/09 20:55:03 failed trying to get namespace (capz-e2e-hu4zj5):namespaces "capz-e2e-hu4zj5" not found
INFO: Creating namespace capz-e2e-hu4zj5
INFO: Creating event watcher for namespace "capz-e2e-hu4zj5"
May  9 20:55:03.118: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-hu4zj5-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-hu4zj5-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 129 lines ...
STEP: Fetching activity logs took 1.02869745s
STEP: Dumping all the Cluster API resources in the "capz-e2e-hu4zj5" namespace
STEP: Deleting all clusters in the capz-e2e-hu4zj5 namespace
STEP: Deleting cluster capz-e2e-hu4zj5-win-vmss
INFO: Waiting for the Cluster capz-e2e-hu4zj5/capz-e2e-hu4zj5-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-hu4zj5-win-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-m5px7, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-vw25g, container kube-flannel: http2: client connection lost
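The "http2: client connection lost" messages are the expected end of the per-pod log watchers created earlier: the suite follows container logs over a long-lived connection to the workload cluster, and that connection drops when the cluster's VMs are deleted underneath it. A minimal sketch of a follow-style log stream with client-go, using a pod name taken from the log but otherwise illustrative:

// Minimal sketch of the kind of log streaming behind the "Creating log
// watcher" / "Got error while streaming logs" lines; the kubeconfig path
// is illustrative, the pod and container names are taken from the log.
package main

import (
	"context"
	"fmt"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func streamPodLogs(ctx context.Context, cs kubernetes.Interface, ns, pod, container string) error {
	req := cs.CoreV1().Pods(ns).GetLogs(pod, &corev1.PodLogOptions{
		Container: container,
		Follow:    true, // keep the connection open until the stream breaks
	})
	rc, err := req.Stream(ctx)
	if err != nil {
		return err
	}
	defer rc.Close()
	// io.Copy returns an error such as "http2: client connection lost"
	// once the workload cluster's node is torn down mid-stream.
	_, err = io.Copy(os.Stdout, rc)
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/workload.kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := streamPodLogs(context.Background(), cs,
		"kube-system", "kube-proxy-windows-m5px7", "kube-proxy"); err != nil {
		fmt.Fprintln(os.Stderr, "log stream ended:", err)
	}
}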
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-hu4zj5
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 29m44s on Ginkgo node 3 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Mon, 09 May 2022 20:52:55 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-r4by8u" for hosting the cluster
May  9 20:52:55.711: INFO: starting to create namespace for hosting the "capz-e2e-r4by8u" test spec
2022/05/09 20:52:55 failed trying to get namespace (capz-e2e-r4by8u):namespaces "capz-e2e-r4by8u" not found
INFO: Creating namespace capz-e2e-r4by8u
INFO: Creating event watcher for namespace "capz-e2e-r4by8u"
May  9 20:52:55.748: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-r4by8u-win-ha
INFO: Creating the workload cluster with name "capz-e2e-r4by8u-win-ha" using the "windows" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 145 lines ...
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-windows-amd64-nwpgm, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-7g8b8, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-proxy-22b4w, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-5p266, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-vh2vh, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-r4by8u-win-ha-control-plane-62fqc, container kube-scheduler
STEP: Got error while iterating over activity logs for resource group capz-e2e-r4by8u-win-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001109368s
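The activity-log fetch above stops right at the 30-second mark because the iteration runs under a context deadline: when the Azure Monitor API is slow to page results, the next-page request fails with "context deadline exceeded" and the step reports a duration just over 30s. A hedged sketch of that pattern, with a hypothetical fetch helper standing in for the insights paging calls:

// Hedged sketch of bounding the activity-log fetch with a 30s deadline,
// which would explain both the "context deadline exceeded" error and the
// "Fetching activity logs took 30.00...s" timing. fetchActivityLogs is a
// hypothetical placeholder for the Azure Monitor paging calls.
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// fetchActivityLogs stands in for iterating activity-log pages for a
// resource group; it is assumed to honor ctx cancellation.
func fetchActivityLogs(ctx context.Context, resourceGroup string) error {
	select {
	case <-time.After(45 * time.Second): // simulate a slow Azure API
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	start := time.Now()
	err := fetchActivityLogs(ctx, "capz-e2e-r4by8u-win-ha")
	if errors.Is(err, context.DeadlineExceeded) {
		// Mirrors "Got error while iterating over activity logs ...
		// context deadline exceeded" followed by a ~30s duration.
		fmt.Printf("activity log fetch aborted after %s: %v\n", time.Since(start), err)
	}
}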
STEP: Dumping all the Cluster API resources in the "capz-e2e-r4by8u" namespace
STEP: Deleting all clusters in the capz-e2e-r4by8u namespace
STEP: Deleting cluster capz-e2e-r4by8u-win-ha
INFO: Waiting for the Cluster capz-e2e-r4by8u/capz-e2e-r4by8u-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-r4by8u-win-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-nwpgm, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-kq9cc, container kube-proxy: http2: client connection lost
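The "Waiting for the Cluster ... to be deleted" steps above amount to polling the management cluster until the Cluster object returns NotFound. A minimal helper sketch, assuming a controller-runtime client and Cluster API's v1alpha4 types; the interval and timeout values are illustrative rather than the framework's actual settings:

// Minimal sketch, assuming a controller-runtime client, of waiting for a
// Cluster API Cluster object to disappear, as in the "Waiting for cluster
// ... to be deleted" steps above. Interval/timeout values are illustrative.
package e2e

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/util/wait"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1alpha4"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func waitForClusterDeleted(ctx context.Context, c client.Client, namespace, name string) error {
	return wait.PollImmediate(10*time.Second, 30*time.Minute, func() (bool, error) {
		cluster := &clusterv1.Cluster{}
		err := c.Get(ctx, client.ObjectKey{Namespace: namespace, Name: name}, cluster)
		if apierrors.IsNotFound(err) {
			return true, nil // the Cluster object is gone; deletion finished
		}
		if err != nil {
			return false, err
		}
		return false, nil // still being deleted, keep polling
	})
}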
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-r4by8u
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 36m57s on Ginkgo node 2 of 3

... skipping 3 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating a Windows Enabled cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:494
    With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496
------------------------------
E0509 21:15:44.321781   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 21:16:40.425215   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 21:17:17.784045   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 21:17:47.949728   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 21:18:35.399848   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 21:19:23.527222   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 21:20:02.786612   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 21:21:02.193603   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 21:21:54.888762   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 21:22:26.673807   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 21:23:04.207368   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 21:23:45.512500   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 21:24:40.104861   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 21:25:16.413499   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 21:25:59.257253   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 21:26:48.933735   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 21:27:40.700523   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 21:28:35.817638   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 21:29:08.417316   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0509 21:29:44.190573   24186 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5ns8he/events?resourceVersion=10914": dial tcp: lookup capz-e2e-5ns8he-public-custom-vnet-52cc21c1.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Workload cluster creation Creating a GPU-enabled cluster [It] with a single control plane node and 1 node 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:76
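The one failure is an assertion at azure_gpu.go:76 that never became true within its polling window. The actual check is not shown in this log; the sketch below is only an illustration of the kind of Gomega Eventually assertion that fails this way, with a hypothetical jobCompleted helper waiting on a GPU workload:

// Illustrative sketch (not the actual code at azure_gpu.go:76) of a Gomega
// polling assertion that fails when a condition never becomes true within
// its window. jobCompleted, the job name, and the timings are hypothetical.
package e2e

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// jobCompleted reports whether the named Job has at least one succeeded pod.
func jobCompleted(ctx context.Context, cs kubernetes.Interface, ns, name string) bool {
	job, err := cs.BatchV1().Jobs(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false
	}
	return job.Status.Succeeded > 0
}

func waitForGPUJob(ctx context.Context, cs kubernetes.Interface) {
	// If the polling function never returns true before the timeout, Gomega
	// reports a timed-out expectation on a boolean, which is the failure
	// shape recorded for this spec.
	Eventually(func() bool {
		return jobCompleted(ctx, cs, "default", "cuda-vector-add")
	}, 20*time.Minute, 30*time.Second).Should(BeTrue(),
		"GPU test job never completed")
}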

Ran 9 of 22 Specs in 6257.893 seconds
FAIL! -- 8 Passed | 1 Failed | 0 Pending | 13 Skipped


Ginkgo ran 1 suite in 1h45m46.2065172s
Test Suite Failed
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...