Result: FAILURE
Tests: 1 failed / 8 succeeded
Started: 2021-11-19 06:35
Elapsed: 1h46m
Revision: main

Test Failures


capz-e2e Workload cluster creation Creating a GPU-enabled cluster with a single control plane node and 1 node 34m18s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\sGPU\-enabled\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:413
Timed out after 1200.001s.
Expected
    <bool>: false
to be true
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:76



Passed tests: 8 (output not shown)

Skipped tests: 15 (output not shown)

Error lines from build-log.txt

... skipping 431 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:288

INFO: "With ipv6 worker node" started at Fri, 19 Nov 2021 06:42:51 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-6o69hf" for hosting the cluster
Nov 19 06:42:51.816: INFO: starting to create namespace for hosting the "capz-e2e-6o69hf" test spec
2021/11/19 06:42:51 failed trying to get namespace (capz-e2e-6o69hf):namespaces "capz-e2e-6o69hf" not found
INFO: Creating namespace capz-e2e-6o69hf
INFO: Creating event watcher for namespace "capz-e2e-6o69hf"
Nov 19 06:42:51.902: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-6o69hf-ipv6
INFO: Creating the workload cluster with name "capz-e2e-6o69hf-ipv6" using the "ipv6" template (Kubernetes v1.22.4, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
STEP: Fetching activity logs took 611.048423ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-6o69hf" namespace
STEP: Deleting all clusters in the capz-e2e-6o69hf namespace
STEP: Deleting cluster capz-e2e-6o69hf-ipv6
INFO: Waiting for the Cluster capz-e2e-6o69hf/capz-e2e-6o69hf-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-6o69hf-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-6o69hf-ipv6-control-plane-4l28l, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-6o69hf-ipv6-control-plane-j9rrq, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-6o69hf-ipv6-control-plane-9vqq4, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-6o69hf-ipv6-control-plane-9vqq4, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-5nt6l, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-4js7v, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-77qxt, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-9nssr, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-t47vv, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-6o69hf-ipv6-control-plane-4l28l, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-qcfqs, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-6o69hf-ipv6-control-plane-4l28l, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-lph7k, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-vnv5j, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-6o69hf-ipv6-control-plane-j9rrq, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-6o69hf-ipv6-control-plane-9vqq4, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-rlv26, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-6o69hf-ipv6-control-plane-9vqq4, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-6o69hf-ipv6-control-plane-4l28l, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-5q9xg, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-6o69hf-ipv6-control-plane-j9rrq, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-6o69hf-ipv6-control-plane-j9rrq, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-xbdcm, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-6o69hf
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 17m19s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:334

INFO: "with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes" started at Fri, 19 Nov 2021 07:00:10 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-6neog1" for hosting the cluster
Nov 19 07:00:10.507: INFO: starting to create namespace for hosting the "capz-e2e-6neog1" test spec
2021/11/19 07:00:10 failed trying to get namespace (capz-e2e-6neog1):namespaces "capz-e2e-6neog1" not found
INFO: Creating namespace capz-e2e-6neog1
INFO: Creating event watcher for namespace "capz-e2e-6neog1"
Nov 19 07:00:10.545: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-6neog1-vmss
INFO: Creating the workload cluster with name "capz-e2e-6neog1-vmss" using the "machine-pool" template (Kubernetes v1.22.4, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 148 lines ...
Nov 19 07:18:00.000: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set win-p-win

Nov 19 07:18:00.254: INFO: INFO: Collecting logs for node win-p-win000001 in cluster capz-e2e-6neog1-vmss in namespace capz-e2e-6neog1

Nov 19 07:18:26.557: INFO: INFO: Collecting boot logs for VMSS instance 1 of scale set win-p-win

Failed to get logs for machine pool capz-e2e-6neog1-vmss-mp-win, cluster capz-e2e-6neog1/capz-e2e-6neog1-vmss: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster capz-e2e-6neog1/capz-e2e-6neog1-vmss kube-system pod logs
STEP: Fetching kube-system pod logs took 260.352117ms
STEP: Dumping workload cluster capz-e2e-6neog1/capz-e2e-6neog1-vmss Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-c8rsk, container coredns
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-mmt9z, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-hvn2m, container kube-proxy
... skipping 10 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-qvwrp, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-br8ls, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-windows-kg6fj, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-node-windows-kg6fj, container calico-node-felix
STEP: Creating log watcher for controller kube-system/calico-node-windows-f9dcc, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-node-windows-f9dcc, container calico-node-felix
STEP: Error starting logs stream for pod kube-system/kube-proxy-windows-fnqlt, container kube-proxy: container "kube-proxy" in pod "kube-proxy-windows-fnqlt" is waiting to start: trying and failing to pull image
STEP: Error starting logs stream for pod kube-system/kube-proxy-windows-5lmcj, container kube-proxy: container "kube-proxy" in pod "kube-proxy-windows-5lmcj" is waiting to start: trying and failing to pull image
STEP: Got error while iterating over activity logs for resource group capz-e2e-6neog1-vmss: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001027773s
STEP: Dumping all the Cluster API resources in the "capz-e2e-6neog1" namespace
STEP: Deleting all clusters in the capz-e2e-6neog1 namespace
STEP: Deleting cluster capz-e2e-6neog1-vmss
INFO: Waiting for the Cluster capz-e2e-6neog1/capz-e2e-6neog1-vmss to be deleted
STEP: Waiting for cluster capz-e2e-6neog1-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-9vl9v, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-kg6fj, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-mmt9z, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-s6ztv, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-6neog1-vmss-control-plane-5wfdq, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-f9dcc, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-6neog1-vmss-control-plane-5wfdq, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-6neog1-vmss-control-plane-5wfdq, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-br8ls, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-kg6fj, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-6neog1-vmss-control-plane-5wfdq, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-f9dcc, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-c8rsk, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-6neog1
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes" ran for 26m5s on Ginkgo node 2 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:205

INFO: "With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes" started at Fri, 19 Nov 2021 06:42:50 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-5epd8b" for hosting the cluster
Nov 19 06:42:50.454: INFO: starting to create namespace for hosting the "capz-e2e-5epd8b" test spec
2021/11/19 06:42:50 failed trying to get namespace (capz-e2e-5epd8b):namespaces "capz-e2e-5epd8b" not found
INFO: Creating namespace capz-e2e-5epd8b
INFO: Creating event watcher for namespace "capz-e2e-5epd8b"
Nov 19 06:42:50.502: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-5epd8b-ha
INFO: Creating the workload cluster with name "capz-e2e-5epd8b-ha" using the "(default)" template (Kubernetes v1.22.4, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 65 lines ...
STEP: waiting for job default/curl-to-elb-jobvtx82wrd30e to be complete
Nov 19 06:50:42.868: INFO: waiting for job default/curl-to-elb-jobvtx82wrd30e to be complete
Nov 19 06:50:52.900: INFO: job default/curl-to-elb-jobvtx82wrd30e is complete, took 10.03188858s
STEP: connecting directly to the external LB service
Nov 19 06:50:52.900: INFO: starting attempts to connect directly to the external LB service
2021/11/19 06:50:52 [DEBUG] GET http://52.159.90.224
2021/11/19 06:51:22 [ERR] GET http://52.159.90.224 request failed: Get "http://52.159.90.224": dial tcp 52.159.90.224:80: i/o timeout
2021/11/19 06:51:22 [DEBUG] GET http://52.159.90.224: retrying in 1s (4 left)
Nov 19 06:51:23.926: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 19 06:51:23.926: INFO: starting to delete external LB service web1ov1b3-elb
Nov 19 06:51:23.998: INFO: starting to delete deployment web1ov1b3
Nov 19 06:51:24.017: INFO: starting to delete job curl-to-elb-jobvtx82wrd30e
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Nov 19 06:51:24.077: INFO: starting to create dev deployment namespace
2021/11/19 06:51:24 failed trying to get namespace (development):namespaces "development" not found
2021/11/19 06:51:24 namespace development does not exist, creating...
STEP: Creating production namespace
Nov 19 06:51:24.152: INFO: starting to create prod deployment namespace
2021/11/19 06:51:24 failed trying to get namespace (production):namespaces "production" not found
2021/11/19 06:51:24 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
Nov 19 06:51:24.242: INFO: starting to create frontend-prod deployments
Nov 19 06:51:24.263: INFO: starting to create frontend-dev deployments
Nov 19 06:51:24.293: INFO: starting to create backend deployments
Nov 19 06:51:24.321: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Nov 19 06:51:46.243: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.146.132 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 19 06:53:57.689: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
STEP: Applying a network policy to deny egress access in development namespace
Nov 19 06:53:57.797: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.146.132 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.146.132 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 19 06:58:19.833: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Nov 19 06:58:19.942: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.21.131 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 19 07:00:31.667: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
Nov 19 07:00:31.778: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.146.131 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.21.131 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 19 07:04:53.810: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Nov 19 07:04:53.925: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.146.132 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 19 07:07:04.122: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
Nov 19 07:07:04.236: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.146.132 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web-windowsw28ucr to be available
Nov 19 07:09:14.339: INFO: starting to wait for deployment to become available
Nov 19 07:10:04.441: INFO: Deployment default/web-windowsw28ucr is now available, took 50.101740682s
... skipping 51 lines ...
Nov 19 07:14:00.697: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-5epd8b-ha-md-0-fptvn

Nov 19 07:14:00.973: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster capz-e2e-5epd8b-ha in namespace capz-e2e-5epd8b

Nov 19 07:14:26.663: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-5epd8b-ha-md-win-nwt6n

Failed to get logs for machine capz-e2e-5epd8b-ha-md-win-6c8896cccc-6ztnd, cluster capz-e2e-5epd8b/capz-e2e-5epd8b-ha: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov 19 07:14:26.882: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster capz-e2e-5epd8b-ha in namespace capz-e2e-5epd8b

Nov 19 07:14:54.605: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-5epd8b-ha-md-win-2b4qn

Failed to get logs for machine capz-e2e-5epd8b-ha-md-win-6c8896cccc-db8v5, cluster capz-e2e-5epd8b/capz-e2e-5epd8b-ha: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster capz-e2e-5epd8b/capz-e2e-5epd8b-ha kube-system pod logs
STEP: Creating log watcher for controller kube-system/calico-node-bhkpl, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-klt9g, container calico-node
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-5epd8b-ha-control-plane-7nfgd, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-node-windows-6jzkl, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-5epd8b-ha-control-plane-6l6hh, container kube-scheduler
... skipping 22 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-rzlkb, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-m5gtk, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-windows-vmr4q, container calico-node-felix
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-85bb5, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-sv6vb, container coredns
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-5epd8b-ha-control-plane-6hgdr, container kube-scheduler
STEP: Error starting logs stream for pod kube-system/kube-proxy-windows-jf4ht, container kube-proxy: container "kube-proxy" in pod "kube-proxy-windows-jf4ht" is waiting to start: trying and failing to pull image
STEP: Error starting logs stream for pod kube-system/kube-proxy-windows-85bb5, container kube-proxy: container "kube-proxy" in pod "kube-proxy-windows-85bb5" is waiting to start: trying and failing to pull image
STEP: Got error while iterating over activity logs for resource group capz-e2e-5epd8b-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.00070518s
STEP: Dumping all the Cluster API resources in the "capz-e2e-5epd8b" namespace
STEP: Deleting all clusters in the capz-e2e-5epd8b namespace
STEP: Deleting cluster capz-e2e-5epd8b-ha
INFO: Waiting for the Cluster capz-e2e-5epd8b/capz-e2e-5epd8b-ha to be deleted
STEP: Waiting for cluster capz-e2e-5epd8b-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-6jzkl, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-5epd8b-ha-control-plane-6hgdr, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-5epd8b-ha-control-plane-6hgdr, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-5epd8b-ha-control-plane-7nfgd, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-5epd8b-ha-control-plane-6l6hh, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-rzlkb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-bhkpl, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-5epd8b-ha-control-plane-6l6hh, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-m5gtk, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-5epd8b-ha-control-plane-6l6hh, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-96lv8, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-5epd8b-ha-control-plane-7nfgd, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-skmvb, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-5epd8b-ha-control-plane-6l6hh, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-wtcb4, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-sv6vb, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-bzvr6, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-klt9g, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-5epd8b-ha-control-plane-6hgdr, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-5epd8b-ha-control-plane-7nfgd, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-mr5zk, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-6jzkl, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-vmr4q, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-vmr4q, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-5epd8b-ha-control-plane-7nfgd, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-ch4tg, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-5epd8b-ha-control-plane-6hgdr, container etcd: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-5epd8b
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes" ran for 44m8s on Ginkgo node 3 of 3

... skipping 8 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:144

INFO: "Creates a public management cluster in the same vnet" started at Fri, 19 Nov 2021 06:42:37 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-3vndff" for hosting the cluster
Nov 19 06:42:37.454: INFO: starting to create namespace for hosting the "capz-e2e-3vndff" test spec
2021/11/19 06:42:37 failed trying to get namespace (capz-e2e-3vndff):namespaces "capz-e2e-3vndff" not found
INFO: Creating namespace capz-e2e-3vndff
INFO: Creating event watcher for namespace "capz-e2e-3vndff"
Nov 19 06:42:37.491: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-3vndff-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-dlj99, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-w6gb8, container calico-node
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-3vndff-public-custom-vnet-control-plane-68vvc, container kube-scheduler
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-8fw9x, container coredns
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-ch5rw, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-3vndff-public-custom-vnet-control-plane-68vvc, container kube-controller-manager
STEP: Got error while iterating over activity logs for resource group capz-e2e-3vndff-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000360585s
STEP: Dumping all the Cluster API resources in the "capz-e2e-3vndff" namespace
STEP: Deleting all clusters in the capz-e2e-3vndff namespace
STEP: Deleting cluster capz-e2e-3vndff-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-3vndff/capz-e2e-3vndff-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-3vndff-public-custom-vnet to be deleted
W1119 07:30:44.228691   24497 reflector.go:441] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I1119 07:31:15.789585   24497 trace.go:205] Trace[177895684]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (19-Nov-2021 07:30:45.788) (total time: 30000ms):
Trace[177895684]: [30.0008755s] [30.0008755s] END
E1119 07:31:15.789682   24497 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-3vndff/events?resourceVersion=11033": dial tcp 52.159.92.206:6443: i/o timeout
I1119 07:31:47.524279   24497 trace.go:205] Trace[1827448722]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (19-Nov-2021 07:31:17.523) (total time: 30000ms):
Trace[1827448722]: [30.00099553s] [30.00099553s] END
E1119 07:31:47.524353   24497 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-3vndff/events?resourceVersion=11033": dial tcp 52.159.92.206:6443: i/o timeout
I1119 07:32:22.578238   24497 trace.go:205] Trace[2129063761]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (19-Nov-2021 07:31:52.577) (total time: 30000ms):
Trace[2129063761]: [30.00091486s] [30.00091486s] END
E1119 07:32:22.578304   24497 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-3vndff/events?resourceVersion=11033": dial tcp 52.159.92.206:6443: i/o timeout
I1119 07:33:05.148010   24497 trace.go:205] Trace[56725840]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (19-Nov-2021 07:32:35.147) (total time: 30000ms):
Trace[56725840]: [30.00067415s] [30.00067415s] END
E1119 07:33:05.148074   24497 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-3vndff/events?resourceVersion=11033": dial tcp 52.159.92.206:6443: i/o timeout
I1119 07:33:49.406456   24497 trace.go:205] Trace[1895792365]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (19-Nov-2021 07:33:19.405) (total time: 30000ms):
Trace[1895792365]: [30.000576481s] [30.000576481s] END
E1119 07:33:49.406521   24497 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-3vndff/events?resourceVersion=11033": dial tcp 52.159.92.206:6443: i/o timeout
E1119 07:34:22.513816   24497 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-3vndff/events?resourceVersion=11033": dial tcp: lookup capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-3vndff
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Nov 19 07:34:38.165: INFO: deleting an existing virtual network "custom-vnet"
Nov 19 07:34:48.873: INFO: deleting an existing route table "node-routetable"
Nov 19 07:34:59.313: INFO: deleting an existing network security group "node-nsg"
E1119 07:35:05.555173   24497 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-3vndff/events?resourceVersion=11033": dial tcp: lookup capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 19 07:35:09.774: INFO: deleting an existing network security group "control-plane-nsg"
Nov 19 07:35:20.220: INFO: verifying the existing resource group "capz-e2e-3vndff-public-custom-vnet" is empty
Nov 19 07:35:21.809: INFO: deleting the existing resource group "capz-e2e-3vndff-public-custom-vnet"
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1119 07:35:43.522269   24497 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-3vndff/events?resourceVersion=11033": dial tcp: lookup capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1119 07:36:17.096205   24497 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-3vndff/events?resourceVersion=11033": dial tcp: lookup capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 53m50s on Ginkgo node 1 of 3


• [SLOW TEST:3230.231 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a 1 control plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:455

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Fri, 19 Nov 2021 07:26:58 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-xtp293" for hosting the cluster
Nov 19 07:26:58.319: INFO: starting to create namespace for hosting the "capz-e2e-xtp293" test spec
2021/11/19 07:26:58 failed trying to get namespace (capz-e2e-xtp293):namespaces "capz-e2e-xtp293" not found
INFO: Creating namespace capz-e2e-xtp293
INFO: Creating event watcher for namespace "capz-e2e-xtp293"
Nov 19 07:26:58.353: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-xtp293-oot
INFO: Creating the workload cluster with name "capz-e2e-xtp293-oot" using the "external-cloud-provider" template (Kubernetes v1.22.4, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 98 lines ...
STEP: Fetching activity logs took 583.944653ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-xtp293" namespace
STEP: Deleting all clusters in the capz-e2e-xtp293 namespace
STEP: Deleting cluster capz-e2e-xtp293-oot
INFO: Waiting for the Cluster capz-e2e-xtp293/capz-e2e-xtp293-oot to be deleted
STEP: Waiting for cluster capz-e2e-xtp293-oot to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-rqc4w, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-l7rn2, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-49r26, container cloud-node-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-xtp293
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 16m12s on Ginkgo node 3 of 3

... skipping 10 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:490

INFO: "with a single control plane node and 1 node" started at Fri, 19 Nov 2021 07:36:27 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-hmopmu" for hosting the cluster
Nov 19 07:36:27.688: INFO: starting to create namespace for hosting the "capz-e2e-hmopmu" test spec
2021/11/19 07:36:27 failed trying to get namespace (capz-e2e-hmopmu):namespaces "capz-e2e-hmopmu" not found
INFO: Creating namespace capz-e2e-hmopmu
INFO: Creating event watcher for namespace "capz-e2e-hmopmu"
Nov 19 07:36:27.719: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-hmopmu-aks
INFO: Creating the workload cluster with name "capz-e2e-hmopmu-aks" using the "aks-multi-tenancy" template (Kubernetes v1.19.13, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 7 lines ...
machinepool.cluster.x-k8s.io/agentpool1 created
azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1119 07:36:57.952793   24497 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-3vndff/events?resourceVersion=11033": dial tcp: lookup capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1119 07:37:32.334334   24497 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-3vndff/events?resourceVersion=11033": dial tcp: lookup capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1119 07:38:11.273125   24497 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-3vndff/events?resourceVersion=11033": dial tcp: lookup capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1119 07:39:04.422725   24497 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-3vndff/events?resourceVersion=11033": dial tcp: lookup capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1119 07:40:04.319076   24497 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-3vndff/events?resourceVersion=11033": dial tcp: lookup capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
Nov 19 07:40:40.404: INFO: Waiting for the first control plane machine managed by capz-e2e-hmopmu/capz-e2e-hmopmu-aks to be provisioned
STEP: Waiting for at least one control plane node to exist
INFO: Waiting for control plane to be ready
Nov 19 07:40:50.459: INFO: Waiting for the first control plane machine managed by capz-e2e-hmopmu/capz-e2e-hmopmu-aks to be provisioned
STEP: Waiting for all control plane nodes to exist
INFO: Waiting for the machine deployments to be provisioned
INFO: Waiting for the machine pools to be provisioned
STEP: Waiting for the machine pool workload nodes to exist
STEP: Waiting for the machine pool workload nodes to exist
Nov 19 07:40:50.932: INFO: want 2 instances, found 0 ready and 0 available. generation: 1, observedGeneration: 0
E1119 07:40:54.602444   24497 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-3vndff/events?resourceVersion=11033": dial tcp: lookup capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 19 07:40:55.950: INFO: want 2 instances, found 2 ready and 2 available. generation: 1, observedGeneration: 1
Nov 19 07:40:55.970: INFO: mapping nsenter pods to hostnames for host-by-host execution
Nov 19 07:40:55.971: INFO: found host aks-agentpool0-12381085-vmss000000 with pod nsenter-6dfpq
Nov 19 07:40:55.971: INFO: found host aks-agentpool1-12381085-vmss000000 with pod nsenter-brv7b
STEP: checking that time synchronization is healthy on aks-agentpool1-12381085-vmss000000
STEP: checking that time synchronization is healthy on aks-agentpool1-12381085-vmss000000
... skipping 2 lines ...
STEP: time sync OK for host aks-agentpool1-12381085-vmss000000
STEP: time sync OK for host aks-agentpool1-12381085-vmss000000
STEP: Dumping logs from the "capz-e2e-hmopmu-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-hmopmu/capz-e2e-hmopmu-aks logs
Nov 19 07:40:56.642: INFO: INFO: Collecting logs for node aks-agentpool1-12381085-vmss000000 in cluster capz-e2e-hmopmu-aks in namespace capz-e2e-hmopmu

E1119 07:41:41.517745   24497 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-3vndff/events?resourceVersion=11033": dial tcp: lookup capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1119 07:42:12.062493   24497 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-3vndff/events?resourceVersion=11033": dial tcp: lookup capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1119 07:43:06.265596   24497 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-3vndff/events?resourceVersion=11033": dial tcp: lookup capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 19 07:43:06.804: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool0, cluster capz-e2e-hmopmu/capz-e2e-hmopmu-aks: [dialing public load balancer at capz-e2e-hmopmu-aks-cad9ab70.hcp.northcentralus.azmk8s.io: dial tcp 52.162.98.102:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
Nov 19 07:43:07.340: INFO: INFO: Collecting logs for node aks-agentpool1-12381085-vmss000000 in cluster capz-e2e-hmopmu-aks in namespace capz-e2e-hmopmu

E1119 07:43:52.318428   24497 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-3vndff/events?resourceVersion=11033": dial tcp: lookup capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1119 07:44:26.417996   24497 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-3vndff/events?resourceVersion=11033": dial tcp: lookup capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1119 07:45:12.814933   24497 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-3vndff/events?resourceVersion=11033": dial tcp: lookup capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 19 07:45:17.876: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool1, cluster capz-e2e-hmopmu/capz-e2e-hmopmu-aks: [dialing public load balancer at capz-e2e-hmopmu-aks-cad9ab70.hcp.northcentralus.azmk8s.io: dial tcp 52.162.98.102:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
STEP: Dumping workload cluster capz-e2e-hmopmu/capz-e2e-hmopmu-aks kube-system pod logs
STEP: Fetching kube-system pod logs took 420.878562ms
STEP: Dumping workload cluster capz-e2e-hmopmu/capz-e2e-hmopmu-aks Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-llplz, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-autoscaler-54d55c8b75-mp6gf, container autoscaler
STEP: Creating log watcher for controller kube-system/kube-proxy-x7znl, container kube-proxy
... skipping 8 lines ...
STEP: Fetching activity logs took 539.086213ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-hmopmu" namespace
STEP: Deleting all clusters in the capz-e2e-hmopmu namespace
STEP: Deleting cluster capz-e2e-hmopmu-aks
INFO: Waiting for the Cluster capz-e2e-hmopmu/capz-e2e-hmopmu-aks to be deleted
STEP: Waiting for cluster capz-e2e-hmopmu-aks to be deleted
E1119 07:46:05.646158   24497 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-3vndff/events?resourceVersion=11033": dial tcp: lookup capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1119 07:47:03.713926   24497 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-3vndff/events?resourceVersion=11033": dial tcp: lookup capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1119 07:47:37.760772   24497 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-3vndff/events?resourceVersion=11033": dial tcp: lookup capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1119 07:48:27.370345   24497 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-3vndff/events?resourceVersion=11033": dial tcp: lookup capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-hmopmu
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1119 07:49:05.282276   24497 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-3vndff/events?resourceVersion=11033": dial tcp: lookup capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "with a single control plane node and 1 node" ran for 13m13s on Ginkgo node 1 of 3


• [SLOW TEST:792.951 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:413

INFO: "with a single control plane node and 1 node" started at Fri, 19 Nov 2021 07:26:15 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-z3e749" for hosting the cluster
Nov 19 07:26:15.097: INFO: starting to create namespace for hosting the "capz-e2e-z3e749" test spec
2021/11/19 07:26:15 failed trying to get namespace (capz-e2e-z3e749):namespaces "capz-e2e-z3e749" not found
INFO: Creating namespace capz-e2e-z3e749
INFO: Creating event watcher for namespace "capz-e2e-z3e749"
Nov 19 07:26:15.141: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-z3e749-gpu
INFO: Creating the workload cluster with name "capz-e2e-z3e749-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.4, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 124 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:532

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Fri, 19 Nov 2021 07:43:10 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-b8n0um" for hosting the cluster
Nov 19 07:43:10.730: INFO: starting to create namespace for hosting the "capz-e2e-b8n0um" test spec
2021/11/19 07:43:10 failed trying to get namespace (capz-e2e-b8n0um):namespaces "capz-e2e-b8n0um" not found
INFO: Creating namespace capz-e2e-b8n0um
INFO: Creating event watcher for namespace "capz-e2e-b8n0um"
Nov 19 07:43:10.758: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-b8n0um-win-ha
INFO: Creating the workload cluster with name "capz-e2e-b8n0um-win-ha" using the "windows" template (Kubernetes v1.22.4, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 91 lines ...
STEP: waiting for job default/curl-to-elb-jobx1nks38kn8t to be complete
Nov 19 07:55:15.557: INFO: waiting for job default/curl-to-elb-jobx1nks38kn8t to be complete
Nov 19 07:55:25.594: INFO: job default/curl-to-elb-jobx1nks38kn8t is complete, took 10.037021159s
STEP: connecting directly to the external LB service
Nov 19 07:55:25.594: INFO: starting attempts to connect directly to the external LB service
2021/11/19 07:55:25 [DEBUG] GET http://23.96.207.143
2021/11/19 07:55:55 [ERR] GET http://23.96.207.143 request failed: Get "http://23.96.207.143": dial tcp 23.96.207.143:80: i/o timeout
2021/11/19 07:55:55 [DEBUG] GET http://23.96.207.143: retrying in 1s (4 left)
Nov 19 07:56:03.791: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 19 07:56:03.791: INFO: starting to delete external LB service web-windowsaib6mm-elb
Nov 19 07:56:03.894: INFO: starting to delete deployment web-windowsaib6mm
Nov 19 07:56:03.915: INFO: starting to delete job curl-to-elb-jobx1nks38kn8t
... skipping 49 lines ...
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-ls8qf, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-b8n0um-win-ha-control-plane-rmm2d, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-zqct4, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-77gj4, container kube-flannel
STEP: Dumping workload cluster capz-e2e-b8n0um/capz-e2e-b8n0um-win-ha Azure activity log
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-b8n0um-win-ha-control-plane-4mrp8, container kube-apiserver
STEP: Got error while iterating over activity logs for resource group capz-e2e-b8n0um-win-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000793308s
STEP: Dumping all the Cluster API resources in the "capz-e2e-b8n0um" namespace
STEP: Deleting all clusters in the capz-e2e-b8n0um namespace
STEP: Deleting cluster capz-e2e-b8n0um-win-ha
INFO: Waiting for the Cluster capz-e2e-b8n0um/capz-e2e-b8n0um-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-b8n0um-win-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-b8n0um-win-ha-control-plane-rmm2d, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-h2629, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-b8n0um-win-ha-control-plane-nn54z, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-ls8qf, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-qsfmb, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-b8n0um-win-ha-control-plane-rmm2d, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-hqsf9, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-b8n0um-win-ha-control-plane-nn54z, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-zqct4, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-9z5cp, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-b8n0um-win-ha-control-plane-nn54z, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-b8n0um-win-ha-control-plane-nn54z, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-b8n0um-win-ha-control-plane-rmm2d, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-b8n0um-win-ha-control-plane-rmm2d, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-b8n0um
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 31m54s on Ginkgo node 3 of 3

... skipping 10 lines ...
  with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:579

INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Fri, 19 Nov 2021 07:49:40 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-3j390b" for hosting the cluster
Nov 19 07:49:40.641: INFO: starting to create namespace for hosting the "capz-e2e-3j390b" test spec
2021/11/19 07:49:40 failed trying to get namespace (capz-e2e-3j390b):namespaces "capz-e2e-3j390b" not found
INFO: Creating namespace capz-e2e-3j390b
INFO: Creating event watcher for namespace "capz-e2e-3j390b"
Nov 19 07:49:40.678: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-3j390b-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-3j390b-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.4, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 12 lines ...
kubeadmconfig.bootstrap.cluster.x-k8s.io/capz-e2e-3j390b-win-vmss-mp-win created
clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-3j390b-win-vmss-flannel created
configmap/cni-capz-e2e-3j390b-win-vmss-flannel created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1119 07:49:57.377654   24497 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-3vndff/events?resourceVersion=11033": dial tcp: lookup capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1119 07:50:29.992275   24497 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-3vndff/events?resourceVersion=11033": dial tcp: lookup capz-e2e-3vndff-public-custom-vnet-61b75325.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by capz-e2e-3j390b/capz-e2e-3j390b-win-vmss-control-plane to be provisioned
STEP: Waiting for one control plane node to exist
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane capz-e2e-3j390b/capz-e2e-3j390b-win-vmss-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
INFO: Waiting for the machine pools to be provisioned
STEP: Waiting for the machine pool workload nodes to exist
STEP: Waiting for the machine pool workload nodes to exist
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/webltfnzw to be available
Nov 19 07:58:22.499: INFO: starting to wait for deployment to become available
Nov 19 07:58:42.561: INFO: Deployment default/webltfnzw is now available, took 20.061617513s
STEP: creating an internal Load Balancer service
Nov 19 07:58:42.561: INFO: starting to create an internal Load Balancer service
STEP: waiting for service default/webltfnzw-ilb to be available
Nov 19 07:58:42.593: INFO: waiting for service default/webltfnzw-ilb to be available
Nov 19 07:59:32.688: INFO: service default/webltfnzw-ilb is available, took 50.09520136s
STEP: connecting to the internal LB service from a curl pod
Nov 19 07:59:32.702: INFO: starting to create a curl to ilb job
STEP: waiting for job default/curl-to-ilb-jobxsmpn to be complete
Nov 19 07:59:32.729: INFO: waiting for job default/curl-to-ilb-jobxsmpn to be complete
Nov 19 07:59:42.768: INFO: job default/curl-to-ilb-jobxsmpn is complete, took 10.039317025s
STEP: deleting the ilb test resources
Nov 19 07:59:42.768: INFO: deleting the ilb service: webltfnzw-ilb
Nov 19 07:59:42.802: INFO: deleting the ilb job: curl-to-ilb-jobxsmpn
STEP: creating an external Load Balancer service
Nov 19 07:59:42.817: INFO: starting to create an external Load Balancer service
STEP: waiting for service default/webltfnzw-elb to be available
Nov 19 07:59:42.866: INFO: waiting for service default/webltfnzw-elb to be available
Nov 19 08:01:03.003: INFO: service default/webltfnzw-elb is available, took 1m20.137369521s
STEP: connecting to the external LB service from a curl pod
Nov 19 08:01:03.017: INFO: starting to create curl-to-elb job
STEP: waiting for job default/curl-to-elb-jobqlyqm5y5ew6 to be complete
Nov 19 08:01:03.037: INFO: waiting for job default/curl-to-elb-jobqlyqm5y5ew6 to be complete
Nov 19 08:01:13.077: INFO: job default/curl-to-elb-jobqlyqm5y5ew6 is complete, took 10.040083085s
... skipping 6 lines ...
Nov 19 08:01:20.253: INFO: starting to delete deployment webltfnzw
Nov 19 08:01:20.267: INFO: starting to delete job curl-to-elb-jobqlyqm5y5ew6
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web-windows7192tg to be available
Nov 19 08:01:20.360: INFO: starting to wait for deployment to become available
Nov 19 08:02:20.478: INFO: Deployment default/web-windows7192tg is now available, took 1m0.118181397s
STEP: creating an internal Load Balancer service
Nov 19 08:02:20.478: INFO: starting to create an internal Load Balancer service
STEP: waiting for service default/web-windows7192tg-ilb to be available
Nov 19 08:02:20.510: INFO: waiting for service default/web-windows7192tg-ilb to be available
Nov 19 08:03:10.596: INFO: service default/web-windows7192tg-ilb is available, took 50.086010406s
STEP: connecting to the internal LB service from a curl pod
Nov 19 08:03:10.609: INFO: starting to create a curl to ilb job
STEP: waiting for job default/curl-to-ilb-jobzu45m to be complete
Nov 19 08:03:10.626: INFO: waiting for job default/curl-to-ilb-jobzu45m to be complete
Nov 19 08:03:20.655: INFO: job default/curl-to-ilb-jobzu45m is complete, took 10.029192138s
STEP: deleting the ilb test resources
Nov 19 08:03:20.655: INFO: deleting the ilb service: web-windows7192tg-ilb
Nov 19 08:03:20.695: INFO: deleting the ilb job: curl-to-ilb-jobzu45m
STEP: creating an external Load Balancer service
Nov 19 08:03:20.715: INFO: starting to create an external Load Balancer service
STEP: waiting for service default/web-windows7192tg-elb to be available
Nov 19 08:03:20.766: INFO: waiting for service default/web-windows7192tg-elb to be available
Nov 19 08:04:30.887: INFO: service default/web-windows7192tg-elb is available, took 1m10.120791867s
STEP: connecting to the external LB service from a curl pod
Nov 19 08:04:30.900: INFO: starting to create curl-to-elb job
STEP: waiting for job default/curl-to-elb-job7dv9qk4x58g to be complete
Nov 19 08:04:30.919: INFO: waiting for job default/curl-to-elb-job7dv9qk4x58g to be complete
Nov 19 08:04:40.948: INFO: job default/curl-to-elb-job7dv9qk4x58g is complete, took 10.02839187s
... skipping 12 lines ...
Nov 19 08:04:51.223: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-3j390b-win-vmss-control-plane-429k8

Nov 19 08:04:51.873: INFO: INFO: Collecting logs for node win-p-win000000 in cluster capz-e2e-3j390b-win-vmss in namespace capz-e2e-3j390b

Nov 19 08:05:08.304: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-e2e-3j390b-win-vmss-mp-0

Failed to get logs for machine pool capz-e2e-3j390b-win-vmss-mp-0, cluster capz-e2e-3j390b/capz-e2e-3j390b-win-vmss: [running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1]
Nov 19 08:05:08.546: INFO: INFO: Collecting logs for node win-p-win000000 in cluster capz-e2e-3j390b-win-vmss in namespace capz-e2e-3j390b

Nov 19 08:05:39.336: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set win-p-win

STEP: Dumping workload cluster capz-e2e-3j390b/capz-e2e-3j390b-win-vmss kube-system pod logs
STEP: Fetching kube-system pod logs took 253.51788ms
STEP: Dumping workload cluster capz-e2e-3j390b/capz-e2e-3j390b-win-vmss Azure activity log
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-5sbsm, container kube-flannel
... skipping 11 lines ...
STEP: Fetching activity logs took 1.157724709s
STEP: Dumping all the Cluster API resources in the "capz-e2e-3j390b" namespace
STEP: Deleting all clusters in the capz-e2e-3j390b namespace
STEP: Deleting cluster capz-e2e-3j390b-win-vmss
INFO: Waiting for the Cluster capz-e2e-3j390b/capz-e2e-3j390b-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-3j390b-win-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-5sbsm, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-m2ctl, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-3j390b
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 29m56s on Ginkgo node 1 of 3


• [SLOW TEST:1795.503 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 5 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Workload cluster creation Creating a GPU-enabled cluster [It] with a single control plane node and 1 node 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:76

Ran 9 of 24 Specs in 5960.867 seconds
FAIL! -- 8 Passed | 1 Failed | 0 Pending | 15 Skipped


Ginkgo ran 1 suite in 1h40m48.760050238s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc
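The two silencing options printed by the notice above can be scripted directly; a minimal shell sketch, using exactly the environment variable and marker-file path the notice names:

```shell
# Option 1: set the environment variable Ginkgo checks before printing the notice.
export ACK_GINKGO_RC=true

# Option 2: create the marker file in the home directory.
touch "$HOME/.ack-ginkgo-rc"
```

Either one alone is sufficient; CI jobs typically prefer the environment variable since it leaves no state on the runner.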
make[1]: *** [Makefile:176: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:184: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...