Result: FAILURE
Tests: 1 failed / 8 succeeded
Started: 2021-11-20 06:35
Elapsed: 2h2m
Revision: main

Test Failures


capz-e2e Workload cluster creation Creating a GPU-enabled cluster with a single control plane node and 1 node (34m50s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\sGPU\-enabled\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:413
Timed out after 1200.000s.
Expected
    <bool>: false
to be true
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:76
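This failure shape ("Timed out after 1200.000s. / Expected <bool>: false / to be true") is what Gomega's Eventually prints when a polled boolean condition never turns true within its timeout. A minimal sketch of the pattern, assuming the check at azure_gpu.go:76 polls a condition roughly like this (jobSucceeded is a hypothetical stand-in for the real GPU check):

package e2e

import (
    "context"
    "time"

    . "github.com/onsi/gomega"
)

// waitForGPUJob polls jobSucceeded every 10s for up to 20 minutes (1200s).
// If the condition never returns true, Gomega fails with exactly:
// "Timed out after 1200.000s. Expected <bool>: false to be true".
func waitForGPUJob(ctx context.Context, jobSucceeded func(context.Context) bool) {
    Eventually(func() bool {
        return jobSucceeded(ctx)
    }, 20*time.Minute, 10*time.Second).Should(BeTrue())
}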
				



8 Passed Tests

15 Skipped Tests

Error lines from build-log.txt

... skipping 434 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:288

INFO: "With ipv6 worker node" started at Sat, 20 Nov 2021 06:43:50 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-ss0mj2" for hosting the cluster
Nov 20 06:43:50.176: INFO: starting to create namespace for hosting the "capz-e2e-ss0mj2" test spec
2021/11/20 06:43:50 failed trying to get namespace (capz-e2e-ss0mj2): namespaces "capz-e2e-ss0mj2" not found
INFO: Creating namespace capz-e2e-ss0mj2
INFO: Creating event watcher for namespace "capz-e2e-ss0mj2"
Nov 20 06:43:50.208: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-ss0mj2-ipv6
INFO: Creating the workload cluster with name "capz-e2e-ss0mj2-ipv6" using the "ipv6" template (Kubernetes v1.22.4, 3 control-plane machines, 1 worker machine)
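The %!(EXTRA string=cluster-identity-secret) fragment above (which recurs before each "Cluster name is ..." line) is Go's fmt package reporting a Printf-style call that received more arguments than its format string has verbs; the unconsumed argument is echoed into the output, and the logger's newline handling fuses it onto the next INFO line. A minimal reproduction:

package main

import "fmt"

func main() {
    // A format string with no verbs plus one extra argument: fmt appends
    // the leftover value as %!(EXTRA string=...).
    fmt.Printf("Creating cluster identity secret", "cluster-identity-secret")
    // Output: Creating cluster identity secret%!(EXTRA string=cluster-identity-secret)
}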
INFO: Getting the cluster template yaml
... skipping 93 lines ...
STEP: Fetching activity logs took 567.159098ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-ss0mj2" namespace
STEP: Deleting all clusters in the capz-e2e-ss0mj2 namespace
STEP: Deleting cluster capz-e2e-ss0mj2-ipv6
INFO: Waiting for the Cluster capz-e2e-ss0mj2/capz-e2e-ss0mj2-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-ss0mj2-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-ss0mj2-ipv6-control-plane-xcp6g, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-rv7c6, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-ss0mj2-ipv6-control-plane-29snl, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-76fcp, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-zbccp, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-fdzs8, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-sf6jj, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-f4qhb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-7tcss, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-ss0mj2-ipv6-control-plane-trdvh, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-ss0mj2-ipv6-control-plane-29snl, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-ss0mj2-ipv6-control-plane-xcp6g, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-ss0mj2-ipv6-control-plane-29snl, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-ss0mj2-ipv6-control-plane-29snl, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-ss0mj2-ipv6-control-plane-trdvh, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-zzv79, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-ss0mj2-ipv6-control-plane-trdvh, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-ss0mj2-ipv6-control-plane-xcp6g, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-ss0mj2-ipv6-control-plane-trdvh, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-ss0mj2-ipv6-control-plane-xcp6g, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-qcgqv, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-7xrbj, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-kbv5c, container calico-node: http2: client connection lost
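The burst of "http2: client connection lost" errors above is expected at this point: the log streamers follow kube-system containers while the cluster's VMs are being deleted underneath them, so the HTTP/2 connection to the workload API server drops mid-stream. A sketch of such a follower using client-go (function name hypothetical):

package watchers

import (
    "context"
    "fmt"
    "io"
    "os"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/client-go/kubernetes"
)

// streamPodLogs follows one container's logs until EOF or a transport
// error such as "http2: client connection lost" once the cluster dies.
func streamPodLogs(ctx context.Context, cs kubernetes.Interface, ns, pod, container string) {
    req := cs.CoreV1().Pods(ns).GetLogs(pod, &corev1.PodLogOptions{Container: container, Follow: true})
    rc, err := req.Stream(ctx)
    if err != nil {
        fmt.Printf("STEP: Got error while streaming logs for pod %s/%s, container %s: %v\n", ns, pod, container, err)
        return
    }
    defer rc.Close()
    if _, err := io.Copy(os.Stdout, rc); err != nil {
        fmt.Printf("STEP: Got error while streaming logs for pod %s/%s, container %s: %v\n", ns, pod, container, err)
    }
}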
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-ss0mj2
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 16m23s on Ginkgo node 1 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:205

INFO: "With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes" started at Sat, 20 Nov 2021 06:43:50 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-0ix2e1" for hosting the cluster
Nov 20 06:43:50.023: INFO: starting to create namespace for hosting the "capz-e2e-0ix2e1" test spec
2021/11/20 06:43:50 failed trying to get namespace (capz-e2e-0ix2e1): namespaces "capz-e2e-0ix2e1" not found
INFO: Creating namespace capz-e2e-0ix2e1
INFO: Creating event watcher for namespace "capz-e2e-0ix2e1"
Nov 20 06:43:50.067: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-0ix2e1-ha
INFO: Creating the workload cluster with name "capz-e2e-0ix2e1-ha" using the "(default)" template (Kubernetes v1.22.4, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 75 lines ...
Nov 20 06:54:29.112: INFO: starting to delete external LB service webw5gjjx-elb
Nov 20 06:54:29.232: INFO: starting to delete deployment webw5gjjx
Nov 20 06:54:29.296: INFO: starting to delete job curl-to-elb-jobn39hk07bjpx
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Nov 20 06:54:29.414: INFO: starting to create dev deployment namespace
2021/11/20 06:54:29 failed trying to get namespace (development): namespaces "development" not found
2021/11/20 06:54:29 namespace development does not exist, creating...
STEP: Creating production namespace
Nov 20 06:54:29.544: INFO: starting to create prod deployment namespace
2021/11/20 06:54:29 failed trying to get namespace (production): namespaces "production" not found
2021/11/20 06:54:29 namespace production does not exist, creating...
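The "failed trying to get namespace ... not found" lines followed by "does not exist, creating..." are the usual get-then-create idiom: a NotFound error from the initial Get is the signal to create the namespace. A client-go sketch of that helper (name hypothetical):

package framework

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    apierrors "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// ensureNamespace returns the namespace, creating it when the initial Get
// comes back NotFound (the "failed trying to get namespace" log above).
func ensureNamespace(ctx context.Context, cs kubernetes.Interface, name string) (*corev1.Namespace, error) {
    ns, err := cs.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
    if err == nil {
        return ns, nil
    }
    if !apierrors.IsNotFound(err) {
        return nil, err
    }
    return cs.CoreV1().Namespaces().Create(ctx,
        &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: name}},
        metav1.CreateOptions{})
}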
STEP: Creating frontendProd, backend and network-policy pod deployments
Nov 20 06:54:29.666: INFO: starting to create frontend-prod deployments
Nov 20 06:54:29.729: INFO: starting to create frontend-dev deployments
Nov 20 06:54:29.798: INFO: starting to create backend deployments
Nov 20 06:54:29.874: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Nov 20 06:54:54.133: INFO: starting to apply a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.142.130 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 20 06:57:03.850: INFO: starting to clean up network policy development/backend-deny-ingress after ourselves
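The curl timeout above is the expected outcome: backend-deny-ingress selects the backend pods with an Ingress policy type and lists no ingress rules, which denies all inbound traffic. A guess at the equivalent object expressed with client-go types (the actual manifest ships with the test's templates):

package policies

import (
    networkingv1 "k8s.io/api/networking/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// backendDenyIngress selects the backend pods and declares the Ingress
// policy type with no ingress rules, so all inbound traffic is denied.
var backendDenyIngress = &networkingv1.NetworkPolicy{
    ObjectMeta: metav1.ObjectMeta{Name: "backend-deny-ingress", Namespace: "development"},
    Spec: networkingv1.NetworkPolicySpec{
        PodSelector: metav1.LabelSelector{
            MatchLabels: map[string]string{"app": "webapp", "role": "backend"},
        },
        PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
    },
}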
STEP: Applying a network policy to deny egress access in development namespace
Nov 20 06:57:04.143: INFO: starting to apply a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.142.130 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.142.130 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 20 07:01:25.560: INFO: starting to clean up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Nov 20 07:01:25.797: INFO: starting to apply a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.142.131 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 20 07:03:37.070: INFO: starting to clean up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in the same development namespace
Nov 20 07:03:37.362: INFO: starting to apply a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in the same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.155.132 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.142.131 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 20 07:07:59.215: INFO: starting to clean up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Nov 20 07:07:59.501: INFO: starting to apply a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.142.130 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 20 07:10:10.282: INFO: starting to clean up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods with label app: webapp, role: frontendProd within namespace with label purpose: development
Nov 20 07:10:10.518: INFO: starting to apply a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods with label app: webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.142.130 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web-windowsj6y13l to be available
Nov 20 07:12:21.817: INFO: starting to wait for deployment to become available
Nov 20 07:13:12.196: INFO: Deployment default/web-windowsj6y13l is now available, took 50.379544727s
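"Deployment ... is now available, took 50.379544727s" is a status poll on the Deployment until every desired replica reports available. A client-go sketch of such a wait (poll interval and timeout are guesses):

package e2e

import (
    "context"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
)

// waitForDeploymentAvailable polls until all desired replicas report
// available, mirroring the "waiting for deployment ..." lines above.
func waitForDeploymentAvailable(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
    return wait.PollImmediate(5*time.Second, 10*time.Minute, func() (bool, error) {
        d, err := cs.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, nil // tolerate transient errors and keep polling
        }
        return d.Spec.Replicas != nil && d.Status.AvailableReplicas == *d.Spec.Replicas, nil
    })
}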
... skipping 51 lines ...
Nov 20 07:18:31.722: INFO: Collecting boot logs for AzureMachine capz-e2e-0ix2e1-ha-md-0-cfjzz

Nov 20 07:18:32.056: INFO: Collecting logs for node 10.1.0.6 in cluster capz-e2e-0ix2e1-ha in namespace capz-e2e-0ix2e1

Nov 20 07:18:58.187: INFO: Collecting boot logs for AzureMachine capz-e2e-0ix2e1-ha-md-win-qmjz7

Failed to get logs for machine capz-e2e-0ix2e1-ha-md-win-7c4ffb79b-fx6dr, cluster capz-e2e-0ix2e1/capz-e2e-0ix2e1-ha: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov 20 07:18:58.672: INFO: Collecting logs for node 10.1.0.4 in cluster capz-e2e-0ix2e1-ha in namespace capz-e2e-0ix2e1

Nov 20 07:19:36.509: INFO: Collecting boot logs for AzureMachine capz-e2e-0ix2e1-ha-md-win-6v6r9

Failed to get logs for machine capz-e2e-0ix2e1-ha-md-win-7c4ffb79b-xm7bq, cluster capz-e2e-0ix2e1/capz-e2e-0ix2e1-ha: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
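Both "Failed to get logs for machine ..." entries report remote diagnostic commands exiting non-zero; "Process exited with status 1" is the error string golang.org/x/crypto/ssh returns when a remote command exits with status 1 (the Windows nodes here apparently lack the Docker event-log source the command queries). A sketch of one such collection step under that assumption (connection setup omitted):

package collect

import (
    "fmt"
    "os"

    "golang.org/x/crypto/ssh"
)

// runRemote executes one diagnostic command on a node over SSH; a non-zero
// exit surfaces as *ssh.ExitError ("Process exited with status 1").
func runRemote(client *ssh.Client, command string) error {
    session, err := client.NewSession()
    if err != nil {
        return err
    }
    defer session.Close()
    session.Stdout = os.Stdout
    if err := session.Run(command); err != nil {
        return fmt.Errorf("running command %q: %w", command, err)
    }
    return nil
}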
STEP: Dumping workload cluster capz-e2e-0ix2e1/capz-e2e-0ix2e1-ha kube-system pod logs
STEP: Fetching kube-system pod logs took 508.988158ms
STEP: Creating log watcher for controller kube-system/calico-node-2nzlh, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-2jhlj, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-prvc6, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-4995q, container kube-proxy
... skipping 22 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-9cjxv, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-dswjp, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-29g77, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-c5lvd, container calico-node
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-0ix2e1-ha-control-plane-hcb5v, container etcd
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-0ix2e1-ha-control-plane-b65nv, container kube-scheduler
STEP: Got error while iterating over activity logs for resource group capz-e2e-0ix2e1-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000694858s
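"Fetching activity logs took 30.000694858s" immediately after a "context deadline exceeded" error suggests the activity-log dump runs under a roughly 30-second context timeout, with the Azure pager aborted when the deadline fires mid-pagination. A stdlib sketch of that pattern (the 30s budget and the nextPage callback are assumptions inferred from the timings above):

package dump

import (
    "context"
    "fmt"
    "time"
)

// fetchActivityLogs pages through activity-log results until done or the
// 30s deadline fires, matching the ~30.000s durations in this log.
func fetchActivityLogs(parent context.Context, nextPage func(context.Context) (done bool, err error)) {
    ctx, cancel := context.WithTimeout(parent, 30*time.Second)
    defer cancel()
    for {
        done, err := nextPage(ctx)
        if err != nil {
            fmt.Printf("STEP: Got error while iterating over activity logs: %v\n", err)
            return
        }
        if done {
            return
        }
    }
}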
STEP: Dumping all the Cluster API resources in the "capz-e2e-0ix2e1" namespace
STEP: Deleting all clusters in the capz-e2e-0ix2e1 namespace
STEP: Deleting cluster capz-e2e-0ix2e1-ha
INFO: Waiting for the Cluster capz-e2e-0ix2e1/capz-e2e-0ix2e1-ha to be deleted
STEP: Waiting for cluster capz-e2e-0ix2e1-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-c5lvd, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-0ix2e1-ha-control-plane-hcb5v, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-0ix2e1-ha-control-plane-4zzl4, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-2jhlj, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-jx4vc, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-dqwsl, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-0ix2e1-ha-control-plane-4zzl4, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-z5gh5, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-0ix2e1-ha-control-plane-hcb5v, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-0ix2e1-ha-control-plane-4zzl4, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-fjh22, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-prvc6, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-qplsl, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-0ix2e1-ha-control-plane-hcb5v, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-7lpm9, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-89w65, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-29g77, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-0ix2e1-ha-control-plane-4zzl4, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-z5gh5, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vl86f, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-dswjp, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-0ix2e1-ha-control-plane-hcb5v, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-0ix2e1
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes" ran for 45m24s on Ginkgo node 3 of 3

... skipping 8 lines ...
  with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:334

INFO: "with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes" started at Sat, 20 Nov 2021 07:00:13 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-3x4mz7" for hosting the cluster
Nov 20 07:00:13.263: INFO: starting to create namespace for hosting the "capz-e2e-3x4mz7" test spec
2021/11/20 07:00:13 failed trying to get namespace (capz-e2e-3x4mz7): namespaces "capz-e2e-3x4mz7" not found
INFO: Creating namespace capz-e2e-3x4mz7
INFO: Creating event watcher for namespace "capz-e2e-3x4mz7"
Nov 20 07:00:13.297: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-3x4mz7-vmss
INFO: Creating the workload cluster with name "capz-e2e-3x4mz7-vmss" using the "machine-pool" template (Kubernetes v1.22.4, 1 control-plane machine, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 62 lines ...
STEP: waiting for job default/curl-to-elb-jobt129sfvg93c to be complete
Nov 20 07:10:43.466: INFO: waiting for job default/curl-to-elb-jobt129sfvg93c to be complete
Nov 20 07:10:53.585: INFO: job default/curl-to-elb-jobt129sfvg93c is complete, took 10.119036196s
STEP: connecting directly to the external LB service
Nov 20 07:10:53.585: INFO: starting attempts to connect directly to the external LB service
2021/11/20 07:10:53 [DEBUG] GET http://20.99.161.140
2021/11/20 07:11:23 [ERR] GET http://20.99.161.140 request failed: Get "http://20.99.161.140": dial tcp 20.99.161.140:80: i/o timeout
2021/11/20 07:11:23 [DEBUG] GET http://20.99.161.140: retrying in 1s (4 left)
Nov 20 07:11:31.943: INFO: successfully connected to the external LB service
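The [DEBUG]/[ERR] GET lines with "retrying in 1s (4 left)" match the default log output of hashicorp/go-retryablehttp, so the direct-to-LB check is presumably a retrying GET along these lines (a sketch, not the test's actual code):

package e2e

import (
    "fmt"

    retryablehttp "github.com/hashicorp/go-retryablehttp"
)

// connectToELB polls the LB address, retrying failed GETs with backoff;
// the client itself emits the [DEBUG]/[ERR] lines seen above.
func connectToELB(url string) error {
    client := retryablehttp.NewClient()
    client.RetryMax = 5 // "(4 left)" after the first failed attempt
    resp, err := client.Get(url)
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    fmt.Println("successfully connected to the external LB service")
    return nil
}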
STEP: deleting the test resources
Nov 20 07:11:31.943: INFO: starting to delete external LB service webxl9cg2-elb
Nov 20 07:11:32.019: INFO: starting to delete deployment webxl9cg2
Nov 20 07:11:32.073: INFO: starting to delete job curl-to-elb-jobt129sfvg93c
... skipping 69 lines ...
Nov 20 07:18:55.214: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-e2e-3x4mz7-vmss-mp-0

Nov 20 07:18:55.581: INFO: Collecting logs for node win-p-win000001 in cluster capz-e2e-3x4mz7-vmss in namespace capz-e2e-3x4mz7

Nov 20 07:19:10.273: INFO: Collecting boot logs for VMSS instance 1 of scale set capz-e2e-3x4mz7-vmss-mp-0

Failed to get logs for machine pool capz-e2e-3x4mz7-vmss-mp-0, cluster capz-e2e-3x4mz7/capz-e2e-3x4mz7-vmss: [[running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1], [running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1]]
Nov 20 07:19:10.668: INFO: Collecting logs for node win-p-win000000 in cluster capz-e2e-3x4mz7-vmss in namespace capz-e2e-3x4mz7

Nov 20 07:19:39.944: INFO: Collecting boot logs for VMSS instance 0 of scale set win-p-win

Nov 20 07:19:40.227: INFO: Collecting logs for node win-p-win000001 in cluster capz-e2e-3x4mz7-vmss in namespace capz-e2e-3x4mz7

Nov 20 07:20:09.024: INFO: Collecting boot logs for VMSS instance 1 of scale set win-p-win

Failed to get logs for machine pool capz-e2e-3x4mz7-vmss-mp-win, cluster capz-e2e-3x4mz7/capz-e2e-3x4mz7-vmss: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster capz-e2e-3x4mz7/capz-e2e-3x4mz7-vmss kube-system pod logs
STEP: Fetching kube-system pod logs took 616.499603ms
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-3x4mz7-vmss-control-plane-ckj2s, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-rvhfn, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-windows-xc52d, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-proxy-9v672, container kube-proxy
... skipping 10 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-cg28f, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-v2jhq, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-3x4mz7-vmss-control-plane-ckj2s, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-node-windows-xc52d, container calico-node-felix
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-lsfcq, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-windows-gnh2k, container calico-node-felix
STEP: Got error while iterating over activity logs for resource group capz-e2e-3x4mz7-vmss: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.00101166s
STEP: Dumping all the Cluster API resources in the "capz-e2e-3x4mz7" namespace
STEP: Deleting all clusters in the capz-e2e-3x4mz7 namespace
STEP: Deleting cluster capz-e2e-3x4mz7-vmss
INFO: Waiting for the Cluster capz-e2e-3x4mz7/capz-e2e-3x4mz7-vmss to be deleted
STEP: Waiting for cluster capz-e2e-3x4mz7-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-7tzkv, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-cg28f, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-rq45c, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-v2jhq, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-3x4mz7
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes" ran for 29m43s on Ginkgo node 1 of 3

... skipping 10 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:144

INFO: "Creates a public management cluster in the same vnet" started at Sat, 20 Nov 2021 06:43:49 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-254kkx" for hosting the cluster
Nov 20 06:43:49.628: INFO: starting to create namespace for hosting the "capz-e2e-254kkx" test spec
2021/11/20 06:43:49 failed trying to get namespace (capz-e2e-254kkx): namespaces "capz-e2e-254kkx" not found
INFO: Creating namespace capz-e2e-254kkx
INFO: Creating event watcher for namespace "capz-e2e-254kkx"
Nov 20 06:43:49.674: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-254kkx-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-v6v8d, container coredns
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-lfjk8, container coredns
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-254kkx-public-custom-vnet-control-plane-xnmhk, container etcd
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-254kkx-public-custom-vnet-control-plane-xnmhk, container kube-scheduler
STEP: Dumping workload cluster capz-e2e-254kkx/capz-e2e-254kkx-public-custom-vnet Azure activity log
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-254kkx-public-custom-vnet-control-plane-xnmhk, container kube-controller-manager
STEP: Got error while iterating over activity logs for resource group capz-e2e-254kkx-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000816454s
STEP: Dumping all the Cluster API resources in the "capz-e2e-254kkx" namespace
STEP: Deleting all clusters in the capz-e2e-254kkx namespace
STEP: Deleting cluster capz-e2e-254kkx-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-254kkx/capz-e2e-254kkx-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-254kkx-public-custom-vnet to be deleted
W1120 07:32:51.659707   24396 reflector.go:441] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I1120 07:33:22.958160   24396 trace.go:205] Trace[1532244171]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (20-Nov-2021 07:32:52.956) (total time: 30001ms):
Trace[1532244171]: [30.001549991s] [30.001549991s] END
E1120 07:33:22.958228   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp 20.112.63.191:6443: i/o timeout
I1120 07:33:56.070970   24396 trace.go:205] Trace[1732977062]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (20-Nov-2021 07:33:26.069) (total time: 30001ms):
Trace[1732977062]: [30.001367492s] [30.001367492s] END
E1120 07:33:56.071035   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp 20.112.63.191:6443: i/o timeout
I1120 07:34:29.483285   24396 trace.go:205] Trace[51696508]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (20-Nov-2021 07:33:59.482) (total time: 30000ms):
Trace[51696508]: [30.000669212s] [30.000669212s] END
E1120 07:34:29.483346   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp 20.112.63.191:6443: i/o timeout
I1120 07:35:07.732156   24396 trace.go:205] Trace[1471116765]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (20-Nov-2021 07:34:37.730) (total time: 30001ms):
Trace[1471116765]: [30.001288009s] [30.001288009s] END
E1120 07:35:07.732215   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp 20.112.63.191:6443: i/o timeout
E1120 07:35:32.224001   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
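These reflector.go errors come from the event watcher created for the capz-e2e-254kkx namespace: a client-go reflector keeps re-listing *v1.Event with backoff after the workload cluster's API endpoint goes away, first timing out against the stale IP and then failing DNS ("no such host") once the cloudapp record is deleted. A sketch of such a watcher (helper name hypothetical):

package watchers

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/fields"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/cache"
)

// newEventWatcher reflects v1.Event objects from one namespace into a
// local store; the underlying reflector relists on failure, producing
// the repeating "Failed to watch *v1.Event" errors once the API
// endpoint is gone.
func newEventWatcher(cs kubernetes.Interface, namespace string, stop <-chan struct{}) cache.Store {
    lw := cache.NewListWatchFromClient(cs.CoreV1().RESTClient(), "events", namespace, fields.Everything())
    store, controller := cache.NewInformer(lw, &corev1.Event{}, 0, cache.ResourceEventHandlerFuncs{})
    go controller.Run(stop)
    return store
}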
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-254kkx
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Nov 20 07:35:59.693: INFO: deleting an existing virtual network "custom-vnet"
Nov 20 07:36:10.532: INFO: deleting an existing route table "node-routetable"
Nov 20 07:36:21.119: INFO: deleting an existing network security group "node-nsg"
E1120 07:36:22.940268   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 20 07:36:31.622: INFO: deleting an existing network security group "control-plane-nsg"
Nov 20 07:36:42.090: INFO: verifying the existing resource group "capz-e2e-254kkx-public-custom-vnet" is empty
Nov 20 07:36:43.447: INFO: deleting the existing resource group "capz-e2e-254kkx-public-custom-vnet"
E1120 07:37:12.472433   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 07:37:56.607637   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "Creates a public management cluster in the same vnet" ran for 54m55s on Ginkgo node 2 of 3


• [SLOW TEST:3294.811 seconds]
... skipping 8 lines ...
  with a 1 control plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:455

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Sat, 20 Nov 2021 07:29:55 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-0c6uz1" for hosting the cluster
Nov 20 07:29:55.967: INFO: starting to create namespace for hosting the "capz-e2e-0c6uz1" test spec
2021/11/20 07:29:55 failed trying to get namespace (capz-e2e-0c6uz1): namespaces "capz-e2e-0c6uz1" not found
INFO: Creating namespace capz-e2e-0c6uz1
INFO: Creating event watcher for namespace "capz-e2e-0c6uz1"
Nov 20 07:29:55.995: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-0c6uz1-oot
INFO: Creating the workload cluster with name "capz-e2e-0c6uz1-oot" using the "external-cloud-provider" template (Kubernetes v1.22.4, 1 control-plane machine, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 98 lines ...
STEP: Fetching activity logs took 1.035759351s
STEP: Dumping all the Cluster API resources in the "capz-e2e-0c6uz1" namespace
STEP: Deleting all clusters in the capz-e2e-0c6uz1 namespace
STEP: Deleting cluster capz-e2e-0c6uz1-oot
INFO: Waiting for the Cluster capz-e2e-0c6uz1/capz-e2e-0c6uz1-oot to be deleted
STEP: Waiting for cluster capz-e2e-0c6uz1-oot to be deleted
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-6hhw8, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-0c6uz1-oot-control-plane-9vx9k, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-zmtw2, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-ds558, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-lgr8w, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-frjm2, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-cdbm6, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-2vdds, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-9s995, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-wx5mb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-controller-manager, container cloud-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-0c6uz1-oot-control-plane-9vx9k, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-0c6uz1-oot-control-plane-9vx9k, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-hlfpw, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-0c6uz1-oot-control-plane-9vx9k, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-q446l, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-x89w7, container calico-kube-controllers: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-0c6uz1
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 17m5s on Ginkgo node 1 of 3

... skipping 10 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:490

INFO: "with a single control plane node and 1 node" started at Sat, 20 Nov 2021 07:38:44 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-i5loxy" for hosting the cluster
Nov 20 07:38:44.444: INFO: starting to create namespace for hosting the "capz-e2e-i5loxy" test spec
2021/11/20 07:38:44 failed trying to get namespace (capz-e2e-i5loxy): namespaces "capz-e2e-i5loxy" not found
INFO: Creating namespace capz-e2e-i5loxy
INFO: Creating event watcher for namespace "capz-e2e-i5loxy"
Nov 20 07:38:44.471: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-i5loxy-aks
INFO: Creating the workload cluster with name "capz-e2e-i5loxy-aks" using the "aks-multi-tenancy" template (Kubernetes v1.19.13, 1 control-plane machine, 1 worker machine)
INFO: Getting the cluster template yaml
... skipping 7 lines ...
machinepool.cluster.x-k8s.io/agentpool1 created
azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1120 07:38:52.594914   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 07:39:51.244517   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 07:40:44.827558   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 07:41:34.207113   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 07:42:15.482722   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 07:43:06.863741   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
Nov 20 07:43:15.608: INFO: Waiting for the first control plane machine managed by capz-e2e-i5loxy/capz-e2e-i5loxy-aks to be provisioned
STEP: Waiting for at least one control plane node to exist
INFO: Waiting for control plane to be ready
Nov 20 07:43:35.655: INFO: Waiting for the first control plane machine managed by capz-e2e-i5loxy/capz-e2e-i5loxy-aks to be provisioned
STEP: Waiting for all control plane nodes to exist
... skipping 13 lines ...
STEP: time sync OK for host aks-agentpool1-96705930-vmss000000
STEP: time sync OK for host aks-agentpool1-96705930-vmss000000
STEP: Dumping logs from the "capz-e2e-i5loxy-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-i5loxy/capz-e2e-i5loxy-aks logs
Nov 20 07:43:42.932: INFO: Collecting logs for node aks-agentpool1-96705930-vmss000000 in cluster capz-e2e-i5loxy-aks in namespace capz-e2e-i5loxy

E1120 07:43:53.453813   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 07:44:50.343816   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 07:45:33.363722   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 20 07:45:52.377: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool0, cluster capz-e2e-i5loxy/capz-e2e-i5loxy-aks: [dialing public load balancer at capz-e2e-i5loxy-aks-d5bbda2a.hcp.westus2.azmk8s.io: dial tcp 51.143.49.59:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
Nov 20 07:45:52.963: INFO: Collecting logs for node aks-agentpool1-96705930-vmss000000 in cluster capz-e2e-i5loxy-aks in namespace capz-e2e-i5loxy

E1120 07:46:24.149543   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 07:47:16.050478   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 20 07:48:03.452: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool1, cluster capz-e2e-i5loxy/capz-e2e-i5loxy-aks: [dialing public load balancer at capz-e2e-i5loxy-aks-d5bbda2a.hcp.westus2.azmk8s.io: dial tcp 51.143.49.59:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
STEP: Dumping workload cluster capz-e2e-i5loxy/capz-e2e-i5loxy-aks kube-system pod logs
STEP: Fetching kube-system pod logs took 643.111613ms
STEP: Dumping workload cluster capz-e2e-i5loxy/capz-e2e-i5loxy-aks Azure activity log
STEP: Creating log watcher for controller kube-system/calico-typha-deployment-76cb9744d8-85d97, container calico-typha
STEP: Creating log watcher for controller kube-system/metrics-server-569f6547dd-fw8xf, container metrics-server
STEP: Creating log watcher for controller kube-system/calico-typha-horizontal-autoscaler-599c7bb664-9srrd, container autoscaler
... skipping 8 lines ...
STEP: Fetching activity logs took 482.997699ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-i5loxy" namespace
STEP: Deleting all clusters in the capz-e2e-i5loxy namespace
STEP: Deleting cluster capz-e2e-i5loxy-aks
INFO: Waiting for the Cluster capz-e2e-i5loxy/capz-e2e-i5loxy-aks to be deleted
STEP: Waiting for cluster capz-e2e-i5loxy-aks to be deleted
E1120 07:48:13.587132   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 07:49:04.057039   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 07:49:58.626974   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 07:50:30.241773   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 07:51:14.720837   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 07:52:05.881181   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-i5loxy
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1120 07:52:52.554927   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "with a single control plane node and 1 node" ran for 14m42s on Ginkgo node 2 of 3


• [SLOW TEST:882.050 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:413

INFO: "with a single control plane node and 1 node" started at Sat, 20 Nov 2021 07:29:13 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-4nv6dv" for hosting the cluster
Nov 20 07:29:13.816: INFO: starting to create namespace for hosting the "capz-e2e-4nv6dv" test spec
2021/11/20 07:29:13 failed trying to get namespace (capz-e2e-4nv6dv): namespaces "capz-e2e-4nv6dv" not found
INFO: Creating namespace capz-e2e-4nv6dv
INFO: Creating event watcher for namespace "capz-e2e-4nv6dv"
Nov 20 07:29:13.853: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-4nv6dv-gpu
INFO: Creating the workload cluster with name "capz-e2e-4nv6dv-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.4, 1 control-plane machine, 1 worker machine)
INFO: Getting the cluster template yaml
... skipping 53 lines ...
STEP: Fetching activity logs took 1.194814616s
STEP: Dumping all the Cluster API resources in the "capz-e2e-4nv6dv" namespace
STEP: Deleting all clusters in the capz-e2e-4nv6dv namespace
STEP: Deleting cluster capz-e2e-4nv6dv-gpu
INFO: Waiting for the Cluster capz-e2e-4nv6dv/capz-e2e-4nv6dv-gpu to be deleted
STEP: Waiting for cluster capz-e2e-4nv6dv-gpu to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-kf7t5, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-z8s9q, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-4nv6dv
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and 1 node" ran for 34m51s on Ginkgo node 3 of 3

... skipping 59 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:532

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Sat, 20 Nov 2021 07:47:00 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-76bmrw" for hosting the cluster
Nov 20 07:47:00.562: INFO: starting to create namespace for hosting the "capz-e2e-76bmrw" test spec
2021/11/20 07:47:00 failed trying to get namespace (capz-e2e-76bmrw): namespaces "capz-e2e-76bmrw" not found
INFO: Creating namespace capz-e2e-76bmrw
INFO: Creating event watcher for namespace "capz-e2e-76bmrw"
Nov 20 07:47:00.591: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-76bmrw-win-ha
INFO: Creating the workload cluster with name "capz-e2e-76bmrw-win-ha" using the "windows" template (Kubernetes v1.22.4, 3 control-plane machines, 1 worker machine)
INFO: Getting the cluster template yaml
... skipping 55 lines ...
STEP: waiting for job default/curl-to-elb-job4b219iup66j to be complete
Nov 20 07:56:24.194: INFO: waiting for job default/curl-to-elb-job4b219iup66j to be complete
Nov 20 07:56:34.305: INFO: job default/curl-to-elb-job4b219iup66j is complete, took 10.110600167s
STEP: connecting directly to the external LB service
Nov 20 07:56:34.305: INFO: starting attempts to connect directly to the external LB service
2021/11/20 07:56:34 [DEBUG] GET http://20.99.141.40
2021/11/20 07:57:04 [ERR] GET http://20.99.141.40 request failed: Get "http://20.99.141.40": dial tcp 20.99.141.40:80: i/o timeout
2021/11/20 07:57:04 [DEBUG] GET http://20.99.141.40: retrying in 1s (4 left)
Nov 20 07:57:05.418: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 20 07:57:05.419: INFO: starting to delete external LB service webden11t-elb
Nov 20 07:57:05.513: INFO: starting to delete deployment webden11t
Nov 20 07:57:05.579: INFO: starting to delete job curl-to-elb-job4b219iup66j
... skipping 85 lines ...
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-f8cw8, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-cd5rj, container kube-flannel
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-76bmrw-win-ha-control-plane-49brq, container etcd
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-n85sl, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-windows-amd64-md69v, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-proxy-z6hr2, container kube-proxy
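Each "Creating log watcher" step above follows the usual client-go pattern of streaming one container's logs. A minimal sketch, assuming client-go (names are illustrative); when the connection to the workload cluster drops mid-stream, reads from the returned stream fail with errors like the "http2: client connection lost" messages seen later during teardown:

package main

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// streamContainerLogs follows one container's logs until the stream ends or
// the underlying connection is lost.
func streamContainerLogs(ctx context.Context, cs kubernetes.Interface, ns, pod, container string) error {
	req := cs.CoreV1().Pods(ns).GetLogs(pod, &corev1.PodLogOptions{
		Container: container,
		Follow:    true,
	})
	stream, err := req.Stream(ctx)
	if err != nil {
		return err
	}
	defer stream.Close()
	_, err = io.Copy(os.Stdout, stream) // surfaces "http2: client connection lost" if the conn drops
	return err
}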
STEP: Got error while iterating over activity logs for resource group capz-e2e-76bmrw-win-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000578455s
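The listNextResults failure above comes from paging Azure activity logs under a bounded context, so a slow "next page" call surfaces as "context deadline exceeded" and the fetch stops at the 30s mark. A rough sketch of that pagination loop, assuming the legacy (track 1) Azure SDK insights client; the filter, timeout, and names are illustrative, not the suite's actual values:

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/Azure/azure-sdk-for-go/services/preview/monitor/mgmt/2019-06-01/insights"
)

func fetchActivityLogs(subscriptionID, filter string) error {
	client := insights.NewActivityLogsClient(subscriptionID)
	// client.Authorizer = ... (credential setup omitted)

	// Bound the whole fetch; a slow next-page request then fails with
	// "context deadline exceeded", as in the log above.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	page, err := client.List(ctx, filter, "")
	if err != nil {
		return err
	}
	for page.NotDone() {
		for _, ev := range page.Values() {
			if ev.OperationName != nil && ev.OperationName.LocalizedValue != nil {
				fmt.Println(*ev.OperationName.LocalizedValue)
			}
		}
		if err := page.NextWithContext(ctx); err != nil { // listNextResults happens here
			return err
		}
	}
	return nil
}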
STEP: Dumping all the Cluster API resources in the "capz-e2e-76bmrw" namespace
STEP: Deleting all clusters in the capz-e2e-76bmrw namespace
STEP: Deleting cluster capz-e2e-76bmrw-win-ha
INFO: Waiting for the Cluster capz-e2e-76bmrw/capz-e2e-76bmrw-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-76bmrw-win-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-cd5rj, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-76bmrw-win-ha-control-plane-j5fm6, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-76bmrw-win-ha-control-plane-j5fm6, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-76bmrw-win-ha-control-plane-49brq, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-76bmrw-win-ha-control-plane-49brq, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-jzknr, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-l74z8, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-rcgh5, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-76bmrw-win-ha-control-plane-j5fm6, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-76bmrw-win-ha-control-plane-49brq, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-f8cw8, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-76bmrw-win-ha-control-plane-j5fm6, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-8rb5w, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-76bmrw-win-ha-control-plane-49brq, container etcd: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-76bmrw
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 27m49s on Ginkgo node 1 of 3

... skipping 3 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating a Windows Enabled cluster with dockershim
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:530
    With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:532
------------------------------
{"component":"entrypoint","file":"prow/entrypoint/run.go:165","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","severity":"error","time":"2021-11-20T08:35:48Z"}
++ early_exit_handler
++ '[' -n 161 ']'
++ kill -TERM 161
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 15 lines ...
  with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:579

INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Sat, 20 Nov 2021 07:53:26 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-ora8w0" for hosting the cluster
Nov 20 07:53:26.497: INFO: starting to create namespace for hosting the "capz-e2e-ora8w0" test spec
2021/11/20 07:53:26 failed trying to get namespace (capz-e2e-ora8w0):namespaces "capz-e2e-ora8w0" not found
INFO: Creating namespace capz-e2e-ora8w0
INFO: Creating event watcher for namespace "capz-e2e-ora8w0"
Nov 20 07:53:26.531: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-ora8w0-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-ora8w0-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.4, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 12 lines ...
kubeadmconfig.bootstrap.cluster.x-k8s.io/capz-e2e-ora8w0-win-vmss-mp-win created
clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-ora8w0-win-vmss-flannel created
configmap/cni-capz-e2e-ora8w0-win-vmss-flannel created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1120 07:53:31.290241   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 07:54:17.723473   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by capz-e2e-ora8w0/capz-e2e-ora8w0-win-vmss-control-plane to be provisioned
STEP: Waiting for one control plane node to exist
E1120 07:55:12.823104   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 07:55:55.464708   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane capz-e2e-ora8w0/capz-e2e-ora8w0-win-vmss-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
E1120 07:56:33.601897   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for the machine deployments to be provisioned
INFO: Waiting for the machine pools to be provisioned
STEP: Waiting for the machine pool workload nodes to exist
E1120 07:57:04.646160   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 07:57:58.022010   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Waiting for the machine pool workload nodes to exist
E1120 07:58:55.948818   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 11 lines (the same reflector error, repeated roughly once a minute through 08:06:48) ...
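Note that these E1120 reflector errors reference namespace capz-e2e-254kkx, not this spec's capz-e2e-ora8w0, so they appear to come from an event watcher left running for an earlier spec whose API server DNS name stopped resolving once that cluster was deleted; client-go's reflector simply keeps retrying its List call with backoff. A minimal sketch of such a watcher, assuming client-go (names are illustrative):

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// watchEvents starts an informer on Events in one namespace. The reflector
// inside retries List/Watch with backoff, logging lines like the
// "reflector.go:138 ... Failed to watch *v1.Event" errors above whenever the
// API server endpoint stops resolving.
func watchEvents(cs kubernetes.Interface, namespace string, stop <-chan struct{}) {
	lw := cache.NewListWatchFromClient(
		cs.CoreV1().RESTClient(), "events", namespace, fields.Everything())
	_, controller := cache.NewInformer(lw, &corev1.Event{}, 0,
		cache.ResourceEventHandlerFuncs{})
	go controller.Run(stop)
}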
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web95pzhv to be available
Nov 20 08:06:58.578: INFO: starting to wait for deployment to become available
Nov 20 08:07:18.767: INFO: Deployment default/web95pzhv is now available, took 20.189558495s
STEP: creating an internal Load Balancer service
Nov 20 08:07:18.767: INFO: starting to create an internal Load Balancer service
STEP: waiting for service default/web95pzhv-ilb to be available
Nov 20 08:07:18.854: INFO: waiting for service default/web95pzhv-ilb to be available
E1120 08:07:43.086571   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 20 08:08:09.211: INFO: service default/web95pzhv-ilb is available, took 50.357268483s
STEP: connecting to the internal LB service from a curl pod
Nov 20 08:08:09.270: INFO: starting to create a curl to ilb job
STEP: waiting for job default/curl-to-ilb-joby3x8n to be complete
Nov 20 08:08:09.342: INFO: waiting for job default/curl-to-ilb-joby3x8n to be complete
Nov 20 08:08:19.469: INFO: job default/curl-to-ilb-joby3x8n is complete, took 10.126382694s
STEP: deleting the ilb test resources
Nov 20 08:08:19.469: INFO: deleting the ilb service: web95pzhv-ilb
Nov 20 08:08:19.554: INFO: deleting the ilb job: curl-to-ilb-joby3x8n
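The curl-to-ilb check above is a one-shot Job that curls the internal LB address from inside the cluster and is then waited on until complete. A minimal sketch of such a Job object, assuming client-go types (the image, names, and five-second timeout are illustrative, not the suite's actual values):

package main

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newCurlJob builds a one-shot Job whose successful completion means the
// target URL was reachable from inside the cluster.
func newCurlJob(name, target string) *batchv1.Job {
	return &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: "default"},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:  "curl",
						Image: "curlimages/curl",
						Args:  []string{"--fail", "--max-time", "5", target},
					}},
				},
			},
		},
	}
}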
STEP: creating an external Load Balancer service
Nov 20 08:08:19.613: INFO: starting to create an external Load Balancer service
STEP: waiting for service default/web95pzhv-elb to be available
Nov 20 08:08:19.684: INFO: waiting for service default/web95pzhv-elb to be available
E1120 08:08:34.134373   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 08:09:14.286976   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 20 08:09:40.220: INFO: service default/web95pzhv-elb is available, took 1m20.536355768s
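"Waiting for service ... to be available" amounts to polling the Service until Azure populates a load-balancer ingress address, which is why the external LB takes over a minute while the internal LB is faster. A minimal sketch, assuming client-go's wait helpers (names and intervals are illustrative):

package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForLoadBalancerIP blocks until the Service reports at least one
// load-balancer ingress entry, polling the way the "waiting for service ...
// to be available" steps above do.
func waitForLoadBalancerIP(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(5*time.Second, 5*time.Minute, func() (bool, error) {
		svc, err := cs.CoreV1().Services(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat errors as transient and keep polling
		}
		return len(svc.Status.LoadBalancer.Ingress) > 0, nil
	})
}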
STEP: connecting to the external LB service from a curl pod
Nov 20 08:09:40.278: INFO: starting to create curl-to-elb job
STEP: waiting for job default/curl-to-elb-jobx85j75jpimo to be complete
Nov 20 08:09:40.341: INFO: waiting for job default/curl-to-elb-jobx85j75jpimo to be complete
Nov 20 08:09:50.457: INFO: job default/curl-to-elb-jobx85j75jpimo is complete, took 10.116614471s
... skipping 6 lines ...
Nov 20 08:09:50.649: INFO: starting to delete deployment web95pzhv
Nov 20 08:09:50.710: INFO: starting to delete job curl-to-elb-jobx85j75jpimo
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web-windows0yv9es to be available
Nov 20 08:09:50.936: INFO: starting to wait for deployment to become available
E1120 08:09:54.125927   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 08:10:24.320231   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 20 08:11:01.444: INFO: Deployment default/web-windows0yv9es is now available, took 1m10.507971381s
STEP: creating an internal Load Balancer service
Nov 20 08:11:01.444: INFO: starting to create an internal Load Balancer service
STEP: waiting for service default/web-windows0yv9es-ilb to be available
Nov 20 08:11:01.518: INFO: waiting for service default/web-windows0yv9es-ilb to be available
E1120 08:11:21.211741   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 20 08:11:51.871: INFO: service default/web-windows0yv9es-ilb is available, took 50.352690004s
STEP: connecting to the internal LB service from a curl pod
Nov 20 08:11:51.928: INFO: starting to create a curl to ilb job
STEP: waiting for job default/curl-to-ilb-job8bste to be complete
Nov 20 08:11:51.990: INFO: waiting for job default/curl-to-ilb-job8bste to be complete
Nov 20 08:12:02.106: INFO: job default/curl-to-ilb-job8bste is complete, took 10.115874215s
STEP: deleting the ilb test resources
Nov 20 08:12:02.106: INFO: deleting the ilb service: web-windows0yv9es-ilb
Nov 20 08:12:02.183: INFO: deleting the ilb job: curl-to-ilb-job8bste
STEP: creating an external Load Balancer service
Nov 20 08:12:02.243: INFO: starting to create an external Load Balancer service
STEP: waiting for service default/web-windows0yv9es-elb to be available
Nov 20 08:12:02.313: INFO: waiting for service default/web-windows0yv9es-elb to be available
E1120 08:12:02.662865   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 08:12:41.718656   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 08:13:19.731907   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 20 08:13:22.849: INFO: service default/web-windows0yv9es-elb is available, took 1m20.536290153s
STEP: connecting to the external LB service from a curl pod
Nov 20 08:13:22.907: INFO: starting to create curl-to-elb job
STEP: waiting for job default/curl-to-elb-job2urqindgqfj to be complete
Nov 20 08:13:22.971: INFO: waiting for job default/curl-to-elb-job2urqindgqfj to be complete
Nov 20 08:13:33.090: INFO: job default/curl-to-elb-job2urqindgqfj is complete, took 10.119316443s
... skipping 10 lines ...
Nov 20 08:13:33.446: INFO: INFO: Collecting logs for node capz-e2e-ora8w0-win-vmss-control-plane-5gncm in cluster capz-e2e-ora8w0-win-vmss in namespace capz-e2e-ora8w0

Nov 20 08:13:46.505: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-ora8w0-win-vmss-control-plane-5gncm

Nov 20 08:13:47.510: INFO: INFO: Collecting logs for node capz-e2e-ora8w0-win-vmss-mp-0000000 in cluster capz-e2e-ora8w0-win-vmss in namespace capz-e2e-ora8w0

E1120 08:14:01.607323   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 20 08:14:30.886: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-e2e-ora8w0-win-vmss-mp-0

Nov 20 08:14:31.413: INFO: INFO: Collecting logs for node win-p-win000000 in cluster capz-e2e-ora8w0-win-vmss in namespace capz-e2e-ora8w0

E1120 08:15:00.168835   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 20 08:15:06.637: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set win-p-win

STEP: Dumping workload cluster capz-e2e-ora8w0/capz-e2e-ora8w0-win-vmss kube-system pod logs
STEP: Fetching kube-system pod logs took 640.9113ms
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-f7b6r, container coredns
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-ora8w0-win-vmss-control-plane-5gncm, container kube-apiserver
... skipping 11 lines ...
STEP: Fetching activity logs took 993.724375ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-ora8w0" namespace
STEP: Deleting all clusters in the capz-e2e-ora8w0 namespace
STEP: Deleting cluster capz-e2e-ora8w0-win-vmss
INFO: Waiting for the Cluster capz-e2e-ora8w0/capz-e2e-ora8w0-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-ora8w0-win-vmss to be deleted
E1120 08:15:45.500019   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 16 lines (the same reflector error, repeated roughly once a minute through 08:27:20) ...
STEP: Got error while streaming logs for pod kube-system/kube-proxy-g7ss7, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-ora8w0-win-vmss-control-plane-5gncm, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-htc5b, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-ora8w0-win-vmss-control-plane-5gncm, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-n25qb, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-f7b6r, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-ora8w0-win-vmss-control-plane-5gncm, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-ora8w0-win-vmss-control-plane-5gncm, container etcd: http2: client connection lost
E1120 08:28:17.532587   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 7 lines (the same reflector error, repeated through 08:33:46) ...
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-ora8w0
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1120 08:34:18.696831   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 08:34:50.733685   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1120 08:35:37.397798   24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-254kkx/events?resourceVersion=10390": dial tcp: lookup capz-e2e-254kkx-public-custom-vnet-eff6f75d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 42m46s on Ginkgo node 2 of 3


• [SLOW TEST:2565.701 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating a Windows enabled VMSS cluster with dockershim
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:578
    with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:579
------------------------------
STEP: Tearing down the management cluster
INFO: Deleting the kind cluster "capz-e2e" failed. You may need to remove this by hand.



Summarizing 1 Failure:

[Fail] Workload cluster creation Creating a GPU-enabled cluster [It] with a single control plane node and 1 node 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:76
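The assertion at azure_gpu.go:76 has the boolean-Eventually shape that Gomega reports as "Expected <bool>: false to be true" when the poll never succeeds before its deadline. A minimal sketch of that pattern, not the actual azure_gpu.go code (the helper and job name here are hypothetical):

package e2e

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// gpuJobSucceeded is a hypothetical stand-in for the suite's GPU check: it
// reports whether a CUDA smoke-test Job scheduled on the GPU node completed.
func gpuJobSucceeded(ctx context.Context, cs kubernetes.Interface, ns, name string) bool {
	job, err := cs.BatchV1().Jobs(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false
	}
	return job.Status.Succeeded > 0
}

func waitForGPUWorkload(ctx context.Context, cs kubernetes.Interface) {
	// 20 minutes matches the 1200s timeout this spec reported.
	Eventually(func() bool {
		return gpuJobSucceeded(ctx, cs, "default", "cuda-vector-add")
	}, 20*time.Minute, 10*time.Second).Should(BeTrue())
}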

Ran 9 of 24 Specs in 6917.018 seconds
FAIL! -- 8 Passed | 1 Failed | 0 Pending | 15 Skipped


Ginkgo ran 1 suite in 1h56m51.574332008s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc
make[1]: *** [Makefile:176: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:184: test-e2e] Error 2
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.
All sensitive variables are redacted
{"component":"entrypoint","file":"prow/entrypoint/run.go:252","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process gracefully exited before 15m0s grace period","severity":"error","time":"2021-11-20T08:37:54Z"}