Result: FAILURE
Tests: 1 failed / 8 succeeded
Started: 2021-11-30 18:39
Elapsed: 1h46m
Revision: main

Test Failures


capz-e2e Workload cluster creation Creating a GPU-enabled cluster with a single control plane node and 1 node (35m29s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\sGPU\-enabled\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
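Note: the --ginkgo.focus value is a regular expression matched against the full spec name, which is why the spaces and hyphens in the name are escaped as \s and \-; the hack/e2e.go wrapper passes --test_args through to the test run.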
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:413
Timed out after 1200.000s.
Expected
    <bool>: false
to be true
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:76
				
stdout/stderr from junit.e2e_suite.3.xml
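This "Timed out after 1200.000s ... Expected <bool>: false to be true" shape is how Gomega reports an Eventually assertion whose boolean condition never became true within its timeout. A minimal sketch of the pattern, with gpuNodeIsReady as a hypothetical stand-in for the real condition polled at azure_gpu.go:76:

    package e2e_test

    import (
        "testing"
        "time"

        . "github.com/onsi/gomega"
    )

    // gpuNodeIsReady is a hypothetical stand-in for the condition the GPU spec
    // polls (e.g. "the GPU workload completed"); the real check lives in
    // test/e2e/azure_gpu.go.
    func gpuNodeIsReady() bool { return false }

    func TestGPUNodeEventuallyReady(t *testing.T) {
        g := NewWithT(t)
        // Poll every 10s for up to 20 minutes (1200s). If the function never
        // returns true, Gomega fails with exactly the message seen above:
        //   Timed out after 1200.000s.
        //   Expected
        //       <bool>: false
        //   to be true
        g.Eventually(gpuNodeIsReady, 20*time.Minute, 10*time.Second).Should(BeTrue())
    }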



8 Passed Tests

15 Skipped Tests

Error lines from build-log.txt

... skipping 433 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:288

INFO: "With ipv6 worker node" started at Tue, 30 Nov 2021 18:48:11 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-s6f3dr" for hosting the cluster
Nov 30 18:48:11.874: INFO: starting to create namespace for hosting the "capz-e2e-s6f3dr" test spec
2021/11/30 18:48:11 failed trying to get namespace (capz-e2e-s6f3dr):namespaces "capz-e2e-s6f3dr" not found
INFO: Creating namespace capz-e2e-s6f3dr
INFO: Creating event watcher for namespace "capz-e2e-s6f3dr"
Nov 30 18:48:11.915: INFO: Creating cluster identity secret
INFO: Cluster name is capz-e2e-s6f3dr-ipv6
INFO: Creating the workload cluster with name "capz-e2e-s6f3dr-ipv6" using the "ipv6" template (Kubernetes v1.22.4, 3 control-plane machines, 1 worker machine)
INFO: Getting the cluster template yaml
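The "failed trying to get namespace (...) not found" line followed immediately by "Creating namespace ..." above is the expected get-then-create flow, not an error. A minimal client-go sketch of that flow, assuming a hypothetical ensureNamespace helper (not the suite's actual code):

    package e2e

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // ensureNamespace returns the namespace, creating it if the initial Get
    // fails with NotFound (the "not found" line logged above is that probe).
    func ensureNamespace(ctx context.Context, cs kubernetes.Interface, name string) (*corev1.Namespace, error) {
        ns, err := cs.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
        if err == nil {
            return ns, nil
        }
        if !apierrors.IsNotFound(err) {
            return nil, err
        }
        fmt.Printf("namespace %s does not exist, creating...\n", name)
        return cs.CoreV1().Namespaces().Create(ctx,
            &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: name}},
            metav1.CreateOptions{})
    }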
... skipping 93 lines ...
STEP: Fetching activity logs took 559.971602ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-s6f3dr" namespace
STEP: Deleting all clusters in the capz-e2e-s6f3dr namespace
STEP: Deleting cluster capz-e2e-s6f3dr-ipv6
INFO: Waiting for the Cluster capz-e2e-s6f3dr/capz-e2e-s6f3dr-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-s6f3dr-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-6zjkk, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-s6f3dr-ipv6-control-plane-9z6z5, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-tjzng, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-qvr9f, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-s6f3dr-ipv6-control-plane-9z6z5, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-ptfr2, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-zfmwl, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-s6f3dr-ipv6-control-plane-92qxc, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-6s8r8, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-n5pd9, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-8btrc, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-s6f3dr-ipv6-control-plane-9z6z5, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-s6f3dr-ipv6-control-plane-pqmmf, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-whzds, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-6xv5d, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-s6f3dr-ipv6-control-plane-92qxc, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-s6f3dr-ipv6-control-plane-pqmmf, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-s6f3dr-ipv6-control-plane-9z6z5, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-s6f3dr-ipv6-control-plane-92qxc, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-x7drh, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-s6f3dr-ipv6-control-plane-pqmmf, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-s6f3dr-ipv6-control-plane-pqmmf, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-s6f3dr-ipv6-control-plane-92qxc, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-s6f3dr
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 17m29s on Ginkgo node 3 of 3

... skipping 10 lines ...
  with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:334

INFO: "with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes" started at Tue, 30 Nov 2021 19:05:40 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-vex8hl" for hosting the cluster
Nov 30 19:05:40.677: INFO: starting to create namespace for hosting the "capz-e2e-vex8hl" test spec
2021/11/30 19:05:40 failed trying to get namespace (capz-e2e-vex8hl):namespaces "capz-e2e-vex8hl" not found
INFO: Creating namespace capz-e2e-vex8hl
INFO: Creating event watcher for namespace "capz-e2e-vex8hl"
Nov 30 19:05:40.708: INFO: Creating cluster identity secret
INFO: Cluster name is capz-e2e-vex8hl-vmss
INFO: Creating the workload cluster with name "capz-e2e-vex8hl-vmss" using the "machine-pool" template (Kubernetes v1.22.4, 1 control-plane machine, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 148 lines ...
Nov 30 19:24:55.746: INFO: Collecting boot logs for VMSS instance 0 of scale set win-p-win

Nov 30 19:24:56.157: INFO: Collecting logs for node win-p-win000001 in cluster capz-e2e-vex8hl-vmss in namespace capz-e2e-vex8hl

Nov 30 19:25:33.404: INFO: Collecting boot logs for VMSS instance 1 of scale set win-p-win

Failed to get logs for machine pool capz-e2e-vex8hl-vmss-mp-win, cluster capz-e2e-vex8hl/capz-e2e-vex8hl-vmss: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster capz-e2e-vex8hl/capz-e2e-vex8hl-vmss kube-system pod logs
STEP: Fetching kube-system pod logs took 983.732173ms
STEP: Dumping workload cluster capz-e2e-vex8hl/capz-e2e-vex8hl-vmss Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-vfljn, container calico-node
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-vex8hl-vmss-control-plane-g8fp8, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-pkxhd, container kube-proxy
... skipping 10 lines ...
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-wspz8, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-hl59p, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-d9swv, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-mq6xc, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-khbfv, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-windows-nnrvr, container calico-node-felix
STEP: Got error while iterating over activity logs for resource group capz-e2e-vex8hl-vmss: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000602447s
STEP: Dumping all the Cluster API resources in the "capz-e2e-vex8hl" namespace
STEP: Deleting all clusters in the capz-e2e-vex8hl namespace
STEP: Deleting cluster capz-e2e-vex8hl-vmss
INFO: Waiting for the Cluster capz-e2e-vex8hl/capz-e2e-vex8hl-vmss to be deleted
STEP: Waiting for cluster capz-e2e-vex8hl-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-vex8hl-vmss-control-plane-g8fp8, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-nnrvr, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-mq6xc, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-sfj92, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-vfljn, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-khbfv, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-2wt6p, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-2h4t8, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-vex8hl-vmss-control-plane-g8fp8, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-hl59p, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-vex8hl-vmss-control-plane-g8fp8, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-wspz8, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-d9swv, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-sfj92, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-nnrvr, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-vex8hl-vmss-control-plane-g8fp8, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-pkxhd, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-9h782, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-fq5g7, container calico-kube-controllers: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-vex8hl
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes" ran for 27m39s on Ginkgo node 3 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:205

INFO: "With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes" started at Tue, 30 Nov 2021 18:48:11 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-v17w4r" for hosting the cluster
Nov 30 18:48:11.371: INFO: starting to create namespace for hosting the "capz-e2e-v17w4r" test spec
2021/11/30 18:48:11 failed trying to get namespace (capz-e2e-v17w4r):namespaces "capz-e2e-v17w4r" not found
INFO: Creating namespace capz-e2e-v17w4r
INFO: Creating event watcher for namespace "capz-e2e-v17w4r"
Nov 30 18:48:11.406: INFO: Creating cluster identity secret
INFO: Cluster name is capz-e2e-v17w4r-ha
INFO: Creating the workload cluster with name "capz-e2e-v17w4r-ha" using the "(default)" template (Kubernetes v1.22.4, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 75 lines ...
Nov 30 18:58:27.402: INFO: starting to delete external LB service webpbazp9-elb
Nov 30 18:58:27.619: INFO: starting to delete deployment webpbazp9
Nov 30 18:58:27.727: INFO: starting to delete job curl-to-elb-job5s8ifhmhi21
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Nov 30 18:58:27.936: INFO: starting to create dev deployment namespace
2021/11/30 18:58:28 failed trying to get namespace (development):namespaces "development" not found
2021/11/30 18:58:28 namespace development does not exist, creating...
STEP: Creating production namespace
Nov 30 18:58:28.164: INFO: starting to create prod deployment namespace
2021/11/30 18:58:28 failed trying to get namespace (production):namespaces "production" not found
2021/11/30 18:58:28 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
Nov 30 18:58:28.479: INFO: starting to create frontend-prod deployments
Nov 30 18:58:28.670: INFO: starting to create frontend-dev deployments
Nov 30 18:58:28.925: INFO: starting to create backend deployments
Nov 30 18:58:29.046: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Nov 30 18:58:55.844: INFO: starting to apply a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.54.2 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 30 19:01:06.932: INFO: starting to clean up network policy development/backend-deny-ingress after ourselves
STEP: Applying a network policy to deny egress access in development namespace
Nov 30 19:01:07.311: INFO: starting to apply a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.54.2 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.54.2 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 30 19:05:29.618: INFO: starting to clean up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Nov 30 19:05:30.003: INFO: starting to apply a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.54.3 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 30 19:07:42.197: INFO: starting to clean up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
Nov 30 19:07:42.573: INFO: starting to apply a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.168.131 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.54.3 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 30 19:12:06.388: INFO: starting to clean up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Nov 30 19:12:06.768: INFO: starting to apply a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.54.2 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 30 19:14:19.513: INFO: starting to clean up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
Nov 30 19:14:19.891: INFO: starting to apply a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.54.2 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
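The deny steps above rely on standard Kubernetes NetworkPolicy semantics: a policy that selects the backend pods and lists Ingress under policyTypes with no ingress rules denies all ingress to them, which is why the curl probes time out until the policy is removed. A hedged Go sketch of a policy shaped like development/backend-deny-ingress (the suite's actual manifest may differ):

    package e2e

    import (
        networkingv1 "k8s.io/api/networking/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // backendDenyIngress sketches a deny-all-ingress policy for the
    // app=webapp,role=backend pods: PolicyTypes lists Ingress but no ingress
    // rules are given, so every inbound connection to those pods is dropped.
    var backendDenyIngress = &networkingv1.NetworkPolicy{
        ObjectMeta: metav1.ObjectMeta{
            Name:      "backend-deny-ingress",
            Namespace: "development",
        },
        Spec: networkingv1.NetworkPolicySpec{
            PodSelector: metav1.LabelSelector{
                MatchLabels: map[string]string{"app": "webapp", "role": "backend"},
            },
            PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
        },
    }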
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web-windowsluvv2k to be available
Nov 30 19:16:32.577: INFO: starting to wait for deployment to become available
Nov 30 19:17:33.342: INFO: Deployment default/web-windowsluvv2k is now available, took 1m0.765530273s
... skipping 51 lines ...
Nov 30 19:23:12.067: INFO: Collecting boot logs for AzureMachine capz-e2e-v17w4r-ha-md-0-544qs

Nov 30 19:23:12.490: INFO: Collecting logs for node 10.1.0.7 in cluster capz-e2e-v17w4r-ha in namespace capz-e2e-v17w4r

Nov 30 19:23:39.657: INFO: Collecting boot logs for AzureMachine capz-e2e-v17w4r-ha-md-win-cvw7g

Failed to get logs for machine capz-e2e-v17w4r-ha-md-win-7d5d7f44db-pnvkd, cluster capz-e2e-v17w4r/capz-e2e-v17w4r-ha: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov 30 19:23:40.109: INFO: Collecting logs for node 10.1.0.5 in cluster capz-e2e-v17w4r-ha in namespace capz-e2e-v17w4r

Nov 30 19:24:20.713: INFO: Collecting boot logs for AzureMachine capz-e2e-v17w4r-ha-md-win-89txt

Failed to get logs for machine capz-e2e-v17w4r-ha-md-win-7d5d7f44db-x7489, cluster capz-e2e-v17w4r/capz-e2e-v17w4r-ha: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster capz-e2e-v17w4r/capz-e2e-v17w4r-ha kube-system pod logs
STEP: Fetching kube-system pod logs took 845.637425ms
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-v17w4r-ha-control-plane-bkj5s, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-v17w4r-ha-control-plane-qzzg5, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-2nqmv, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-8cq2p, container coredns
... skipping 22 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-t4lsg, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-windows-ffrpc, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-node-windows-ffrpc, container calico-node-felix
STEP: Creating log watcher for controller kube-system/calico-node-windows-nc2w7, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-node-windows-nc2w7, container calico-node-felix
STEP: Creating log watcher for controller kube-system/calico-node-xnznj, container calico-node
STEP: Got error while iterating over activity logs for resource group capz-e2e-v17w4r-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000795749s
STEP: Dumping all the Cluster API resources in the "capz-e2e-v17w4r" namespace
STEP: Deleting all clusters in the capz-e2e-v17w4r namespace
STEP: Deleting cluster capz-e2e-v17w4r-ha
INFO: Waiting for the Cluster capz-e2e-v17w4r/capz-e2e-v17w4r-ha to be deleted
STEP: Waiting for cluster capz-e2e-v17w4r-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-nc2w7, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-ffrpc, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-xnznj, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-8cq2p, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-6btxl, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-g4d2f, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-r56fx, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-v17w4r-ha-control-plane-qzzg5, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-x8dhn, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-t4lsg, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-v17w4r-ha-control-plane-qzzg5, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-r9b6d, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-962lg, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-v17w4r-ha-control-plane-qzzg5, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-zm2gh, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-ffrpc, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-2nqmv, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-nc2w7, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-v17w4r-ha-control-plane-qzzg5, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-v17w4r
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes" ran for 47m6s on Ginkgo node 2 of 3

... skipping 8 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:144

INFO: "Creates a public management cluster in the same vnet" started at Tue, 30 Nov 2021 18:48:11 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-sudgpo" for hosting the cluster
Nov 30 18:48:11.000: INFO: starting to create namespace for hosting the "capz-e2e-sudgpo" test spec
2021/11/30 18:48:11 failed trying to get namespace (capz-e2e-sudgpo):namespaces "capz-e2e-sudgpo" not found
INFO: Creating namespace capz-e2e-sudgpo
INFO: Creating event watcher for namespace "capz-e2e-sudgpo"
Nov 30 18:48:11.036: INFO: Creating cluster identity secret
INFO: Cluster name is capz-e2e-sudgpo-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-mjvjx, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-r7wxh, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-sudgpo-public-custom-vnet-control-plane-wt774, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-h9mqt, container calico-node
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-sudgpo-public-custom-vnet-control-plane-wt774, container etcd
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-sudgpo-public-custom-vnet-control-plane-wt774, container kube-controller-manager
STEP: Got error while iterating over activity logs for resource group capz-e2e-sudgpo-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001119173s
STEP: Dumping all the Cluster API resources in the "capz-e2e-sudgpo" namespace
STEP: Deleting all clusters in the capz-e2e-sudgpo namespace
STEP: Deleting cluster capz-e2e-sudgpo-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-sudgpo/capz-e2e-sudgpo-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-sudgpo-public-custom-vnet to be deleted
W1130 19:34:27.966606   24454 reflector.go:441] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I1130 19:34:59.193653   24454 trace.go:205] Trace[436427369]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (30-Nov-2021 19:34:29.192) (total time: 30001ms):
Trace[436427369]: [30.001175289s] [30.001175289s] END
E1130 19:34:59.193724   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp 52.155.228.42:6443: i/o timeout
I1130 19:35:31.873603   24454 trace.go:205] Trace[293269439]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (30-Nov-2021 19:35:01.872) (total time: 30001ms):
Trace[293269439]: [30.001298781s] [30.001298781s] END
E1130 19:35:31.873675   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp 52.155.228.42:6443: i/o timeout
I1130 19:36:06.326569   24454 trace.go:205] Trace[1789852825]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (30-Nov-2021 19:35:36.325) (total time: 30001ms):
Trace[1789852825]: [30.001531871s] [30.001531871s] END
E1130 19:36:06.326619   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp 52.155.228.42:6443: i/o timeout
I1130 19:36:48.344414   24454 trace.go:205] Trace[1432548973]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (30-Nov-2021 19:36:18.343) (total time: 30000ms):
Trace[1432548973]: [30.00066551s] [30.00066551s] END
E1130 19:36:48.344459   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp 52.155.228.42:6443: i/o timeout
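These reflector errors are expected fallout from the deletion above: the event watcher created for the "capz-e2e-sudgpo" namespace keeps retrying its list/watch against the cluster endpoint, hitting i/o timeouts while the endpoint is unreachable and then "no such host" once the public IP and its DNS record are gone. They recur through the rest of this log and do not themselves indicate a test failure.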
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-sudgpo
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Nov 30 19:37:00.827: INFO: deleting an existing virtual network "custom-vnet"
E1130 19:37:10.429899   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 30 19:37:11.827: INFO: deleting an existing route table "node-routetable"
Nov 30 19:37:22.695: INFO: deleting an existing network security group "node-nsg"
Nov 30 19:37:33.536: INFO: deleting an existing network security group "control-plane-nsg"
Nov 30 19:37:44.112: INFO: verifying the existing resource group "capz-e2e-sudgpo-public-custom-vnet" is empty
Nov 30 19:37:44.646: INFO: deleting the existing resource group "capz-e2e-sudgpo-public-custom-vnet"
E1130 19:38:00.921285   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 19:38:39.147178   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1130 19:39:29.894655   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 51m37s on Ginkgo node 1 of 3


• [SLOW TEST:3097.099 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a 1 control plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:455

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Tue, 30 Nov 2021 19:35:17 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-45hzhl" for hosting the cluster
Nov 30 19:35:17.226: INFO: starting to create namespace for hosting the "capz-e2e-45hzhl" test spec
2021/11/30 19:35:17 failed trying to get namespace (capz-e2e-45hzhl):namespaces "capz-e2e-45hzhl" not found
INFO: Creating namespace capz-e2e-45hzhl
INFO: Creating event watcher for namespace "capz-e2e-45hzhl"
Nov 30 19:35:17.266: INFO: Creating cluster identity secret
INFO: Cluster name is capz-e2e-45hzhl-oot
INFO: Creating the workload cluster with name "capz-e2e-45hzhl-oot" using the "external-cloud-provider" template (Kubernetes v1.22.4, 1 control-plane machine, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 98 lines ...
STEP: Fetching activity logs took 586.783082ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-45hzhl" namespace
STEP: Deleting all clusters in the capz-e2e-45hzhl namespace
STEP: Deleting cluster capz-e2e-45hzhl-oot
INFO: Waiting for the Cluster capz-e2e-45hzhl/capz-e2e-45hzhl-oot to be deleted
STEP: Waiting for cluster capz-e2e-45hzhl-oot to be deleted
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-cv8n6, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-6fssl, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-jmgj5, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-45hzhl
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 16m19s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:490

INFO: "with a single control plane node and 1 node" started at Tue, 30 Nov 2021 19:39:48 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-5xyczu" for hosting the cluster
Nov 30 19:39:48.101: INFO: starting to create namespace for hosting the "capz-e2e-5xyczu" test spec
2021/11/30 19:39:48 failed trying to get namespace (capz-e2e-5xyczu):namespaces "capz-e2e-5xyczu" not found
INFO: Creating namespace capz-e2e-5xyczu
INFO: Creating event watcher for namespace "capz-e2e-5xyczu"
Nov 30 19:39:48.145: INFO: Creating cluster identity secret
INFO: Cluster name is capz-e2e-5xyczu-aks
INFO: Creating the workload cluster with name "capz-e2e-5xyczu-aks" using the "aks-multi-tenancy" template (Kubernetes v1.19.13, 1 control-plane machine, 1 worker machine)
INFO: Getting the cluster template yaml
... skipping 7 lines ...
machinepool.cluster.x-k8s.io/agentpool1 created
azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1130 19:40:20.859514   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 19:41:09.653811   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 19:41:42.458017   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 19:42:39.491296   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 19:43:20.367153   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 19:43:58.615230   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
Nov 30 19:44:29.379: INFO: Waiting for the first control plane machine managed by capz-e2e-5xyczu/capz-e2e-5xyczu-aks to be provisioned
STEP: Waiting for at least one control plane node to exist
INFO: Waiting for control plane to be ready
Nov 30 19:44:39.417: INFO: Waiting for the first control plane machine managed by capz-e2e-5xyczu/capz-e2e-5xyczu-aks to be provisioned
STEP: Waiting for all control plane nodes to exist
... skipping 5 lines ...
Nov 30 19:44:46.165: INFO: want 2 instances, found 2 ready and 2 available. generation: 1, observedGeneration: 1
Nov 30 19:44:46.271: INFO: mapping nsenter pods to hostnames for host-by-host execution
Nov 30 19:44:46.271: INFO: found host aks-agentpool0-22854224-vmss000000 with pod nsenter-254dj
Nov 30 19:44:46.271: INFO: found host aks-agentpool1-22854224-vmss000000 with pod nsenter-cdswl
STEP: checking that time synchronization is healthy on aks-agentpool1-22854224-vmss000000
STEP: checking that time synchronization is healthy on aks-agentpool1-22854224-vmss000000
E1130 19:44:47.995216   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: time sync OK for host aks-agentpool1-22854224-vmss000000
STEP: time sync OK for host aks-agentpool1-22854224-vmss000000
STEP: time sync OK for host aks-agentpool1-22854224-vmss000000
STEP: time sync OK for host aks-agentpool1-22854224-vmss000000
STEP: Dumping logs from the "capz-e2e-5xyczu-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-5xyczu/capz-e2e-5xyczu-aks logs
Nov 30 19:44:48.455: INFO: Collecting logs for node aks-agentpool1-22854224-vmss000000 in cluster capz-e2e-5xyczu-aks in namespace capz-e2e-5xyczu

E1130 19:45:22.119592   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 19:45:59.662597   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 19:46:48.413246   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 30 19:46:58.340: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool0, cluster capz-e2e-5xyczu/capz-e2e-5xyczu-aks: [dialing public load balancer at capz-e2e-5xyczu-aks-8fe28f09.hcp.northeurope.azmk8s.io: dial tcp 20.105.123.74:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
Nov 30 19:46:59.156: INFO: Collecting logs for node aks-agentpool1-22854224-vmss000000 in cluster capz-e2e-5xyczu-aks in namespace capz-e2e-5xyczu

E1130 19:47:45.767667   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 19:48:34.884924   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 30 19:49:09.416: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool1, cluster capz-e2e-5xyczu/capz-e2e-5xyczu-aks: [dialing public load balancer at capz-e2e-5xyczu-aks-8fe28f09.hcp.northeurope.azmk8s.io: dial tcp 20.105.123.74:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
STEP: Dumping workload cluster capz-e2e-5xyczu/capz-e2e-5xyczu-aks kube-system pod logs
STEP: Fetching kube-system pod logs took 1.099643233s
STEP: Dumping workload cluster capz-e2e-5xyczu/capz-e2e-5xyczu-aks Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-kr2jr, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-9hnh6, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-jtbbc, container kube-proxy
... skipping 8 lines ...
STEP: Fetching activity logs took 485.664033ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-5xyczu" namespace
STEP: Deleting all clusters in the capz-e2e-5xyczu namespace
STEP: Deleting cluster capz-e2e-5xyczu-aks
INFO: Waiting for the Cluster capz-e2e-5xyczu/capz-e2e-5xyczu-aks to be deleted
STEP: Waiting for cluster capz-e2e-5xyczu-aks to be deleted
E1130 19:49:29.387193   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 19:50:19.640754   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 19:51:06.873946   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 19:51:42.269075   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 19:52:41.787199   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 19:53:19.080753   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-5xyczu
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1130 19:54:00.911351   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "with a single control plane node and 1 node" ran for 14m56s on Ginkgo node 1 of 3


• [SLOW TEST:896.141 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:413

INFO: "with a single control plane node and 1 node" started at Tue, 30 Nov 2021 19:33:19 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-31n8we" for hosting the cluster
Nov 30 19:33:19.865: INFO: starting to create namespace for hosting the "capz-e2e-31n8we" test spec
2021/11/30 19:33:19 failed trying to get namespace (capz-e2e-31n8we):namespaces "capz-e2e-31n8we" not found
INFO: Creating namespace capz-e2e-31n8we
INFO: Creating event watcher for namespace "capz-e2e-31n8we"
Nov 30 19:33:19.898: INFO: Creating cluster identity secret
INFO: Cluster name is capz-e2e-31n8we-gpu
INFO: Creating the workload cluster with name "capz-e2e-31n8we-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.4, 1 control-plane machine, 1 worker machine)
INFO: Getting the cluster template yaml
... skipping 53 lines ...
STEP: Fetching activity logs took 1.054209783s
STEP: Dumping all the Cluster API resources in the "capz-e2e-31n8we" namespace
STEP: Deleting all clusters in the capz-e2e-31n8we namespace
STEP: Deleting cluster capz-e2e-31n8we-gpu
INFO: Waiting for the Cluster capz-e2e-31n8we/capz-e2e-31n8we-gpu to be deleted
STEP: Waiting for cluster capz-e2e-31n8we-gpu to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-fkzvn, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-cl5rj, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-31n8we
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and 1 node" ran for 35m29s on Ginkgo node 3 of 3

... skipping 57 lines ...
  with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:579

INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Tue, 30 Nov 2021 19:54:44 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-xdc5ij" for hosting the cluster
Nov 30 19:54:44.245: INFO: starting to create namespace for hosting the "capz-e2e-xdc5ij" test spec
2021/11/30 19:54:44 failed trying to get namespace (capz-e2e-xdc5ij):namespaces "capz-e2e-xdc5ij" not found
INFO: Creating namespace capz-e2e-xdc5ij
INFO: Creating event watcher for namespace "capz-e2e-xdc5ij"
Nov 30 19:54:44.284: INFO: Creating cluster identity secret
INFO: Cluster name is capz-e2e-xdc5ij-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-xdc5ij-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.4, 1 control-plane machine, 1 worker machine)
INFO: Getting the cluster template yaml
... skipping 12 lines ...
kubeadmconfig.bootstrap.cluster.x-k8s.io/capz-e2e-xdc5ij-win-vmss-mp-win created
clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-xdc5ij-win-vmss-flannel created
configmap/cni-capz-e2e-xdc5ij-win-vmss-flannel created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1130 19:54:49.646158   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 19:55:31.208608   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by capz-e2e-xdc5ij/capz-e2e-xdc5ij-win-vmss-control-plane to be provisioned
STEP: Waiting for one control plane node to exist
E1130 19:56:17.436417   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 19:56:59.269927   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 19:57:32.817483   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane capz-e2e-xdc5ij/capz-e2e-xdc5ij-win-vmss-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
E1130 19:58:13.451915   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for the machine deployments to be provisioned
INFO: Waiting for the machine pools to be provisioned
STEP: Waiting for the machine pool workload nodes to exist
E1130 19:59:12.303844   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Waiting for the machine pool workload nodes to exist
E1130 19:59:59.632298   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
[the same reflector error repeated at 20:00:30, 20:01:21, 20:02:14, and 20:02:45]
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/webru9l7o to be available
Nov 30 20:03:05.809: INFO: starting to wait for deployment to become available
E1130 20:03:19.405370   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 30 20:03:26.135: INFO: Deployment default/webru9l7o is now available, took 20.326205508s
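The 20s wait above is a poll-until-available loop over the Deployment's status. A hedged sketch of that check (the 5s interval, 3m timeout, and direct clientcmd setup are assumptions; only the namespace and name come from the log):

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	start := time.Now()
	// Poll until every desired replica of the deployment reports available.
	err = wait.PollImmediate(5*time.Second, 3*time.Minute, func() (bool, error) {
		d, err := cs.AppsV1().Deployments("default").Get(ctx, "webru9l7o", metav1.GetOptions{})
		if err != nil {
			return false, nil // tolerate transient API errors and keep polling
		}
		return d.Spec.Replicas != nil && d.Status.AvailableReplicas == *d.Spec.Replicas, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("deployment available, took %s\n", time.Since(start))
}
```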
STEP: creating an internal Load Balancer service
Nov 30 20:03:26.135: INFO: starting to create an internal Load Balancer service
STEP: waiting for service default/webru9l7o-ilb to be available
Nov 30 20:03:26.255: INFO: waiting for service default/webru9l7o-ilb to be available
E1130 20:04:03.682797   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 30 20:04:16.872: INFO: service default/webru9l7o-ilb is available, took 50.617163412s
STEP: connecting to the internal LB service from a curl pod
Nov 30 20:04:16.974: INFO: starting to create a curl to ilb job
STEP: waiting for job default/curl-to-ilb-jobw24eh to be complete
Nov 30 20:04:17.089: INFO: waiting for job default/curl-to-ilb-jobw24eh to be complete
Nov 30 20:04:27.293: INFO: job default/curl-to-ilb-jobw24eh is complete, took 10.20384096s
STEP: deleting the ilb test resources
Nov 30 20:04:27.293: INFO: deleting the ilb service: webru9l7o-ilb
Nov 30 20:04:27.416: INFO: deleting the ilb job: curl-to-ilb-jobw24eh
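The curl-to-ilb job pattern verifies the internal LB from inside the cluster: a short-lived Job runs curl against the service, and Job completion stands in for "the ILB works". An illustrative client-go construction of such a Job (image, command, and the in-cluster URL are assumptions; the log shows only the job name and its ~10s completion):

```go
package main

import (
	"context"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "curl-to-ilb-job", Namespace: "default"},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:  "curl",
						Image: "curlimages/curl", // illustrative image choice
						// Hit the ILB service through cluster DNS; curl -f makes a
						// non-2xx response fail the container, so the Job only
						// completes once the service answers.
						Command: []string{"curl", "-sf", "http://webru9l7o-ilb.default.svc.cluster.local"},
					}},
				},
			},
		},
	}
	if _, err := cs.BatchV1().Jobs("default").Create(context.Background(), job, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// The suite then polls the Job until .status.succeeded > 0, which is the
	// "waiting for job default/curl-to-ilb-job... to be complete" step above.
}
```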
STEP: creating an external Load Balancer service
Nov 30 20:04:27.519: INFO: starting to create an external Load Balancer service
STEP: waiting for service default/webru9l7o-elb to be available
Nov 30 20:04:27.632: INFO: waiting for service default/webru9l7o-elb to be available
E1130 20:05:00.047306   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 30 20:05:48.559: INFO: service default/webru9l7o-elb is available, took 1m20.926554449s
STEP: connecting to the external LB service from a curl pod
Nov 30 20:05:48.661: INFO: starting to create curl-to-elb job
STEP: waiting for job default/curl-to-elb-jobxdcajdlye58 to be complete
Nov 30 20:05:48.767: INFO: waiting for job default/curl-to-elb-jobxdcajdlye58 to be complete
E1130 20:05:55.755570   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 30 20:05:58.972: INFO: job default/curl-to-elb-jobxdcajdlye58 is complete, took 10.205489712s
STEP: connecting directly to the external LB service
Nov 30 20:05:58.972: INFO: starting attempts to connect directly to the external LB service
2021/11/30 20:05:58 [DEBUG] GET http://20.93.96.20
Nov 30 20:05:59.176: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 30 20:05:59.176: INFO: starting to delete external LB service webru9l7o-elb
Nov 30 20:05:59.300: INFO: starting to delete deployment webru9l7o
Nov 30 20:05:59.403: INFO: starting to delete job curl-to-elb-jobxdcajdlye58
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web-windowsgflomj to be available
Nov 30 20:05:59.749: INFO: starting to wait for deployment to become available
E1130 20:06:53.208085   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 30 20:07:10.629: INFO: Deployment default/web-windowsgflomj is now available, took 1m10.879840385s
STEP: creating an internal Load Balancer service
Nov 30 20:07:10.629: INFO: starting to create an internal Load Balancer service
STEP: waiting for service default/web-windowsgflomj-ilb to be available
Nov 30 20:07:10.748: INFO: waiting for service default/web-windowsgflomj-ilb to be available
E1130 20:07:33.708141   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 30 20:08:01.365: INFO: service default/web-windowsgflomj-ilb is available, took 50.617343832s
STEP: connecting to the internal LB service from a curl pod
Nov 30 20:08:01.467: INFO: starting to create a curl to ilb job
STEP: waiting for job default/curl-to-ilb-jobich6j to be complete
Nov 30 20:08:01.572: INFO: waiting for job default/curl-to-ilb-jobich6j to be complete
Nov 30 20:08:11.776: INFO: job default/curl-to-ilb-jobich6j is complete, took 10.20440228s
STEP: deleting the ilb test resources
Nov 30 20:08:11.812: INFO: deleting the ilb service: web-windowsgflomj-ilb
Nov 30 20:08:11.932: INFO: deleting the ilb job: curl-to-ilb-jobich6j
STEP: creating an external Load Balancer service
Nov 30 20:08:12.035: INFO: starting to create an external Load Balancer service
STEP: waiting for service default/web-windowsgflomj-elb to be available
Nov 30 20:08:12.151: INFO: waiting for service default/web-windowsgflomj-elb to be available
E1130 20:08:22.976496   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 20:09:03.400475   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 30 20:09:33.075: INFO: service default/web-windowsgflomj-elb is available, took 1m20.924107421s
STEP: connecting to the external LB service from a curl pod
Nov 30 20:09:33.177: INFO: starting to create curl-to-elb job
STEP: waiting for job default/curl-to-elb-job20m3n7y641o to be complete
Nov 30 20:09:33.286: INFO: waiting for job default/curl-to-elb-job20m3n7y641o to be complete
E1130 20:09:35.100029   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 30 20:09:43.490: INFO: job default/curl-to-elb-job20m3n7y641o is complete, took 10.204577006s
STEP: connecting directly to the external LB service
Nov 30 20:09:43.490: INFO: starting attempts to connect directly to the external LB service
2021/11/30 20:09:43 [DEBUG] GET http://20.93.99.87
E1130 20:10:05.331005   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
2021/11/30 20:10:13 [ERR] GET http://20.93.99.87 request failed: Get "http://20.93.99.87": dial tcp 20.93.99.87:80: i/o timeout
2021/11/30 20:10:13 [DEBUG] GET http://20.93.99.87: retrying in 1s (4 left)
Nov 30 20:10:21.941: INFO: successfully connected to the external LB service
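The "[DEBUG] GET" / "[ERR] GET ... request failed" lines and the "retrying in 1s (4 left)" countdown match the default logger of hashicorp/go-retryablehttp, which rides out transient failures such as the i/o timeout above while the external LB finishes programming its public IP. A sketch of that client shape (assuming go-retryablehttp; the suite's exact configuration is not visible in the log):

```go
package main

import (
	"fmt"
	"time"

	retryablehttp "github.com/hashicorp/go-retryablehttp"
)

func main() {
	client := retryablehttp.NewClient()
	client.RetryMax = 4                   // matches the "(4 left)" countdown above
	client.RetryWaitMin = 1 * time.Second // first backoff: "retrying in 1s"

	// Each attempt logs "[DEBUG] GET <url>"; a failed attempt adds
	// "[ERR] GET <url> request failed: ..." and schedules a retry.
	resp, err := client.Get("http://20.93.99.87") // the ELB IP from the log
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```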
STEP: deleting the test resources
Nov 30 20:10:21.941: INFO: starting to delete external LB service web-windowsgflomj-elb
Nov 30 20:10:22.069: INFO: starting to delete deployment web-windowsgflomj
Nov 30 20:10:22.172: INFO: starting to delete job curl-to-elb-job20m3n7y641o
... skipping 6 lines ...
Nov 30 20:10:34.913: INFO: INFO: Collecting logs for node capz-e2e-xdc5ij-win-vmss-mp-0000000 in cluster capz-e2e-xdc5ij-win-vmss in namespace capz-e2e-xdc5ij

Nov 30 20:10:47.190: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-e2e-xdc5ij-win-vmss-mp-0

Nov 30 20:10:47.697: INFO: INFO: Collecting logs for node win-p-win000000 in cluster capz-e2e-xdc5ij-win-vmss in namespace capz-e2e-xdc5ij

E1130 20:10:51.311216   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 30 20:11:21.150: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set win-p-win

STEP: Dumping workload cluster capz-e2e-xdc5ij/capz-e2e-xdc5ij-win-vmss kube-system pod logs
STEP: Fetching kube-system pod logs took 724.470207ms
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-k8kqb, container coredns
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-xdc5ij-win-vmss-control-plane-6wbcr, container kube-scheduler
... skipping 5 lines ...
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-xdc5ij-win-vmss-control-plane-6wbcr, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-vdzpp, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-xdc5ij-win-vmss-control-plane-6wbcr, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-j29ls, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-cqll6, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-h8lgw, container kube-flannel
E1130 20:11:46.043438   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Got error while iterating over activity logs for resource group capz-e2e-xdc5ij-win-vmss: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000452798s
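The pairing of "Fetching activity logs took 30.000452798s" with "context deadline exceeded" suggests activity-log collection runs under a 30-second context.WithTimeout: once the deadline fires, the next insights page request fails (surfaced here as a StatusCode=500 wrapping the context error) and iteration stops. A self-contained sketch of that bounded-pagination shape (fetchPage is a hypothetical stand-in for the Azure SDK call):

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// fetchPage is a hypothetical stand-in for one paginated request of the
// Azure insights ActivityLogsClient; only the context handling matters here.
func fetchPage(ctx context.Context) error {
	select {
	case <-time.After(200 * time.Millisecond): // simulated request latency
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	// Give activity-log collection a fixed 30s budget, mirroring the
	// "Fetching activity logs took 30.000452798s" line above.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	for {
		if err := fetchPage(ctx); err != nil {
			if errors.Is(err, context.DeadlineExceeded) {
				// The SDK surfaces this as "Failure sending next results
				// request: ... Original Error: context deadline exceeded".
				fmt.Println("stopped iterating activity logs:", err)
				return
			}
			panic(err)
		}
	}
}
```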
STEP: Dumping all the Cluster API resources in the "capz-e2e-xdc5ij" namespace
STEP: Deleting all clusters in the capz-e2e-xdc5ij namespace
STEP: Deleting cluster capz-e2e-xdc5ij-win-vmss
INFO: Waiting for the Cluster capz-e2e-xdc5ij/capz-e2e-xdc5ij-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-xdc5ij-win-vmss to be deleted
E1130 20:12:26.564505   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vdzpp, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-v2wgv, container kube-flannel: http2: client connection lost
E1130 20:13:09.141803   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
[the same reflector error repeated at 20:14:06, 20:15:05, 20:15:50, 20:16:31, 20:17:28, 20:18:00, 20:18:56, and 20:19:54]
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-xdc5ij
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1130 20:20:27.328165   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
[the same reflector error repeated at 20:21:00, 20:21:42, 20:22:22, and 20:22:56]
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 29m5s on Ginkgo node 1 of 3


• [SLOW TEST:1745.093 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:532

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Tue, 30 Nov 2021 19:51:36 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-c5wwtz" for hosting the cluster
Nov 30 19:51:36.620: INFO: starting to create namespace for hosting the "capz-e2e-c5wwtz" test spec
2021/11/30 19:51:36 failed trying to get namespace (capz-e2e-c5wwtz): namespaces "capz-e2e-c5wwtz" not found
INFO: Creating namespace capz-e2e-c5wwtz
INFO: Creating event watcher for namespace "capz-e2e-c5wwtz"
Nov 30 19:51:36.657: INFO: Creating cluster identity secret (cluster-identity-secret)
INFO: Cluster name is capz-e2e-c5wwtz-win-ha
INFO: Creating the workload cluster with name "capz-e2e-c5wwtz-win-ha" using the "windows" template (Kubernetes v1.22.4, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 151 lines ...
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-c5wwtz-win-ha-control-plane-r66bb, container etcd
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-c5wwtz-win-ha-control-plane-zd4rp, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-windows-amd64-x45hg, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-c5wwtz-win-ha-control-plane-qswd4, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-ltb8b, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-rrjtl, container kube-flannel
STEP: Got error while iterating over activity logs for resource group capz-e2e-c5wwtz-win-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000443836s
STEP: Dumping all the Cluster API resources in the "capz-e2e-c5wwtz" namespace
STEP: Deleting all clusters in the capz-e2e-c5wwtz namespace
STEP: Deleting cluster capz-e2e-c5wwtz-win-ha
INFO: Waiting for the Cluster capz-e2e-c5wwtz/capz-e2e-c5wwtz-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-c5wwtz-win-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-t8gfk, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-x45hg, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-qth25, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-c5wwtz-win-ha-control-plane-r66bb, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-lh4b6, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-c5wwtz-win-ha-control-plane-r66bb, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-c5wwtz-win-ha-control-plane-r66bb, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-c5wwtz-win-ha-control-plane-r66bb, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-kf5kw, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-2bq6c, container kube-flannel: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-c5wwtz
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 32m21s on Ginkgo node 2 of 3

... skipping 3 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating a Windows Enabled cluster with dockershim
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:530
    With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:532
------------------------------
E1130 20:23:51.525797   24454 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-sudgpo/events?resourceVersion=11160": dial tcp: lookup capz-e2e-sudgpo-public-custom-vnet-46d3230b.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Workload cluster creation Creating a GPU-enabled cluster [It] with a single control plane node and 1 node 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:76
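The failure signature ("Timed out after 1200.000s. Expected <bool>: false to be true" at azure_gpu.go:76) is the shape Gomega prints when an Eventually(...).Should(BeTrue()) never turns true within its 20-minute window, presumably while polling a GPU workload on the new cluster. A minimal sketch of that assertion shape (gpuJobSucceeded is a hypothetical condition, not the suite's code):

```go
package e2e_test

import (
	"testing"
	"time"

	. "github.com/onsi/gomega"
)

// gpuJobSucceeded is a hypothetical condition; the real spec polls a
// GPU-consuming workload on the newly created cluster (azure_gpu.go:76).
func gpuJobSucceeded() bool { return false }

func TestGPUAssertionShape(t *testing.T) {
	g := NewWithT(t)
	// Polls every 10s for 20 minutes; if the condition never turns true,
	// Gomega fails with "Timed out after 1200.000s. Expected <bool>: false
	// to be true", which is the exact signature in the summary above.
	g.Eventually(gpuJobSucceeded, 20*time.Minute, 10*time.Second).Should(BeTrue())
}
```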

Ran 9 of 24 Specs in 5971.397 seconds
FAIL! -- 8 Passed | 1 Failed | 0 Pending | 15 Skipped


Ginkgo ran 1 suite in 1h40m50.94891993s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc
make[1]: *** [Makefile:176: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:184: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...