Result: FAILURE
Tests: 1 failed / 8 succeeded
Started: 2021-11-21 18:36
Elapsed: 1h46m
Revision: main

Test Failures


capz-e2e Workload cluster creation Creating a GPU-enabled cluster with a single control plane node and 1 node (34m18s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\sGPU\-enabled\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:413
Timed out after 1200.001s.
Expected
    <bool>: false
to be true
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:76
Full stdout/stderr in junit.e2e_suite.1.xml



8 Passed Tests

15 Skipped Tests

Error lines from build-log.txt

... skipping 438 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:288

INFO: "With ipv6 worker node" started at Sun, 21 Nov 2021 18:45:39 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-f61un7" for hosting the cluster
Nov 21 18:45:39.770: INFO: starting to create namespace for hosting the "capz-e2e-f61un7" test spec
2021/11/21 18:45:39 failed trying to get namespace (capz-e2e-f61un7):namespaces "capz-e2e-f61un7" not found
INFO: Creating namespace capz-e2e-f61un7
INFO: Creating event watcher for namespace "capz-e2e-f61un7"
Nov 21 18:45:39.809: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-f61un7-ipv6
INFO: Creating the workload cluster with name "capz-e2e-f61un7-ipv6" using the "ipv6" template (Kubernetes v1.22.4, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
STEP: Fetching activity logs took 552.6773ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-f61un7" namespace
STEP: Deleting all clusters in the capz-e2e-f61un7 namespace
STEP: Deleting cluster capz-e2e-f61un7-ipv6
INFO: Waiting for the Cluster capz-e2e-f61un7/capz-e2e-f61un7-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-f61un7-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-f61un7-ipv6-control-plane-vpjsr, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-wcqw4, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-f61un7-ipv6-control-plane-vs77h, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-f61un7-ipv6-control-plane-vs77h, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-wn5xv, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-97fmx, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-lmm6n, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-hl494, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-f61un7-ipv6-control-plane-zlpnm, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-f61un7-ipv6-control-plane-vs77h, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vdcg4, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-5qkjq, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-fq6nz, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-f61un7-ipv6-control-plane-zlpnm, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-f61un7-ipv6-control-plane-vpjsr, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-f61un7-ipv6-control-plane-zlpnm, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-9phqw, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-f61un7-ipv6-control-plane-vpjsr, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-f61un7-ipv6-control-plane-vpjsr, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-fblrf, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-f61un7-ipv6-control-plane-vs77h, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-f61un7-ipv6-control-plane-zlpnm, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-jhv98, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-f61un7
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 17m34s on Ginkgo node 1 of 3

... skipping 10 lines ...
  with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:334

INFO: "with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes" started at Sun, 21 Nov 2021 19:03:13 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-c73hbd" for hosting the cluster
Nov 21 19:03:13.332: INFO: starting to create namespace for hosting the "capz-e2e-c73hbd" test spec
2021/11/21 19:03:13 failed trying to get namespace (capz-e2e-c73hbd):namespaces "capz-e2e-c73hbd" not found
INFO: Creating namespace capz-e2e-c73hbd
INFO: Creating event watcher for namespace "capz-e2e-c73hbd"
Nov 21 19:03:13.371: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-c73hbd-vmss
INFO: Creating the workload cluster with name "capz-e2e-c73hbd-vmss" using the "machine-pool" template (Kubernetes v1.22.4, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 62 lines ...
STEP: waiting for job default/curl-to-elb-jobe3wtin8n7ej to be complete
Nov 21 19:13:45.794: INFO: waiting for job default/curl-to-elb-jobe3wtin8n7ej to be complete
Nov 21 19:13:56.007: INFO: job default/curl-to-elb-jobe3wtin8n7ej is complete, took 10.212465141s
STEP: connecting directly to the external LB service
Nov 21 19:13:56.007: INFO: starting attempts to connect directly to the external LB service
2021/11/21 19:13:56 [DEBUG] GET http://20.93.14.142
2021/11/21 19:14:26 [ERR] GET http://20.93.14.142 request failed: Get "http://20.93.14.142": dial tcp 20.93.14.142:80: i/o timeout
2021/11/21 19:14:26 [DEBUG] GET http://20.93.14.142: retrying in 1s (4 left)
Nov 21 19:14:42.595: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 21 19:14:42.595: INFO: starting to delete external LB service web5t72qv-elb
Nov 21 19:14:42.741: INFO: starting to delete deployment web5t72qv
Nov 21 19:14:42.848: INFO: starting to delete job curl-to-elb-jobe3wtin8n7ej
... skipping 25 lines ...
STEP: waiting for job default/curl-to-elb-jobvg0vyxywycg to be complete
Nov 21 19:18:16.881: INFO: waiting for job default/curl-to-elb-jobvg0vyxywycg to be complete
Nov 21 19:18:27.094: INFO: job default/curl-to-elb-jobvg0vyxywycg is complete, took 10.212577718s
STEP: connecting directly to the external LB service
Nov 21 19:18:27.094: INFO: starting attempts to connect directly to the external LB service
2021/11/21 19:18:27 [DEBUG] GET http://20.93.14.142
2021/11/21 19:18:57 [ERR] GET http://20.93.14.142 request failed: Get "http://20.93.14.142": dial tcp 20.93.14.142:80: i/o timeout
2021/11/21 19:18:57 [DEBUG] GET http://20.93.14.142: retrying in 1s (4 left)
Nov 21 19:19:05.508: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 21 19:19:05.508: INFO: starting to delete external LB service web-windowsajvpzt-elb
Nov 21 19:19:05.642: INFO: starting to delete deployment web-windowsajvpzt
Nov 21 19:19:05.752: INFO: starting to delete job curl-to-elb-jobvg0vyxywycg
... skipping 33 lines ...
Nov 21 19:22:56.831: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-e2e-c73hbd-vmss-mp-0

Nov 21 19:22:57.301: INFO: INFO: Collecting logs for node win-p-win000001 in cluster capz-e2e-c73hbd-vmss in namespace capz-e2e-c73hbd

Nov 21 19:23:14.994: INFO: INFO: Collecting boot logs for VMSS instance 1 of scale set capz-e2e-c73hbd-vmss-mp-0

Failed to get logs for machine pool capz-e2e-c73hbd-vmss-mp-0, cluster capz-e2e-c73hbd/capz-e2e-c73hbd-vmss: [[running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1], [running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1]]
Nov 21 19:23:15.424: INFO: INFO: Collecting logs for node win-p-win000000 in cluster capz-e2e-c73hbd-vmss in namespace capz-e2e-c73hbd

Nov 21 19:23:50.674: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set win-p-win

Nov 21 19:23:51.084: INFO: INFO: Collecting logs for node win-p-win000001 in cluster capz-e2e-c73hbd-vmss in namespace capz-e2e-c73hbd

Nov 21 19:24:24.963: INFO: INFO: Collecting boot logs for VMSS instance 1 of scale set win-p-win

Failed to get logs for machine pool capz-e2e-c73hbd-vmss-mp-win, cluster capz-e2e-c73hbd/capz-e2e-c73hbd-vmss: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster capz-e2e-c73hbd/capz-e2e-c73hbd-vmss kube-system pod logs
STEP: Fetching kube-system pod logs took 1.035410393s
STEP: Dumping workload cluster capz-e2e-c73hbd/capz-e2e-c73hbd-vmss Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-m8szt, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-windows-mbtxn, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-node-mpkmp, container calico-node
... skipping 10 lines ...
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-c73hbd-vmss-control-plane-8kbp8, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-rkwvz, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-c73hbd-vmss-control-plane-8kbp8, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-b6msv, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-cx8fr, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-pswqr, container calico-node
STEP: Got error while iterating over activity logs for resource group capz-e2e-c73hbd-vmss: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001192387s
STEP: Dumping all the Cluster API resources in the "capz-e2e-c73hbd" namespace
STEP: Deleting all clusters in the capz-e2e-c73hbd namespace
STEP: Deleting cluster capz-e2e-c73hbd-vmss
INFO: Waiting for the Cluster capz-e2e-c73hbd/capz-e2e-c73hbd-vmss to be deleted
STEP: Waiting for cluster capz-e2e-c73hbd-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-bjtdx, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-c73hbd-vmss-control-plane-8kbp8, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-9rxzs, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-mbtxn, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-tfskx, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-m8szt, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-mpkmp, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-wf4g2, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-c73hbd-vmss-control-plane-8kbp8, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-tfskx, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-mbtxn, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-rkwvz, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-c73hbd-vmss-control-plane-8kbp8, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-8d2ng, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-b6msv, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-c73hbd-vmss-control-plane-8kbp8, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-cx8fr, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-657rm, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-pswqr, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-c73hbd
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes" ran for 28m56s on Ginkgo node 1 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:205

INFO: "With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes" started at Sun, 21 Nov 2021 18:45:37 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-1rp1vf" for hosting the cluster
Nov 21 18:45:37.948: INFO: starting to create namespace for hosting the "capz-e2e-1rp1vf" test spec
2021/11/21 18:45:37 failed trying to get namespace (capz-e2e-1rp1vf):namespaces "capz-e2e-1rp1vf" not found
INFO: Creating namespace capz-e2e-1rp1vf
INFO: Creating event watcher for namespace "capz-e2e-1rp1vf"
Nov 21 18:45:37.996: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-1rp1vf-ha
INFO: Creating the workload cluster with name "capz-e2e-1rp1vf-ha" using the "(default)" template (Kubernetes v1.22.4, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 67 lines ...
STEP: waiting for job default/curl-to-elb-joboav8zhk3vx2 to be complete
Nov 21 18:56:07.659: INFO: waiting for job default/curl-to-elb-joboav8zhk3vx2 to be complete
Nov 21 18:56:17.873: INFO: job default/curl-to-elb-joboav8zhk3vx2 is complete, took 10.213927072s
STEP: connecting directly to the external LB service
Nov 21 18:56:17.873: INFO: starting attempts to connect directly to the external LB service
2021/11/21 18:56:17 [DEBUG] GET http://20.105.41.213
2021/11/21 18:56:47 [ERR] GET http://20.105.41.213 request failed: Get "http://20.105.41.213": dial tcp 20.105.41.213:80: i/o timeout
2021/11/21 18:56:47 [DEBUG] GET http://20.105.41.213: retrying in 1s (4 left)
2021/11/21 18:57:18 [ERR] GET http://20.105.41.213 request failed: Get "http://20.105.41.213": dial tcp 20.105.41.213:80: i/o timeout
2021/11/21 18:57:18 [DEBUG] GET http://20.105.41.213: retrying in 2s (3 left)
Nov 21 18:57:21.084: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 21 18:57:21.084: INFO: starting to delete external LB service webhr9qgs-elb
Nov 21 18:57:21.237: INFO: starting to delete deployment webhr9qgs
Nov 21 18:57:21.349: INFO: starting to delete job curl-to-elb-joboav8zhk3vx2
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Nov 21 18:57:21.503: INFO: starting to create dev deployment namespace
2021/11/21 18:57:21 failed trying to get namespace (development):namespaces "development" not found
2021/11/21 18:57:21 namespace development does not exist, creating...
STEP: Creating production namespace
Nov 21 18:57:21.727: INFO: starting to create prod deployment namespace
2021/11/21 18:57:21 failed trying to get namespace (production):namespaces "production" not found
2021/11/21 18:57:21 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
Nov 21 18:57:21.949: INFO: starting to create frontend-prod deployments
Nov 21 18:57:22.060: INFO: starting to create frontend-dev deployments
Nov 21 18:57:22.170: INFO: starting to create backend deployments
Nov 21 18:57:22.281: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Nov 21 18:57:48.771: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.95.68 port 80: Connection timed out
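The curl timeout above is the expected result: it confirms the `development/backend-deny-ingress` policy took effect. A NetworkPolicy with that behavior would look roughly like this (a hypothetical sketch matching the names in the log; the suite's actual manifest may differ):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-deny-ingress
  namespace: development
spec:
  # Select the app: webapp, role: backend pods named in the log.
  podSelector:
    matchLabels:
      app: webapp
      role: backend
  # Listing Ingress with no ingress rules denies all inbound traffic.
  policyTypes:
    - Ingress
```

Because `policyTypes` includes `Ingress` but no `ingress` rules are given, all inbound connections to the selected pods are dropped, which is why the network-policy pod's curl to 192.168.95.68:80 times out.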

STEP: Cleaning up after ourselves
Nov 21 18:59:59.013: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
STEP: Applying a network policy to deny egress access in development namespace
Nov 21 18:59:59.431: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.95.68 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.95.68 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 21 19:04:20.395: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Nov 21 19:04:20.815: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.201.131 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 21 19:06:33.515: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
Nov 21 19:06:33.897: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.201.130 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.201.131 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 21 19:10:57.708: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Nov 21 19:10:58.092: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.95.68 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 21 19:13:09.542: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
Nov 21 19:13:09.965: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.95.68 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web-windowsy1djaz to be available
Nov 21 19:15:22.017: INFO: starting to wait for deployment to become available
Nov 21 19:16:32.943: INFO: Deployment default/web-windowsy1djaz is now available, took 1m10.926195409s
... skipping 51 lines ...
Nov 21 19:20:29.732: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-1rp1vf-ha-md-0-clr5h

Nov 21 19:20:30.108: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster capz-e2e-1rp1vf-ha in namespace capz-e2e-1rp1vf

Nov 21 19:20:56.543: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-1rp1vf-ha-md-win-htf88

Failed to get logs for machine capz-e2e-1rp1vf-ha-md-win-7468979bd5-677jh, cluster capz-e2e-1rp1vf/capz-e2e-1rp1vf-ha: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov 21 19:20:56.931: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster capz-e2e-1rp1vf-ha in namespace capz-e2e-1rp1vf

Nov 21 19:21:23.813: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-1rp1vf-ha-md-win-kbbjs

Failed to get logs for machine capz-e2e-1rp1vf-ha-md-win-7468979bd5-pb469, cluster capz-e2e-1rp1vf/capz-e2e-1rp1vf-ha: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster capz-e2e-1rp1vf/capz-e2e-1rp1vf-ha kube-system pod logs
STEP: Creating log watcher for controller kube-system/calico-node-7ftgf, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-tkdrc, container calico-node
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-1rp1vf-ha-control-plane-gfdkh, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-node-windows-lqdrf, container calico-node-felix
STEP: Creating log watcher for controller kube-system/kube-proxy-lblqt, container kube-proxy
... skipping 22 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-windows-phlnq, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-node-windows-lqdrf, container calico-node-startup
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-hr6w8, container coredns
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-1rp1vf-ha-control-plane-8qmfp, container kube-apiserver
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-8qnmr, container coredns
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-gkqqz, container calico-kube-controllers
STEP: Got error while iterating over activity logs for resource group capz-e2e-1rp1vf-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000777587s
STEP: Dumping all the Cluster API resources in the "capz-e2e-1rp1vf" namespace
STEP: Deleting all clusters in the capz-e2e-1rp1vf namespace
STEP: Deleting cluster capz-e2e-1rp1vf-ha
INFO: Waiting for the Cluster capz-e2e-1rp1vf/capz-e2e-1rp1vf-ha to be deleted
STEP: Waiting for cluster capz-e2e-1rp1vf-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-lqdrf, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-1rp1vf-ha-control-plane-gfdkh, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-bt4jq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-9qk4q, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-1rp1vf-ha-control-plane-gfdkh, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-phlnq, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-1rp1vf-ha-control-plane-8qmfp, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-tkdrc, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-mgvhk, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-2qvbs, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-phlnq, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-1rp1vf-ha-control-plane-8qmfp, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-1rp1vf-ha-control-plane-8qmfp, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-1rp1vf-ha-control-plane-gfdkh, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-dc4t7, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-vp4s5, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-1rp1vf-ha-control-plane-gfdkh, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-lvc94, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-lsp78, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-wvpp9, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-1rp1vf-ha-control-plane-8qmfp, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-lqdrf, container calico-node-startup: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-1rp1vf
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes" ran for 48m2s on Ginkgo node 3 of 3

... skipping 8 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:144

INFO: "Creates a public management cluster in the same vnet" started at Sun, 21 Nov 2021 18:45:31 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-8576kd" for hosting the cluster
Nov 21 18:45:31.746: INFO: starting to create namespace for hosting the "capz-e2e-8576kd" test spec
2021/11/21 18:45:31 failed trying to get namespace (capz-e2e-8576kd):namespaces "capz-e2e-8576kd" not found
INFO: Creating namespace capz-e2e-8576kd
INFO: Creating event watcher for namespace "capz-e2e-8576kd"
Nov 21 18:45:31.795: INFO: Creating cluster identity secret
INFO: Cluster name is capz-e2e-8576kd-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-8576kd-public-custom-vnet-control-plane-jksrz, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-qjgbg, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-pnh7l, container calico-node
STEP: Dumping workload cluster capz-e2e-8576kd/capz-e2e-8576kd-public-custom-vnet Azure activity log
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-8576kd-public-custom-vnet-control-plane-jksrz, container kube-scheduler
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-4tr8q, container coredns
STEP: Got error while iterating over activity logs for resource group capz-e2e-8576kd-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000658982s
STEP: Dumping all the Cluster API resources in the "capz-e2e-8576kd" namespace
STEP: Deleting all clusters in the capz-e2e-8576kd namespace
STEP: Deleting cluster capz-e2e-8576kd-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-8576kd/capz-e2e-8576kd-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-8576kd-public-custom-vnet to be deleted
W1121 19:30:23.959098   24494 reflector.go:441] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I1121 19:30:55.745255   24494 trace.go:205] Trace[1322501341]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (21-Nov-2021 19:30:25.744) (total time: 30000ms):
Trace[1322501341]: [30.000529495s] [30.000529495s] END
E1121 19:30:55.745317   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp 20.105.41.34:6443: i/o timeout
I1121 19:31:30.765869   24494 trace.go:205] Trace[1466327734]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (21-Nov-2021 19:31:00.765) (total time: 30000ms):
Trace[1466327734]: [30.000725711s] [30.000725711s] END
E1121 19:31:30.765938   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp 20.105.41.34:6443: i/o timeout
I1121 19:32:12.555599   24494 trace.go:205] Trace[43571261]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (21-Nov-2021 19:31:42.554) (total time: 30000ms):
Trace[43571261]: [30.00073808s] [30.00073808s] END
E1121 19:32:12.555655   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp 20.105.41.34:6443: i/o timeout
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-8576kd
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Nov 21 19:33:02.558: INFO: deleting an existing virtual network "custom-vnet"
I1121 19:33:03.315161   24494 trace.go:205] Trace[896814484]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (21-Nov-2021 19:32:33.313) (total time: 30001ms):
Trace[896814484]: [30.001455767s] [30.001455767s] END
E1121 19:33:03.315216   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp 20.105.41.34:6443: i/o timeout
Nov 21 19:33:13.343: INFO: deleting an existing route table "node-routetable"
Nov 21 19:33:24.598: INFO: deleting an existing network security group "node-nsg"
Nov 21 19:33:35.584: INFO: deleting an existing network security group "control-plane-nsg"
E1121 19:33:40.199577   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 21 19:33:46.577: INFO: verifying the existing resource group "capz-e2e-8576kd-public-custom-vnet" is empty
Nov 21 19:33:46.647: INFO: deleting the existing resource group "capz-e2e-8576kd-public-custom-vnet"
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1121 19:34:20.805163   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 49m15s on Ginkgo node 2 of 3


• [SLOW TEST:2954.763 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:490

INFO: "with a single control plane node and 1 node" started at Sun, 21 Nov 2021 19:34:46 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-fn7nrl" for hosting the cluster
Nov 21 19:34:46.511: INFO: starting to create namespace for hosting the "capz-e2e-fn7nrl" test spec
2021/11/21 19:34:46 failed trying to get namespace (capz-e2e-fn7nrl):namespaces "capz-e2e-fn7nrl" not found
INFO: Creating namespace capz-e2e-fn7nrl
INFO: Creating event watcher for namespace "capz-e2e-fn7nrl"
Nov 21 19:34:46.544: INFO: Creating cluster identity secret
INFO: Cluster name is capz-e2e-fn7nrl-aks
INFO: Creating the workload cluster with name "capz-e2e-fn7nrl-aks" using the "aks-multi-tenancy" template (Kubernetes v1.19.13, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 7 lines ...
machinepool.cluster.x-k8s.io/agentpool1 created
azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1121 19:34:57.288990   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 19:35:29.927059   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 19:36:27.761555   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 19:37:22.172350   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 19:38:15.046971   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
Nov 21 19:38:27.926: INFO: Waiting for the first control plane machine managed by capz-e2e-fn7nrl/capz-e2e-fn7nrl-aks to be provisioned
STEP: Waiting for at least one control plane node to exist
INFO: Waiting for control plane to be ready
Nov 21 19:38:37.965: INFO: Waiting for the first control plane machine managed by capz-e2e-fn7nrl/capz-e2e-fn7nrl-aks to be provisioned
STEP: Waiting for all control plane nodes to exist
... skipping 13 lines ...
STEP: time sync OK for host aks-agentpool1-31337603-vmss000000
STEP: Dumping logs from the "capz-e2e-fn7nrl-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-fn7nrl/capz-e2e-fn7nrl-aks logs
Nov 21 19:38:47.164: INFO: Collecting logs for node aks-agentpool1-31337603-vmss000000 in cluster capz-e2e-fn7nrl-aks in namespace capz-e2e-fn7nrl

E1121 19:38:56.860241   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 19:39:30.647676   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 19:40:12.323224   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 21 19:40:56.791: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool0, cluster capz-e2e-fn7nrl/capz-e2e-fn7nrl-aks: [dialing public load balancer at capz-e2e-fn7nrl-aks-8b7ab66f.hcp.northeurope.azmk8s.io: dial tcp 20.67.173.68:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
Nov 21 19:40:57.369: INFO: Collecting logs for node aks-agentpool1-31337603-vmss000000 in cluster capz-e2e-fn7nrl-aks in namespace capz-e2e-fn7nrl

E1121 19:41:04.935307   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 19:41:58.529628   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 19:42:52.724346   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 21 19:43:07.859: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool1, cluster capz-e2e-fn7nrl/capz-e2e-fn7nrl-aks: [dialing public load balancer at capz-e2e-fn7nrl-aks-8b7ab66f.hcp.northeurope.azmk8s.io: dial tcp 20.67.173.68:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
STEP: Dumping workload cluster capz-e2e-fn7nrl/capz-e2e-fn7nrl-aks kube-system pod logs
STEP: Fetching kube-system pod logs took 978.5082ms
STEP: Dumping workload cluster capz-e2e-fn7nrl/capz-e2e-fn7nrl-aks Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-fjv6h, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-84d976c568-kmlcx, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-vztcs, container kube-proxy
... skipping 8 lines ...
STEP: Fetching activity logs took 681.677936ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-fn7nrl" namespace
STEP: Deleting all clusters in the capz-e2e-fn7nrl namespace
STEP: Deleting cluster capz-e2e-fn7nrl-aks
INFO: Waiting for the Cluster capz-e2e-fn7nrl/capz-e2e-fn7nrl-aks to be deleted
STEP: Waiting for cluster capz-e2e-fn7nrl-aks to be deleted
E1121 19:43:52.296877   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 19:44:44.244284   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 19:45:19.694599   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 19:46:18.868065   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 19:46:48.977636   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-fn7nrl
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and 1 node" ran for 12m54s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a 1 control plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:455

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Sun, 21 Nov 2021 19:33:39 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-tj4tit" for hosting the cluster
Nov 21 19:33:39.919: INFO: starting to create namespace for hosting the "capz-e2e-tj4tit" test spec
2021/11/21 19:33:39 failed trying to get namespace (capz-e2e-tj4tit):namespaces "capz-e2e-tj4tit" not found
INFO: Creating namespace capz-e2e-tj4tit
INFO: Creating event watcher for namespace "capz-e2e-tj4tit"
Nov 21 19:33:39.974: INFO: Creating cluster identity secret
INFO: Cluster name is capz-e2e-tj4tit-oot
INFO: Creating the workload cluster with name "capz-e2e-tj4tit-oot" using the "external-cloud-provider" template (Kubernetes v1.22.4, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 53 lines ...
STEP: waiting for job default/curl-to-elb-jobf9seuny5i5p to be complete
Nov 21 19:42:15.820: INFO: waiting for job default/curl-to-elb-jobf9seuny5i5p to be complete
Nov 21 19:42:26.038: INFO: job default/curl-to-elb-jobf9seuny5i5p is complete, took 10.218395289s
STEP: connecting directly to the external LB service
Nov 21 19:42:26.038: INFO: starting attempts to connect directly to the external LB service
2021/11/21 19:42:26 [DEBUG] GET http://20.93.42.161
2021/11/21 19:42:56 [ERR] GET http://20.93.42.161 request failed: Get "http://20.93.42.161": dial tcp 20.93.42.161:80: i/o timeout
2021/11/21 19:42:56 [DEBUG] GET http://20.93.42.161: retrying in 1s (4 left)
2021/11/21 19:43:27 [ERR] GET http://20.93.42.161 request failed: Get "http://20.93.42.161": dial tcp 20.93.42.161:80: i/o timeout
2021/11/21 19:43:27 [DEBUG] GET http://20.93.42.161: retrying in 2s (3 left)
Nov 21 19:43:29.247: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 21 19:43:29.247: INFO: starting to delete external LB service webwptnnp-elb
Nov 21 19:43:29.368: INFO: starting to delete deployment webwptnnp
Nov 21 19:43:29.473: INFO: starting to delete job curl-to-elb-jobf9seuny5i5p
... skipping 34 lines ...
STEP: Fetching activity logs took 559.26972ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-tj4tit" namespace
STEP: Deleting all clusters in the capz-e2e-tj4tit namespace
STEP: Deleting cluster capz-e2e-tj4tit-oot
INFO: Waiting for the Cluster capz-e2e-tj4tit/capz-e2e-tj4tit-oot to be deleted
STEP: Waiting for cluster capz-e2e-tj4tit-oot to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-5sx4c, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-6pfmw, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-pl5rz, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-tj4tit-oot-control-plane-6b2fm, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-wmtdd, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-tj4tit-oot-control-plane-6b2fm, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-tj4tit-oot-control-plane-6b2fm, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-controller-manager, container cloud-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-2cfrw, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-d7kqr, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-zghtg, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-l9cl2, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-tj4tit-oot-control-plane-6b2fm, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-tkzql, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-tj4tit
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 17m58s on Ginkgo node 3 of 3

... skipping 10 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:413

INFO: "with a single control plane node and 1 node" started at Sun, 21 Nov 2021 19:32:09 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-7re6ri" for hosting the cluster
Nov 21 19:32:09.338: INFO: starting to create namespace for hosting the "capz-e2e-7re6ri" test spec
2021/11/21 19:32:09 failed trying to get namespace (capz-e2e-7re6ri):namespaces "capz-e2e-7re6ri" not found
INFO: Creating namespace capz-e2e-7re6ri
INFO: Creating event watcher for namespace "capz-e2e-7re6ri"
Nov 21 19:32:09.365: INFO: Creating cluster identity secret
INFO: Cluster name is capz-e2e-7re6ri-gpu
INFO: Creating the workload cluster with name "capz-e2e-7re6ri-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.4, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 53 lines ...
STEP: Fetching activity logs took 1.120183274s
STEP: Dumping all the Cluster API resources in the "capz-e2e-7re6ri" namespace
STEP: Deleting all clusters in the capz-e2e-7re6ri namespace
STEP: Deleting cluster capz-e2e-7re6ri-gpu
INFO: Waiting for the Cluster capz-e2e-7re6ri/capz-e2e-7re6ri-gpu to be deleted
STEP: Waiting for cluster capz-e2e-7re6ri-gpu to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-629ll, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-7t4r4, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-7re6ri
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and 1 node" ran for 34m19s on Ginkgo node 1 of 3

... skipping 57 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:532

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Sun, 21 Nov 2021 19:47:40 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-36j2js" for hosting the cluster
Nov 21 19:47:40.882: INFO: starting to create namespace for hosting the "capz-e2e-36j2js" test spec
2021/11/21 19:47:40 failed trying to get namespace (capz-e2e-36j2js):namespaces "capz-e2e-36j2js" not found
INFO: Creating namespace capz-e2e-36j2js
INFO: Creating event watcher for namespace "capz-e2e-36j2js"
Nov 21 19:47:40.919: INFO: Creating cluster identity secret
INFO: Cluster name is capz-e2e-36j2js-win-ha
INFO: Creating the workload cluster with name "capz-e2e-36j2js-win-ha" using the "windows" template (Kubernetes v1.22.4, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 12 lines ...
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created
clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-36j2js-win-ha-flannel created
configmap/cni-capz-e2e-36j2js-win-ha-flannel created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1121 19:47:47.139021   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 19:48:25.881585   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by capz-e2e-36j2js/capz-e2e-36j2js-win-ha-control-plane to be provisioned
STEP: Waiting for one control plane node to exist
E1121 19:49:11.578592   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 19:49:54.184738   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 19:50:33.053590   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be ready
INFO: Waiting for the remaining control plane machines managed by capz-e2e-36j2js/capz-e2e-36j2js-win-ha-control-plane to be provisioned
STEP: Waiting for all control plane nodes to exist
E1121 19:51:25.296402   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 19:52:23.811573   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 19:53:22.157054   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 19:54:09.507163   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 19:54:58.067338   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 19:55:31.065482   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane capz-e2e-36j2js/capz-e2e-36j2js-win-ha-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Waiting for the workload nodes to exist
INFO: Waiting for the machine pools to be provisioned
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/websvzrwm to be available
Nov 21 19:56:02.986: INFO: starting to wait for deployment to become available
E1121 19:56:05.561854   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 21 19:56:23.327: INFO: Deployment default/websvzrwm is now available, took 20.341252289s
STEP: creating an internal Load Balancer service
Nov 21 19:56:23.327: INFO: starting to create an internal Load Balancer service
STEP: waiting for service default/websvzrwm-ilb to be available
Nov 21 19:56:23.506: INFO: waiting for service default/websvzrwm-ilb to be available
E1121 19:57:05.500051   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 21 19:57:24.276: INFO: service default/websvzrwm-ilb is available, took 1m0.769915221s
STEP: connecting to the internal LB service from a curl pod
Nov 21 19:57:24.383: INFO: starting to create a curl to ilb job
STEP: waiting for job default/curl-to-ilb-job70zpj to be complete
Nov 21 19:57:24.532: INFO: waiting for job default/curl-to-ilb-job70zpj to be complete
Nov 21 19:57:34.747: INFO: job default/curl-to-ilb-job70zpj is complete, took 10.215112277s
STEP: deleting the ilb test resources
Nov 21 19:57:34.747: INFO: deleting the ilb service: websvzrwm-ilb
Nov 21 19:57:34.931: INFO: deleting the ilb job: curl-to-ilb-job70zpj
STEP: creating an external Load Balancer service
Nov 21 19:57:35.047: INFO: starting to create an external Load Balancer service
STEP: waiting for service default/websvzrwm-elb to be available
Nov 21 19:57:35.206: INFO: waiting for service default/websvzrwm-elb to be available
E1121 19:57:46.458066   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 19:58:43.943169   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 21 19:59:16.416: INFO: service default/websvzrwm-elb is available, took 1m41.21031979s
STEP: connecting to the external LB service from a curl pod
Nov 21 19:59:16.522: INFO: starting to create curl-to-elb job
STEP: waiting for job default/curl-to-elb-jobdqek8r5wh1x to be complete
Nov 21 19:59:16.662: INFO: waiting for job default/curl-to-elb-jobdqek8r5wh1x to be complete
E1121 19:59:23.337537   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 21 19:59:26.875: INFO: job default/curl-to-elb-jobdqek8r5wh1x is complete, took 10.212788455s
STEP: connecting directly to the external LB service
Nov 21 19:59:26.875: INFO: starting attempts to connect directly to the external LB service
2021/11/21 19:59:26 [DEBUG] GET http://20.93.45.153
E1121 19:59:53.852568   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
2021/11/21 19:59:56 [ERR] GET http://20.93.45.153 request failed: Get "http://20.93.45.153": dial tcp 20.93.45.153:80: i/o timeout
2021/11/21 19:59:56 [DEBUG] GET http://20.93.45.153: retrying in 1s (4 left)
Nov 21 19:59:58.083: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 21 19:59:58.083: INFO: starting to delete external LB service websvzrwm-elb
Nov 21 19:59:58.272: INFO: starting to delete deployment websvzrwm
Nov 21 19:59:58.397: INFO: starting to delete job curl-to-elb-jobdqek8r5wh1x
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web-windows4rfsjl to be available
Nov 21 19:59:58.790: INFO: starting to wait for deployment to become available
E1121 20:00:31.527307   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 21 20:01:09.711: INFO: Deployment default/web-windows4rfsjl is now available, took 1m10.920800503s
STEP: creating an internal Load Balancer service
Nov 21 20:01:09.711: INFO: starting to create an internal Load Balancer service
STEP: waiting for service default/web-windows4rfsjl-ilb to be available
Nov 21 20:01:09.874: INFO: waiting for service default/web-windows4rfsjl-ilb to be available
E1121 20:01:13.808458   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 21 20:02:10.623: INFO: service default/web-windows4rfsjl-ilb is available, took 1m0.748822654s
STEP: connecting to the internal LB service from a curl pod
E1121 20:02:10.717031   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 21 20:02:10.730: INFO: starting to create a curl to ilb job
STEP: waiting for job default/curl-to-ilb-jobtd4da to be complete
Nov 21 20:02:10.850: INFO: waiting for job default/curl-to-ilb-jobtd4da to be complete
Nov 21 20:02:21.064: INFO: job default/curl-to-ilb-jobtd4da is complete, took 10.213111748s
STEP: deleting the ilb test resources
Nov 21 20:02:21.064: INFO: deleting the ilb service: web-windows4rfsjl-ilb
Nov 21 20:02:21.214: INFO: deleting the ilb job: curl-to-ilb-jobtd4da
STEP: creating an external Load Balancer service
Nov 21 20:02:21.325: INFO: starting to create an external Load Balancer service
STEP: waiting for service default/web-windows4rfsjl-elb to be available
Nov 21 20:02:21.464: INFO: waiting for service default/web-windows4rfsjl-elb to be available
E1121 20:02:46.500622   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 20:03:36.269561   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 21 20:04:12.764: INFO: service default/web-windows4rfsjl-elb is available, took 1m51.299614399s
STEP: connecting to the external LB service from a curl pod
Nov 21 20:04:12.870: INFO: starting to create curl-to-elb job
STEP: waiting for job default/curl-to-elb-joboq5dzfxqztu to be complete
Nov 21 20:04:12.985: INFO: waiting for job default/curl-to-elb-joboq5dzfxqztu to be complete
Nov 21 20:04:23.198: INFO: job default/curl-to-elb-joboq5dzfxqztu is complete, took 10.212981511s
... skipping 6 lines ...
Nov 21 20:04:23.572: INFO: starting to delete deployment web-windows4rfsjl
Nov 21 20:04:23.684: INFO: starting to delete job curl-to-elb-joboq5dzfxqztu
STEP: Dumping logs from the "capz-e2e-36j2js-win-ha" workload cluster
STEP: Dumping workload cluster capz-e2e-36j2js/capz-e2e-36j2js-win-ha logs
Nov 21 20:04:23.839: INFO: INFO: Collecting logs for node capz-e2e-36j2js-win-ha-control-plane-cgmc5 in cluster capz-e2e-36j2js-win-ha in namespace capz-e2e-36j2js

E1121 20:04:25.236664   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 21 20:04:38.982: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-36j2js-win-ha-control-plane-cgmc5

Nov 21 20:04:40.364: INFO: INFO: Collecting logs for node capz-e2e-36j2js-win-ha-control-plane-xbb7h in cluster capz-e2e-36j2js-win-ha in namespace capz-e2e-36j2js

Nov 21 20:04:51.385: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-36j2js-win-ha-control-plane-xbb7h

Nov 21 20:04:51.888: INFO: INFO: Collecting logs for node capz-e2e-36j2js-win-ha-control-plane-wvkjd in cluster capz-e2e-36j2js-win-ha in namespace capz-e2e-36j2js

Nov 21 20:05:03.269: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-36j2js-win-ha-control-plane-wvkjd

E1121 20:05:03.382533   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 21 20:05:03.701: INFO: INFO: Collecting logs for node capz-e2e-36j2js-win-ha-md-0-pjjbd in cluster capz-e2e-36j2js-win-ha in namespace capz-e2e-36j2js

Nov 21 20:05:14.442: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-36j2js-win-ha-md-0-pjjbd

Nov 21 20:05:15.477: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster capz-e2e-36j2js-win-ha in namespace capz-e2e-36j2js

E1121 20:05:34.968715   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 21 20:05:49.027: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-36j2js-win-ha-md-win-bgnq9

Nov 21 20:05:49.430: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster capz-e2e-36j2js-win-ha in namespace capz-e2e-36j2js

Nov 21 20:06:16.722: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-36j2js-win-ha-md-win-bg7xj

STEP: Dumping workload cluster capz-e2e-36j2js/capz-e2e-36j2js-win-ha kube-system pod logs
E1121 20:06:17.682728   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Fetching kube-system pod logs took 891.227615ms
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-6g4wv, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-36j2js-win-ha-control-plane-cgmc5, container kube-scheduler
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-36j2js-win-ha-control-plane-xbb7h, container etcd
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-h4r8p, container coredns
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-nzfgc, container coredns
... skipping 16 lines ...
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-ckmcq, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-36j2js-win-ha-control-plane-xbb7h, container kube-controller-manager
STEP: Dumping workload cluster capz-e2e-36j2js/capz-e2e-36j2js-win-ha Azure activity log
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-hvgg8, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-36j2js-win-ha-control-plane-xbb7h, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-36j2js-win-ha-control-plane-wvkjd, container kube-scheduler
STEP: Got error while iterating over activity logs for resource group capz-e2e-36j2js-win-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000231949s
STEP: Dumping all the Cluster API resources in the "capz-e2e-36j2js" namespace
STEP: Deleting all clusters in the capz-e2e-36j2js namespace
STEP: Deleting cluster capz-e2e-36j2js-win-ha
INFO: Waiting for the Cluster capz-e2e-36j2js/capz-e2e-36j2js-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-36j2js-win-ha to be deleted
E1121 20:07:04.286129   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 20:07:53.079345   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 20:08:41.741658   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-ckmcq, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-8glss, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-x4l8z, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-36j2js-win-ha-control-plane-cgmc5, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-26ppr, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-36j2js-win-ha-control-plane-cgmc5, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-97hzh, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-kn22t, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-36j2js-win-ha-control-plane-cgmc5, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-36j2js-win-ha-control-plane-cgmc5, container kube-scheduler: http2: client connection lost
E1121 20:09:36.416227   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 20:10:28.130002   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 20:11:08.054743   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 20:11:57.814544   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 20:12:52.501823   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 20:13:28.832278   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 20:14:01.649221   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 20:14:46.297793   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 20:15:20.985075   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 20:15:56.875666   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 20:16:39.008602   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-36j2js
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1121 20:17:30.596033   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 20:18:28.272236   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 20:19:01.330173   24494 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-8576kd/events?resourceVersion=11160": dial tcp: lookup capz-e2e-8576kd-public-custom-vnet-d3171c65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 31m29s on Ginkgo node 2 of 3


• [SLOW TEST:1888.556 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:579

INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Sun, 21 Nov 2021 19:51:38 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-r84ful" for hosting the cluster
Nov 21 19:51:38.338: INFO: starting to create namespace for hosting the "capz-e2e-r84ful" test spec
2021/11/21 19:51:38 failed trying to get namespace (capz-e2e-r84ful):namespaces "capz-e2e-r84ful" not found
INFO: Creating namespace capz-e2e-r84ful
INFO: Creating event watcher for namespace "capz-e2e-r84ful"
Nov 21 19:51:38.377: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-r84ful-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-r84ful-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.4, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 53 lines ...
STEP: waiting for job default/curl-to-elb-joblwdnkfp6d73 to be complete
Nov 21 20:02:35.386: INFO: waiting for job default/curl-to-elb-joblwdnkfp6d73 to be complete
Nov 21 20:02:45.599: INFO: job default/curl-to-elb-joblwdnkfp6d73 is complete, took 10.212796412s
STEP: connecting directly to the external LB service
Nov 21 20:02:45.599: INFO: starting attempts to connect directly to the external LB service
2021/11/21 20:02:45 [DEBUG] GET http://20.93.49.55
2021/11/21 20:03:15 [ERR] GET http://20.93.49.55 request failed: Get "http://20.93.49.55": dial tcp 20.93.49.55:80: i/o timeout
2021/11/21 20:03:15 [DEBUG] GET http://20.93.49.55: retrying in 1s (4 left)
Nov 21 20:03:16.808: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 21 20:03:16.808: INFO: starting to delete external LB service webjjnw75-elb
Nov 21 20:03:16.938: INFO: starting to delete deployment webjjnw75
Nov 21 20:03:17.044: INFO: starting to delete job curl-to-elb-joblwdnkfp6d73
... skipping 40 lines ...
Nov 21 20:07:03.336: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-r84ful-win-vmss-control-plane-sx5x2

Nov 21 20:07:04.794: INFO: INFO: Collecting logs for node win-p-win000000 in cluster capz-e2e-r84ful-win-vmss in namespace capz-e2e-r84ful

Nov 21 20:07:21.311: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-e2e-r84ful-win-vmss-mp-0

Failed to get logs for machine pool capz-e2e-r84ful-win-vmss-mp-0, cluster capz-e2e-r84ful/capz-e2e-r84ful-win-vmss: [running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1]
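Every command in the failure above (`cat /var/log/cloud-init.log`, `journalctl …`) is a Linux-only collector being run against a Windows VMSS instance, which is why each exits with status 1. A hypothetical sketch of selecting collectors by node OS; the Windows command strings here are illustrative assumptions, not the capz log collector's real commands:

```go
package main

import "fmt"

// collectorCmds returns log-collection commands appropriate for a node's OS.
// The Linux branch mirrors the commands that failed in the run above; the
// Windows branch is a made-up placeholder for PowerShell-based collection.
func collectorCmds(os string) []string {
	if os == "windows" {
		return []string{
			"Get-EventLog -LogName System -Newest 200", // hypothetical
			"Get-Content C:\\k\\kubelet.log",           // hypothetical
		}
	}
	return []string{
		"cat /var/log/cloud-init.log",
		"journalctl --no-pager --output=short-precise -u kubelet.service",
	}
}

func main() {
	for _, c := range collectorCmds("windows") {
		fmt.Println(c)
	}
}
```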
Nov 21 20:07:21.926: INFO: INFO: Collecting logs for node win-p-win000000 in cluster capz-e2e-r84ful-win-vmss in namespace capz-e2e-r84ful

Nov 21 20:07:52.093: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set win-p-win

STEP: Dumping workload cluster capz-e2e-r84ful/capz-e2e-r84ful-win-vmss kube-system pod logs
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-r84ful-win-vmss-control-plane-sx5x2, container kube-apiserver
... skipping 7 lines ...
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-windows-amd64-4dfl7, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-proxy-5ww4m, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-xtchh, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-r84ful-win-vmss-control-plane-sx5x2, container kube-controller-manager
STEP: Dumping workload cluster capz-e2e-r84ful/capz-e2e-r84ful-win-vmss Azure activity log
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-2nlsv, container kube-proxy
STEP: Got error while iterating over activity logs for resource group capz-e2e-r84ful-win-vmss: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000605875s
STEP: Dumping all the Cluster API resources in the "capz-e2e-r84ful" namespace
STEP: Deleting all clusters in the capz-e2e-r84ful namespace
STEP: Deleting cluster capz-e2e-r84ful-win-vmss
INFO: Waiting for the Cluster capz-e2e-r84ful/capz-e2e-r84ful-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-r84ful-win-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-xtchh, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-99pmp, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-r84ful-win-vmss-control-plane-sx5x2, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-2nlsv, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-4dfl7, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-r47jd, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-r84ful-win-vmss-control-plane-sx5x2, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-r84ful-win-vmss-control-plane-sx5x2, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-b9lzs, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-r84ful-win-vmss-control-plane-sx5x2, container etcd: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-r84ful
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 29m22s on Ginkgo node 3 of 3

... skipping 9 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Workload cluster creation Creating a GPU-enabled cluster [It] with a single control plane node and 1 node 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:76

Ran 9 of 24 Specs in 5960.219 seconds
FAIL! -- 8 Passed | 1 Failed | 0 Pending | 15 Skipped


Ginkgo ran 1 suite in 1h40m46.458845766s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc
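The two silencing options the notice lists, as a shell sketch (per-session environment variable, or a persistent marker file):

```shell
# Acknowledge the Ginkgo 2.0 RC for the current shell session:
export ACK_GINKGO_RC=true
echo "$ACK_GINKGO_RC"

# Or persist the acknowledgement across sessions, as the notice suggests:
touch "$HOME/.ack-ginkgo-rc"
```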
make[1]: *** [Makefile:176: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:184: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...