Result: FAILURE
Tests: 1 failed / 8 succeeded
Started: 2021-11-15 18:33
Elapsed: 2h13m
Revision: main

Test Failures


capz-e2e Workload cluster creation Creating a Windows enabled VMSS cluster with dockershim with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node 55m13s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\sWindows\senabled\sVMSS\scluster\swith\sdockershim\swith\sa\ssingle\scontrol\splane\snode\sand\san\sLinux\sAzureMachinePool\swith\s1\snodes\sand\sWindows\sAzureMachinePool\swith\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:579
Timed out after 900.000s.
Expected
    <int>: 0
to equal
    <int>: 1
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/framework/machinepool_helpers.go:85
				
Full stdout/stderr is in junit.e2e_suite.1.xml.



8 passed tests and 14 skipped tests (details collapsed).

Error lines from build-log.txt

... skipping 425 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:288

INFO: "With ipv6 worker node" started at Mon, 15 Nov 2021 18:39:58 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-6jhxmn" for hosting the cluster
Nov 15 18:39:58.049: INFO: starting to create namespace for hosting the "capz-e2e-6jhxmn" test spec
2021/11/15 18:39:58 failed trying to get namespace (capz-e2e-6jhxmn):namespaces "capz-e2e-6jhxmn" not found
INFO: Creating namespace capz-e2e-6jhxmn
INFO: Creating event watcher for namespace "capz-e2e-6jhxmn"
Nov 15 18:39:58.140: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-6jhxmn-ipv6
INFO: Creating the workload cluster with name "capz-e2e-6jhxmn-ipv6" using the "ipv6" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
STEP: Fetching activity logs took 687.061629ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-6jhxmn" namespace
STEP: Deleting all clusters in the capz-e2e-6jhxmn namespace
STEP: Deleting cluster capz-e2e-6jhxmn-ipv6
INFO: Waiting for the Cluster capz-e2e-6jhxmn/capz-e2e-6jhxmn-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-6jhxmn-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-6jhxmn-ipv6-control-plane-qvmw8, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-h8gx8, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-6jhxmn-ipv6-control-plane-96q6q, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-6jhxmn-ipv6-control-plane-76zxc, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-gnkxd, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-6jhxmn-ipv6-control-plane-96q6q, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-g6qkp, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-nbnwj, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-6jhxmn-ipv6-control-plane-96q6q, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-6jhxmn-ipv6-control-plane-qvmw8, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-8nwwt, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-p4g4w, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-6jhxmn-ipv6-control-plane-qvmw8, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-6jhxmn-ipv6-control-plane-96q6q, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-xcb9j, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-hn42v, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-6jhxmn-ipv6-control-plane-qvmw8, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-267wj, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-gwhb9, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-6jhxmn-ipv6-control-plane-76zxc, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-6jhxmn-ipv6-control-plane-76zxc, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-6jhxmn-ipv6-control-plane-76zxc, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-r8dlb, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-6jhxmn
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 18m43s on Ginkgo node 2 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:205

INFO: "With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes" started at Mon, 15 Nov 2021 18:39:58 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-q6nzjp" for hosting the cluster
Nov 15 18:39:58.043: INFO: starting to create namespace for hosting the "capz-e2e-q6nzjp" test spec
2021/11/15 18:39:58 failed trying to get namespace (capz-e2e-q6nzjp):namespaces "capz-e2e-q6nzjp" not found
INFO: Creating namespace capz-e2e-q6nzjp
INFO: Creating event watcher for namespace "capz-e2e-q6nzjp"
Nov 15 18:39:58.144: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-q6nzjp-ha
INFO: Creating the workload cluster with name "capz-e2e-q6nzjp-ha" using the "(default)" template (Kubernetes v1.22.1, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 67 lines ...
STEP: waiting for job default/curl-to-elb-jobvi90yt64lp0 to be complete
Nov 15 18:50:38.577: INFO: waiting for job default/curl-to-elb-jobvi90yt64lp0 to be complete
Nov 15 18:50:48.801: INFO: job default/curl-to-elb-jobvi90yt64lp0 is complete, took 10.224476644s
STEP: connecting directly to the external LB service
Nov 15 18:50:48.801: INFO: starting attempts to connect directly to the external LB service
2021/11/15 18:50:48 [DEBUG] GET http://51.138.69.225
2021/11/15 18:51:18 [ERR] GET http://51.138.69.225 request failed: Get "http://51.138.69.225": dial tcp 51.138.69.225:80: i/o timeout
2021/11/15 18:51:18 [DEBUG] GET http://51.138.69.225: retrying in 1s (4 left)
Nov 15 18:51:20.023: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 15 18:51:20.023: INFO: starting to delete external LB service web400h5o-elb
Nov 15 18:51:20.180: INFO: starting to delete deployment web400h5o
Nov 15 18:51:20.299: INFO: starting to delete job curl-to-elb-jobvi90yt64lp0
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Nov 15 18:51:20.457: INFO: starting to create dev deployment namespace
2021/11/15 18:51:20 failed trying to get namespace (development):namespaces "development" not found
2021/11/15 18:51:20 namespace development does not exist, creating...
STEP: Creating production namespace
Nov 15 18:51:20.688: INFO: starting to create prod deployment namespace
2021/11/15 18:51:20 failed trying to get namespace (production):namespaces "production" not found
2021/11/15 18:51:20 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
Nov 15 18:51:20.918: INFO: starting to create frontend-prod deployments
Nov 15 18:51:21.036: INFO: starting to create frontend-dev deployments
Nov 15 18:51:21.153: INFO: starting to create backend deployments
Nov 15 18:51:21.268: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Nov 15 18:51:48.340: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.18.194 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 15 18:53:59.896: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
STEP: Applying a network policy to deny egress access in development namespace
Nov 15 18:54:00.328: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.18.194 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.18.194 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 15 18:58:22.041: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Nov 15 18:58:22.450: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.18.196 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 15 19:00:35.159: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
Nov 15 19:00:35.553: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.192.195 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.18.196 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 15 19:04:59.355: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Nov 15 19:04:59.754: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.18.194 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 15 19:07:12.474: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
Nov 15 19:07:12.864: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.18.194 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web-windowsyx17fj to be available
Nov 15 19:09:26.615: INFO: starting to wait for deployment to become available
Nov 15 19:10:27.444: INFO: Deployment default/web-windowsyx17fj is now available, took 1m0.828707566s
... skipping 20 lines ...
STEP: waiting for job default/curl-to-elb-jobgp1lhz6433b to be complete
Nov 15 19:11:49.694: INFO: waiting for job default/curl-to-elb-jobgp1lhz6433b to be complete
Nov 15 19:11:59.917: INFO: job default/curl-to-elb-jobgp1lhz6433b is complete, took 10.222910797s
STEP: connecting directly to the external LB service
Nov 15 19:11:59.917: INFO: starting attempts to connect directly to the external LB service
2021/11/15 19:11:59 [DEBUG] GET http://20.93.233.236
2021/11/15 19:12:29 [ERR] GET http://20.93.233.236 request failed: Get "http://20.93.233.236": dial tcp 20.93.233.236:80: i/o timeout
2021/11/15 19:12:29 [DEBUG] GET http://20.93.233.236: retrying in 1s (4 left)
Nov 15 19:12:31.136: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 15 19:12:31.136: INFO: starting to delete external LB service web-windowsyx17fj-elb
Nov 15 19:12:31.324: INFO: starting to delete deployment web-windowsyx17fj
Nov 15 19:12:31.437: INFO: starting to delete job curl-to-elb-jobgp1lhz6433b
... skipping 20 lines ...
Nov 15 19:13:43.986: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-q6nzjp-ha-md-0-vtk55

Nov 15 19:13:44.433: INFO: INFO: Collecting logs for node 10.1.0.7 in cluster capz-e2e-q6nzjp-ha in namespace capz-e2e-q6nzjp

Nov 15 19:14:20.933: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-q6nzjp-ha-md-win-g8xcr

Failed to get logs for machine capz-e2e-q6nzjp-ha-md-win-6d5d48f9bf-45w7p, cluster capz-e2e-q6nzjp/capz-e2e-q6nzjp-ha: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov 15 19:14:21.808: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster capz-e2e-q6nzjp-ha in namespace capz-e2e-q6nzjp

Nov 15 19:14:58.057: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-q6nzjp-ha-md-win-9s78s

Failed to get logs for machine capz-e2e-q6nzjp-ha-md-win-6d5d48f9bf-9bkgn, cluster capz-e2e-q6nzjp/capz-e2e-q6nzjp-ha: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster capz-e2e-q6nzjp/capz-e2e-q6nzjp-ha kube-system pod logs
STEP: Fetching kube-system pod logs took 907.966172ms
STEP: Creating log watcher for controller kube-system/calico-node-8cgnd, container calico-node
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-q6nzjp-ha-control-plane-l9c2b, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-q6nzjp-ha-control-plane-h2tc7, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-node-fl9lq, container calico-node
... skipping 22 lines ...
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-q6nzjp-ha-control-plane-c8tdv, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-wnqh6, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-q6nzjp-ha-control-plane-h2tc7, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-b9n5t, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-cjsdq, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-gq2fn, container kube-proxy
STEP: Got error while iterating over activity logs for resource group capz-e2e-q6nzjp-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000444986s
STEP: Dumping all the Cluster API resources in the "capz-e2e-q6nzjp" namespace
STEP: Deleting all clusters in the capz-e2e-q6nzjp namespace
STEP: Deleting cluster capz-e2e-q6nzjp-ha
INFO: Waiting for the Cluster capz-e2e-q6nzjp/capz-e2e-q6nzjp-ha to be deleted
STEP: Waiting for cluster capz-e2e-q6nzjp-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-q6nzjp-ha-control-plane-h2tc7, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-q6nzjp-ha-control-plane-c8tdv, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-gq2fn, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-lr9cg, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-v6cws, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-q6nzjp-ha-control-plane-h2tc7, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-ms5cv, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-4zfg2, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-59zbj, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-cjsdq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-q6nzjp-ha-control-plane-c8tdv, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-kx67c, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-z6z5s, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-lr9cg, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-q6nzjp-ha-control-plane-c8tdv, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-b9n5t, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-fl9lq, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-q6nzjp-ha-control-plane-c8tdv, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-q6nzjp-ha-control-plane-h2tc7, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-8cgnd, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-q6nzjp-ha-control-plane-h2tc7, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-ptjnm, container calico-kube-controllers: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-q6nzjp
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes" ran for 42m21s on Ginkgo node 3 of 3

... skipping 8 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:144

INFO: "Creates a public management cluster in the same vnet" started at Mon, 15 Nov 2021 18:39:58 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-dew694" for hosting the cluster
Nov 15 18:39:58.042: INFO: starting to create namespace for hosting the "capz-e2e-dew694" test spec
2021/11/15 18:39:58 failed trying to get namespace (capz-e2e-dew694):namespaces "capz-e2e-dew694" not found
INFO: Creating namespace capz-e2e-dew694
INFO: Creating event watcher for namespace "capz-e2e-dew694"
Nov 15 18:39:58.116: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-dew694-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-p2wdw, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-h4t98, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-dew694-public-custom-vnet-control-plane-xs7bw, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-vdtdh, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-g596g, container coredns
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-dew694-public-custom-vnet-control-plane-xs7bw, container kube-controller-manager
STEP: Got error while iterating over activity logs for resource group capz-e2e-dew694-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001203754s
STEP: Dumping all the Cluster API resources in the "capz-e2e-dew694" namespace
STEP: Deleting all clusters in the capz-e2e-dew694 namespace
STEP: Deleting cluster capz-e2e-dew694-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-dew694/capz-e2e-dew694-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-dew694-public-custom-vnet to be deleted
W1115 19:27:42.652249   24498 reflector.go:441] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I1115 19:28:14.214771   24498 trace.go:205] Trace[203181787]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (15-Nov-2021 19:27:44.213) (total time: 30001ms):
Trace[203181787]: [30.00144177s] [30.00144177s] END
E1115 19:28:14.214868   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp 51.138.67.21:6443: i/o timeout
I1115 19:28:47.325449   24498 trace.go:205] Trace[1233370023]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (15-Nov-2021 19:28:17.324) (total time: 30000ms):
Trace[1233370023]: [30.000546211s] [30.000546211s] END
E1115 19:28:47.325509   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp 51.138.67.21:6443: i/o timeout
I1115 19:29:22.027180   24498 trace.go:205] Trace[1065559935]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (15-Nov-2021 19:28:52.025) (total time: 30001ms):
Trace[1065559935]: [30.001469081s] [30.001469081s] END
E1115 19:29:22.027253   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp 51.138.67.21:6443: i/o timeout
I1115 19:30:01.101159   24498 trace.go:205] Trace[64315856]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (15-Nov-2021 19:29:31.100) (total time: 30000ms):
Trace[64315856]: [30.000756495s] [30.000756495s] END
E1115 19:30:01.101258   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp 51.138.67.21:6443: i/o timeout
I1115 19:30:50.107711   24498 trace.go:205] Trace[1865623865]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (15-Nov-2021 19:30:20.106) (total time: 30000ms):
Trace[1865623865]: [30.000779596s] [30.000779596s] END
E1115 19:30:50.107786   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp 51.138.67.21:6443: i/o timeout
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-dew694
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Nov 15 19:31:23.212: INFO: deleting an existing virtual network "custom-vnet"
E1115 19:31:28.464945   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 15 19:31:34.117: INFO: deleting an existing route table "node-routetable"
Nov 15 19:31:45.004: INFO: deleting an existing network security group "node-nsg"
Nov 15 19:31:55.666: INFO: deleting an existing network security group "control-plane-nsg"
Nov 15 19:32:07.012: INFO: verifying the existing resource group "capz-e2e-dew694-public-custom-vnet" is empty
Nov 15 19:32:07.705: INFO: deleting the existing resource group "capz-e2e-dew694-public-custom-vnet"
E1115 19:32:10.090262   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 19:32:50.292622   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1115 19:33:42.684925   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 54m18s on Ginkgo node 1 of 3


• [SLOW TEST:3257.511 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:334

INFO: "with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes" started at Mon, 15 Nov 2021 18:58:40 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-zvvkbh" for hosting the cluster
Nov 15 18:58:40.839: INFO: starting to create namespace for hosting the "capz-e2e-zvvkbh" test spec
2021/11/15 18:58:40 failed trying to get namespace (capz-e2e-zvvkbh):namespaces "capz-e2e-zvvkbh" not found
INFO: Creating namespace capz-e2e-zvvkbh
INFO: Creating event watcher for namespace "capz-e2e-zvvkbh"
Nov 15 18:58:40.875: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-zvvkbh-vmss
INFO: Creating the workload cluster with name "capz-e2e-zvvkbh-vmss" using the "machine-pool" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 142 lines ...
Nov 15 19:21:59.685: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-e2e-zvvkbh-vmss-mp-0

Nov 15 19:22:00.271: INFO: Collecting logs for node win-p-win000001 in cluster capz-e2e-zvvkbh-vmss in namespace capz-e2e-zvvkbh

Nov 15 19:22:16.793: INFO: Collecting boot logs for VMSS instance 1 of scale set capz-e2e-zvvkbh-vmss-mp-0

Failed to get logs for machine pool capz-e2e-zvvkbh-vmss-mp-0, cluster capz-e2e-zvvkbh/capz-e2e-zvvkbh-vmss: [[running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1], [running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1]]
Nov 15 19:22:17.223: INFO: Collecting logs for node win-p-win000000 in cluster capz-e2e-zvvkbh-vmss in namespace capz-e2e-zvvkbh

Nov 15 19:23:03.573: INFO: Collecting boot logs for VMSS instance 0 of scale set win-p-win

Nov 15 19:23:03.993: INFO: Collecting logs for node win-p-win000001 in cluster capz-e2e-zvvkbh-vmss in namespace capz-e2e-zvvkbh

Nov 15 19:23:41.847: INFO: Collecting boot logs for VMSS instance 1 of scale set win-p-win

Failed to get logs for machine pool capz-e2e-zvvkbh-vmss-mp-win, cluster capz-e2e-zvvkbh/capz-e2e-zvvkbh-vmss: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster capz-e2e-zvvkbh/capz-e2e-zvvkbh-vmss kube-system pod logs
STEP: Fetching kube-system pod logs took 1.156418939s
STEP: Creating log watcher for controller kube-system/calico-node-9qznv, container calico-node
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-zvvkbh-vmss-control-plane-mfnzh, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-gcwfb, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-windows-fn2w8, container calico-node-startup
... skipping 10 lines ...
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-zvvkbh-vmss-control-plane-mfnzh, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-zvvkbh-vmss-control-plane-mfnzh, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-g5rtq, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-sbqsl, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-2wldb, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-46wbw, container calico-node
STEP: Got error while iterating over activity logs for resource group capz-e2e-zvvkbh-vmss: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000433532s
STEP: Dumping all the Cluster API resources in the "capz-e2e-zvvkbh" namespace
STEP: Deleting all clusters in the capz-e2e-zvvkbh namespace
STEP: Deleting cluster capz-e2e-zvvkbh-vmss
INFO: Waiting for the Cluster capz-e2e-zvvkbh/capz-e2e-zvvkbh-vmss to be deleted
STEP: Waiting for cluster capz-e2e-zvvkbh-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-gcwfb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-fn2w8, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-fn2w8, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-2wldb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-67jd6, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-67jd6, container calico-node-startup: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-zvvkbh
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes" ran for 39m21s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:413

INFO: "with a single control plane node and 1 node" started at Mon, 15 Nov 2021 19:22:19 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-r7jfpi" for hosting the cluster
Nov 15 19:22:19.409: INFO: starting to create namespace for hosting the "capz-e2e-r7jfpi" test spec
2021/11/15 19:22:19 failed trying to get namespace (capz-e2e-r7jfpi):namespaces "capz-e2e-r7jfpi" not found
INFO: Creating namespace capz-e2e-r7jfpi
INFO: Creating event watcher for namespace "capz-e2e-r7jfpi"
Nov 15 19:22:19.448: INFO: Creating cluster identity secret "cluster-identity-secret"
INFO: Cluster name is capz-e2e-r7jfpi-gpu
INFO: Creating the workload cluster with name "capz-e2e-r7jfpi-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 58 lines ...
STEP: Fetching activity logs took 583.409117ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-r7jfpi" namespace
STEP: Deleting all clusters in the capz-e2e-r7jfpi namespace
STEP: Deleting cluster capz-e2e-r7jfpi-gpu
INFO: Waiting for the Cluster capz-e2e-r7jfpi/capz-e2e-r7jfpi-gpu to be deleted
STEP: Waiting for cluster capz-e2e-r7jfpi-gpu to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-r7jfpi-gpu-control-plane-lqtcj, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-r7jfpi-gpu-control-plane-lqtcj, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-zbdn2, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-r7jfpi-gpu-control-plane-lqtcj, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-xxpks, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-9nds4, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-7rnx9, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-r7jfpi-gpu-control-plane-lqtcj, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-6dmd6, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-r7jfpi
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and 1 node" ran for 21m48s on Ginkgo node 3 of 3

... skipping 10 lines ...
  with a 1 control plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:455

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Mon, 15 Nov 2021 19:34:15 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-u3zv4i" for hosting the cluster
Nov 15 19:34:15.556: INFO: starting to create namespace for hosting the "capz-e2e-u3zv4i" test spec
2021/11/15 19:34:15 failed trying to get namespace (capz-e2e-u3zv4i):namespaces "capz-e2e-u3zv4i" not found
INFO: Creating namespace capz-e2e-u3zv4i
INFO: Creating event watcher for namespace "capz-e2e-u3zv4i"
Nov 15 19:34:15.592: INFO: Creating cluster identity secret "cluster-identity-secret"
INFO: Cluster name is capz-e2e-u3zv4i-oot
INFO: Creating the workload cluster with name "capz-e2e-u3zv4i-oot" using the "external-cloud-provider" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 13 lines ...
configmap/cloud-node-manager-addon created
clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-u3zv4i-oot-calico created
configmap/cni-capz-e2e-u3zv4i-oot-calico created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1115 19:34:28.270964   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 19:35:05.881971   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 19:35:36.287204   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by capz-e2e-u3zv4i/capz-e2e-u3zv4i-oot-control-plane to be provisioned
STEP: Waiting for one control plane node to exist
E1115 19:36:34.535748   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 19:37:30.346509   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 19:38:10.052541   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane capz-e2e-u3zv4i/capz-e2e-u3zv4i-oot-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
E1115 19:38:45.385682   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 19:39:37.601821   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for the machine pools to be provisioned
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web1b5j2b to be available
Nov 15 19:40:07.929: INFO: starting to wait for deployment to become available
E1115 19:40:25.169042   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 15 19:40:38.405: INFO: Deployment default/web1b5j2b is now available, took 30.47581547s
STEP: creating an internal Load Balancer service
Nov 15 19:40:38.405: INFO: starting to create an internal Load Balancer service
STEP: waiting for service default/web1b5j2b-ilb to be available
Nov 15 19:40:38.535: INFO: waiting for service default/web1b5j2b-ilb to be available
E1115 19:40:59.016604   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 19:41:39.696728   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 15 19:41:59.518: INFO: service default/web1b5j2b-ilb is available, took 1m20.983144765s
STEP: connecting to the internal LB service from a curl pod
Nov 15 19:41:59.627: INFO: starting to create a curl to ilb job
STEP: waiting for job default/curl-to-ilb-jobmyrmg to be complete
Nov 15 19:41:59.751: INFO: waiting for job default/curl-to-ilb-jobmyrmg to be complete
Nov 15 19:42:09.968: INFO: job default/curl-to-ilb-jobmyrmg is complete, took 10.216750086s
STEP: deleting the ilb test resources
Nov 15 19:42:09.968: INFO: deleting the ilb service: web1b5j2b-ilb
Nov 15 19:42:10.098: INFO: deleting the ilb job: curl-to-ilb-jobmyrmg
STEP: creating an external Load Balancer service
Nov 15 19:42:10.207: INFO: starting to create an external Load Balancer service
STEP: waiting for service default/web1b5j2b-elb to be available
Nov 15 19:42:10.334: INFO: waiting for service default/web1b5j2b-elb to be available
E1115 19:42:26.510094   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 19:43:01.291466   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 15 19:43:31.333: INFO: service default/web1b5j2b-elb is available, took 1m20.998432531s
STEP: connecting to the external LB service from a curl pod
Nov 15 19:43:31.441: INFO: starting to create curl-to-elb job
STEP: waiting for job default/curl-to-elb-jobftchlainkgy to be complete
Nov 15 19:43:31.554: INFO: waiting for job default/curl-to-elb-jobftchlainkgy to be complete
Nov 15 19:43:41.772: INFO: job default/curl-to-elb-jobftchlainkgy is complete, took 10.217770438s
STEP: connecting directly to the external LB service
Nov 15 19:43:41.772: INFO: starting attempts to connect directly to the external LB service
2021/11/15 19:43:41 [DEBUG] GET http://51.124.7.22
E1115 19:43:45.924221   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
2021/11/15 19:44:11 [ERR] GET http://51.124.7.22 request failed: Get "http://51.124.7.22": dial tcp 51.124.7.22:80: i/o timeout
2021/11/15 19:44:11 [DEBUG] GET http://51.124.7.22: retrying in 1s (4 left)
Nov 15 19:44:12.995: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 15 19:44:12.995: INFO: starting to delete external LB service web1b5j2b-elb
Nov 15 19:44:13.122: INFO: starting to delete deployment web1b5j2b
Nov 15 19:44:13.231: INFO: starting to delete job curl-to-elb-jobftchlainkgy
... skipping 2 lines ...
Nov 15 19:44:13.383: INFO: Collecting logs for node capz-e2e-u3zv4i-oot-control-plane-67fd7 in cluster capz-e2e-u3zv4i-oot in namespace capz-e2e-u3zv4i

Nov 15 19:44:27.799: INFO: Collecting boot logs for AzureMachine capz-e2e-u3zv4i-oot-control-plane-67fd7

Nov 15 19:44:29.136: INFO: Collecting logs for node capz-e2e-u3zv4i-oot-md-0-x9swf in cluster capz-e2e-u3zv4i-oot in namespace capz-e2e-u3zv4i

E1115 19:44:32.495343   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 15 19:44:43.054: INFO: Collecting boot logs for AzureMachine capz-e2e-u3zv4i-oot-md-0-x9swf

Nov 15 19:44:43.575: INFO: Collecting logs for node capz-e2e-u3zv4i-oot-md-0-ss4cc in cluster capz-e2e-u3zv4i-oot in namespace capz-e2e-u3zv4i

Nov 15 19:44:56.898: INFO: Collecting boot logs for AzureMachine capz-e2e-u3zv4i-oot-md-0-ss4cc

... skipping 20 lines ...
STEP: Fetching activity logs took 654.344532ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-u3zv4i" namespace
STEP: Deleting all clusters in the capz-e2e-u3zv4i namespace
STEP: Deleting cluster capz-e2e-u3zv4i-oot
INFO: Waiting for the Cluster capz-e2e-u3zv4i/capz-e2e-u3zv4i-oot to be deleted
STEP: Waiting for cluster capz-e2e-u3zv4i-oot to be deleted
E1115 19:45:17.361102   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-u3zv4i-oot-control-plane-67fd7, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-controller-manager, container cloud-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-u3zv4i-oot-control-plane-67fd7, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-u3zv4i-oot-control-plane-67fd7, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-mk59p, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-h5dqp, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-rqcxw, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-4g7kj, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-97qlm, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-g6j9m, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-ghmsg, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-u3zv4i-oot-control-plane-67fd7, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-j7vq9, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-nldzp, container kube-proxy: http2: client connection lost
E1115 19:46:10.783813   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 19:47:10.440368   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 19:47:44.716545   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 19:48:27.698072   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 19:49:18.818187   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 19:50:08.298205   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-u3zv4i
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1115 19:50:50.957713   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 19:51:39.307513   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 17m27s on Ginkgo node 1 of 3


• [SLOW TEST:1047.422 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:490

INFO: "with a single control plane node and 1 node" started at Mon, 15 Nov 2021 19:38:01 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-r4rf27" for hosting the cluster
Nov 15 19:38:01.694: INFO: starting to create namespace for hosting the "capz-e2e-r4rf27" test spec
2021/11/15 19:38:01 failed trying to get namespace (capz-e2e-r4rf27):namespaces "capz-e2e-r4rf27" not found
INFO: Creating namespace capz-e2e-r4rf27
INFO: Creating event watcher for namespace "capz-e2e-r4rf27"
Nov 15 19:38:01.735: INFO: Creating cluster identity secret "cluster-identity-secret"
INFO: Cluster name is capz-e2e-r4rf27-aks
INFO: Creating the workload cluster with name "capz-e2e-r4rf27-aks" using the "aks-multi-tenancy" template (Kubernetes v1.19.13, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 34 lines ...
STEP: Dumping logs from the "capz-e2e-r4rf27-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-r4rf27/capz-e2e-r4rf27-aks logs
Nov 15 19:42:32.166: INFO: Collecting logs for node aks-agentpool1-12379936-vmss000000 in cluster capz-e2e-r4rf27-aks in namespace capz-e2e-r4rf27

Nov 15 19:44:42.807: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool0, cluster capz-e2e-r4rf27/capz-e2e-r4rf27-aks: [dialing public load balancer at capz-e2e-r4rf27-aks-bb6b8360.hcp.westeurope.azmk8s.io: dial tcp 52.149.73.254:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
Nov 15 19:44:43.469: INFO: Collecting logs for node aks-agentpool1-12379936-vmss000000 in cluster capz-e2e-r4rf27-aks in namespace capz-e2e-r4rf27

Nov 15 19:46:53.883: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool1, cluster capz-e2e-r4rf27/capz-e2e-r4rf27-aks: [dialing public load balancer at capz-e2e-r4rf27-aks-bb6b8360.hcp.westeurope.azmk8s.io: dial tcp 52.149.73.254:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
STEP: Dumping workload cluster capz-e2e-r4rf27/capz-e2e-r4rf27-aks kube-system pod logs
STEP: Fetching kube-system pod logs took 4.307567982s
STEP: Dumping workload cluster capz-e2e-r4rf27/capz-e2e-r4rf27-aks Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-ch5g7, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-84d976c568-hm6lc, container coredns
STEP: Creating log watcher for controller kube-system/calico-typha-deployment-76cb9744d8-cnsqs, container calico-typha
... skipping 32 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:532

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Mon, 15 Nov 2021 19:44:07 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-ovodhq" for hosting the cluster
Nov 15 19:44:07.638: INFO: starting to create namespace for hosting the "capz-e2e-ovodhq" test spec
2021/11/15 19:44:07 failed trying to get namespace (capz-e2e-ovodhq):namespaces "capz-e2e-ovodhq" not found
INFO: Creating namespace capz-e2e-ovodhq
INFO: Creating event watcher for namespace "capz-e2e-ovodhq"
Nov 15 19:44:07.673: INFO: Creating cluster identity secret "cluster-identity-secret"
INFO: Cluster name is capz-e2e-ovodhq-win-ha
INFO: Creating the workload cluster with name "capz-e2e-ovodhq-win-ha" using the "windows" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 55 lines ...
STEP: waiting for job default/curl-to-elb-joblyrapzil1il to be complete
Nov 15 19:53:34.022: INFO: waiting for job default/curl-to-elb-joblyrapzil1il to be complete
Nov 15 19:53:44.256: INFO: job default/curl-to-elb-joblyrapzil1il is complete, took 10.233724073s
STEP: connecting directly to the external LB service
Nov 15 19:53:44.256: INFO: starting attempts to connect directly to the external LB service
2021/11/15 19:53:44 [DEBUG] GET http://20.54.238.226
2021/11/15 19:54:14 [ERR] GET http://20.54.238.226 request failed: Get "http://20.54.238.226": dial tcp 20.54.238.226:80: i/o timeout
2021/11/15 19:54:14 [DEBUG] GET http://20.54.238.226: retrying in 1s (4 left)
Nov 15 19:54:30.798: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 15 19:54:30.798: INFO: starting to delete external LB service webi64ley-elb
Nov 15 19:54:31.014: INFO: starting to delete deployment webi64ley
Nov 15 19:54:31.140: INFO: starting to delete job curl-to-elb-joblyrapzil1il
... skipping 85 lines ...
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-ovodhq-win-ha-control-plane-pfqnv, container kube-scheduler
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-ovodhq-win-ha-control-plane-pfqnv, container etcd
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-ovodhq-win-ha-control-plane-wpt67, container etcd
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-ovodhq-win-ha-control-plane-9spns, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-ovodhq-win-ha-control-plane-pfqnv, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-7qgjg, container kube-proxy
STEP: Got error while iterating over activity logs for resource group capz-e2e-ovodhq-win-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000843766s
STEP: Dumping all the Cluster API resources in the "capz-e2e-ovodhq" namespace
STEP: Deleting all clusters in the capz-e2e-ovodhq namespace
STEP: Deleting cluster capz-e2e-ovodhq-win-ha
INFO: Waiting for the Cluster capz-e2e-ovodhq/capz-e2e-ovodhq-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-ovodhq-win-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-5nxw8, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-ovodhq-win-ha-control-plane-pfqnv, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-ovodhq-win-ha-control-plane-pfqnv, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-ww4x8, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-ovodhq-win-ha-control-plane-9spns, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-nscs4, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-ovodhq-win-ha-control-plane-pfqnv, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-pcvr2, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-ovodhq-win-ha-control-plane-pfqnv, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-ovodhq-win-ha-control-plane-9spns, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-ovodhq-win-ha-control-plane-9spns, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-ovodhq-win-ha-control-plane-9spns, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-ovodhq
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 25m10s on Ginkgo node 3 of 3

... skipping 3 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating a Windows Enabled cluster with dockershim
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:530
    With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:532
------------------------------
{"component":"entrypoint","file":"prow/entrypoint/run.go:165","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","severity":"error","time":"2021-11-15T20:33:22Z"}
++ early_exit_handler
++ '[' -n 166 ']'
++ kill -TERM 166
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 19 lines ...
  with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:579

INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Mon, 15 Nov 2021 19:51:42 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-jjb2u7" for hosting the cluster
Nov 15 19:51:42.980: INFO: starting to create namespace for hosting the "capz-e2e-jjb2u7" test spec
2021/11/15 19:51:42 failed trying to get namespace (capz-e2e-jjb2u7):namespaces "capz-e2e-jjb2u7" not found
INFO: Creating namespace capz-e2e-jjb2u7
INFO: Creating event watcher for namespace "capz-e2e-jjb2u7"
Nov 15 19:51:43.010: INFO: Creating cluster identity secret "cluster-identity-secret"
INFO: Cluster name is capz-e2e-jjb2u7-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-jjb2u7-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 12 lines ...
kubeadmconfig.bootstrap.cluster.x-k8s.io/capz-e2e-jjb2u7-win-vmss-mp-win created
clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-jjb2u7-win-vmss-flannel created
configmap/cni-capz-e2e-jjb2u7-win-vmss-flannel created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1115 19:52:29.684690   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by capz-e2e-jjb2u7/capz-e2e-jjb2u7-win-vmss-control-plane to be provisioned
STEP: Waiting for one control plane node to exist
E1115 19:53:25.972461   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 3 lines ...
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane capz-e2e-jjb2u7/capz-e2e-jjb2u7-win-vmss-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
E1115 19:56:39.872932   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for the machine deployments to be provisioned
INFO: Waiting for the machine pools to be provisioned
STEP: Waiting for the machine pool workload nodes to exist
E1115 19:57:38.079849   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 19:58:08.688255   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 19:58:42.607390   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 19:59:24.071894   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Waiting for the machine pool workload nodes to exist
E1115 20:00:12.987495   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 20 lines ...
STEP: Dumping logs from the "capz-e2e-jjb2u7-win-vmss" workload cluster
STEP: Dumping workload cluster capz-e2e-jjb2u7/capz-e2e-jjb2u7-win-vmss logs
Nov 15 20:14:55.029: INFO: Collecting logs for node capz-e2e-jjb2u7-win-vmss-control-plane-2xs4j in cluster capz-e2e-jjb2u7-win-vmss in namespace capz-e2e-jjb2u7

Nov 15 20:15:10.324: INFO: Collecting boot logs for AzureMachine capz-e2e-jjb2u7-win-vmss-control-plane-2xs4j

Nov 15 20:15:11.831: INFO: Collecting logs for node win-p-win000000 in cluster capz-e2e-jjb2u7-win-vmss in namespace capz-e2e-jjb2u7

Nov 15 20:15:30.980: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-e2e-jjb2u7-win-vmss-mp-0

Failed to get logs for machine pool capz-e2e-jjb2u7-win-vmss-mp-0, cluster capz-e2e-jjb2u7/capz-e2e-jjb2u7-win-vmss: [running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1]
STEP: Dumping workload cluster capz-e2e-jjb2u7/capz-e2e-jjb2u7-win-vmss kube-system pod logs
STEP: Fetching kube-system pod logs took 969.112704ms
STEP: Dumping workload cluster capz-e2e-jjb2u7/capz-e2e-jjb2u7-win-vmss Azure activity log
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-wkr8l, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-jjb2u7-win-vmss-control-plane-2xs4j, container kube-apiserver
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-jjb2u7-win-vmss-control-plane-2xs4j, container etcd
... skipping 9 lines ...
STEP: Fetching activity logs took 1.00738473s
STEP: Dumping all the Cluster API resources in the "capz-e2e-jjb2u7" namespace
STEP: Deleting all clusters in the capz-e2e-jjb2u7 namespace
STEP: Deleting cluster capz-e2e-jjb2u7-win-vmss
INFO: Waiting for the Cluster capz-e2e-jjb2u7/capz-e2e-jjb2u7-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-jjb2u7-win-vmss to be deleted
E1115 20:15:39.615810   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 21 lines ...
E1115 20:31:29.472187   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 20:32:15.569393   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 20:32:48.002941   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 20:33:23.260963   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 20:34:19.991850   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 20:35:00.411131   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 20:35:59.119448   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 20:36:47.189650   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 20:37:46.462546   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 20:38:22.723341   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 20:39:15.456502   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 20:39:59.921447   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 20:40:35.272774   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 20:41:06.582663   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 20:41:54.860865   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 20:42:37.734103   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 20:43:13.563501   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 20:43:49.724253   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 20:44:26.006704   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 20:45:25.764330   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Redacting sensitive information from logs
E1115 20:46:14.590016   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-dew694/events?resourceVersion=10145": dial tcp: lookup capz-e2e-dew694-public-custom-vnet-ba8a8d19.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host


• Failure [3313.334 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating a Windows enabled VMSS cluster with dockershim
... skipping 45 lines ...
    testing.tRunner(0xc000103c80, 0x23784b0)
    	/usr/local/go/src/testing/testing.go:1193 +0xef
    created by testing.(*T).Run
    	/usr/local/go/src/testing/testing.go:1238 +0x2b3
------------------------------
STEP: Tearing down the management cluster
INFO: Deleting the kind cluster "capz-e2e" failed. You may need to remove this by hand.



Summarizing 1 Failure:

[Fail] Workload cluster creation Creating a Windows enabled VMSS cluster with dockershim [It] with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/framework/machinepool_helpers.go:85

Ran 9 of 23 Specs in 7736.950 seconds
FAIL! -- 8 Passed | 1 Failed | 0 Pending | 14 Skipped


Ginkgo ran 1 suite in 2h10m16.897323724s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc
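The two silencing options the Ginkgo notice lists above can be applied as shell commands; a minimal sketch, assuming a POSIX shell and a writable `$HOME`:

```shell
# Option 1: acknowledge the RC notice for the current shell session
# via the environment variable the notice names.
export ACK_GINKGO_RC=true

# Option 2: persist the acknowledgement via the marker file Ginkgo checks.
touch "$HOME/.ack-ginkgo-rc"

# Show what was set, for confirmation.
echo "ACK_GINKGO_RC=$ACK_GINKGO_RC"
test -f "$HOME/.ack-ginkgo-rc" && echo "marker file present"
```

Either option alone suffices; the environment variable is the usual choice in CI (e.g. exported in the Makefile or job spec), while the marker file is convenient for a developer workstation.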
make[1]: *** [Makefile:176: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:184: test-e2e] Error 2
{"component":"entrypoint","file":"prow/entrypoint/run.go:252","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process gracefully exited before 15m0s grace period","severity":"error","time":"2021-11-15T20:46:56Z"}