Result: FAILURE
Tests: 1 failed / 1 succeeded
Started: 2021-11-22 06:36
Elapsed: 2h15m
Revision: main

Test Failures


capz-e2e Workload cluster creation Creating a GPU-enabled cluster with a single control plane node and 1 node (34m25s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\sGPU\-enabled\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:413
Timed out after 1200.001s.
Expected
    <bool>: false
to be true
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:76
stdout/stderr available in junit.e2e_suite.3.xml
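The "Timed out after 1200.001s. Expected <bool>: false to be true" failure at azure_gpu.go:76 has the shape of a Gomega Eventually poll whose boolean condition never became true within the timeout. The sketch below is hypothetical and is not the actual azure_gpu.go source; it only illustrates that polling pattern, assuming a controller-runtime client and an example GPU test Job name ("cuda-vector-add"):

package e2e

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	batchv1 "k8s.io/api/batch/v1"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// waitForGPUJob is a hypothetical helper, not the CAPZ test code. Eventually
// re-invokes the bool-returning func every interval until it returns true or
// the timeout (20m = 1200s here) expires; on expiry Gomega prints
// "Timed out after ...s. Expected <bool>: false to be true".
func waitForGPUJob(ctx context.Context, c client.Client, namespace, name string) {
	Eventually(func() bool {
		job := &batchv1.Job{}
		if err := c.Get(ctx, types.NamespacedName{Namespace: namespace, Name: name}, job); err != nil {
			return false // treat transient API errors as "not ready yet" and keep polling
		}
		return job.Status.Succeeded > 0 // true once the GPU workload Job completes
	}, 20*time.Minute, 10*time.Second).Should(BeTrue())
}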




Error lines from build-log.txt

... skipping 433 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:288

INFO: "With ipv6 worker node" started at Mon, 22 Nov 2021 06:44:33 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-rutdhi" for hosting the cluster
Nov 22 06:44:33.603: INFO: starting to create namespace for hosting the "capz-e2e-rutdhi" test spec
2021/11/22 06:44:33 failed trying to get namespace (capz-e2e-rutdhi):namespaces "capz-e2e-rutdhi" not found
INFO: Creating namespace capz-e2e-rutdhi
INFO: Creating event watcher for namespace "capz-e2e-rutdhi"
Nov 22 06:44:33.654: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-rutdhi-ipv6
INFO: Creating the workload cluster with name "capz-e2e-rutdhi-ipv6" using the "ipv6" template (Kubernetes v1.22.4, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
STEP: Fetching activity logs took 524.823667ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-rutdhi" namespace
STEP: Deleting all clusters in the capz-e2e-rutdhi namespace
STEP: Deleting cluster capz-e2e-rutdhi-ipv6
INFO: Waiting for the Cluster capz-e2e-rutdhi/capz-e2e-rutdhi-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-rutdhi-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-rutdhi-ipv6-control-plane-6gj8c, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-rutdhi-ipv6-control-plane-kgrzp, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-5wzd2, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-rutdhi-ipv6-control-plane-kgrzp, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-9692w, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-jl66h, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-ddrkf, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-gjths, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-rutdhi-ipv6-control-plane-kgrzp, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-2h2tl, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-5f4xr, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-rutdhi-ipv6-control-plane-6gj8c, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-rutdhi-ipv6-control-plane-kgrzp, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-rutdhi-ipv6-control-plane-j7bw6, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-wdnqq, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-g59rs, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-rutdhi-ipv6-control-plane-j7bw6, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-rutdhi-ipv6-control-plane-j7bw6, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-rutdhi-ipv6-control-plane-6gj8c, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-4ms7n, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-p7wnq, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-rutdhi-ipv6-control-plane-j7bw6, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-rutdhi-ipv6-control-plane-6gj8c, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-rutdhi
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 18m7s on Ginkgo node 2 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:205

INFO: "With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes" started at Mon, 22 Nov 2021 06:44:33 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-8a9cb3" for hosting the cluster
Nov 22 06:44:33.522: INFO: starting to create namespace for hosting the "capz-e2e-8a9cb3" test spec
2021/11/22 06:44:33 failed trying to get namespace (capz-e2e-8a9cb3):namespaces "capz-e2e-8a9cb3" not found
INFO: Creating namespace capz-e2e-8a9cb3
INFO: Creating event watcher for namespace "capz-e2e-8a9cb3"
Nov 22 06:44:33.564: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-8a9cb3-ha
INFO: Creating the workload cluster with name "capz-e2e-8a9cb3-ha" using the "(default)" template (Kubernetes v1.22.4, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 67 lines ...
STEP: waiting for job default/curl-to-elb-jobxhxw23i04ca to be complete
Nov 22 06:55:32.232: INFO: waiting for job default/curl-to-elb-jobxhxw23i04ca to be complete
Nov 22 06:55:42.359: INFO: job default/curl-to-elb-jobxhxw23i04ca is complete, took 10.126168513s
STEP: connecting directly to the external LB service
Nov 22 06:55:42.359: INFO: starting attempts to connect directly to the external LB service
2021/11/22 06:55:42 [DEBUG] GET http://20.69.128.180
2021/11/22 06:56:12 [ERR] GET http://20.69.128.180 request failed: Get "http://20.69.128.180": dial tcp 20.69.128.180:80: i/o timeout
2021/11/22 06:56:12 [DEBUG] GET http://20.69.128.180: retrying in 1s (4 left)
Nov 22 06:56:16.501: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 22 06:56:16.501: INFO: starting to delete external LB service webgjd0y6-elb
Nov 22 06:56:16.608: INFO: starting to delete deployment webgjd0y6
Nov 22 06:56:16.671: INFO: starting to delete job curl-to-elb-jobxhxw23i04ca
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Nov 22 06:56:16.807: INFO: starting to create dev deployment namespace
2021/11/22 06:56:16 failed trying to get namespace (development):namespaces "development" not found
2021/11/22 06:56:16 namespace development does not exist, creating...
STEP: Creating production namespace
Nov 22 06:56:16.936: INFO: starting to create prod deployment namespace
2021/11/22 06:56:16 failed trying to get namespace (production):namespaces "production" not found
2021/11/22 06:56:16 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
Nov 22 06:56:17.066: INFO: starting to create frontend-prod deployments
Nov 22 06:56:17.136: INFO: starting to create frontend-dev deployments
Nov 22 06:56:17.205: INFO: starting to create backend deployments
Nov 22 06:56:17.292: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Nov 22 06:56:41.456: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.81.4 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 22 06:58:51.654: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
STEP: Applying a network policy to deny egress access in development namespace
Nov 22 06:58:51.935: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.81.4 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.81.4 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 22 07:03:13.849: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Nov 22 07:03:14.087: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.184.66 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 22 07:05:24.922: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
Nov 22 07:05:25.156: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.184.67 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.184.66 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 22 07:09:47.066: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Nov 22 07:09:47.333: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.81.4 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 22 07:11:58.084: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
Nov 22 07:11:58.319: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.81.4 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web-windowsqfugny to be available
Nov 22 07:14:11.017: INFO: starting to wait for deployment to become available
Nov 22 07:15:01.395: INFO: Deployment default/web-windowsqfugny is now available, took 50.377623661s
... skipping 51 lines ...
Nov 22 07:18:53.776: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-8a9cb3-ha-md-0-mp9db

Nov 22 07:18:54.130: INFO: INFO: Collecting logs for node 10.1.0.7 in cluster capz-e2e-8a9cb3-ha in namespace capz-e2e-8a9cb3

Nov 22 07:19:28.103: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-8a9cb3-ha-md-win-dksgw

Failed to get logs for machine capz-e2e-8a9cb3-ha-md-win-6f69bf865c-g2mf5, cluster capz-e2e-8a9cb3/capz-e2e-8a9cb3-ha: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov 22 07:19:28.470: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster capz-e2e-8a9cb3-ha in namespace capz-e2e-8a9cb3

Nov 22 07:19:56.952: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-8a9cb3-ha-md-win-swlfj

Failed to get logs for machine capz-e2e-8a9cb3-ha-md-win-6f69bf865c-mvd72, cluster capz-e2e-8a9cb3/capz-e2e-8a9cb3-ha: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster capz-e2e-8a9cb3/capz-e2e-8a9cb3-ha kube-system pod logs
STEP: Fetching kube-system pod logs took 499.061241ms
STEP: Creating log watcher for controller kube-system/kube-proxy-cpgfq, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-hw9fj, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-windows-bdxqc, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-8a9cb3-ha-control-plane-zv8t4, container kube-controller-manager
... skipping 22 lines ...
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-8a9cb3-ha-control-plane-v2hzl, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-node-rkkps, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-lctxd, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-gr67w, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-windows-8nhfk, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-8a9cb3-ha-control-plane-zv8t4, container kube-apiserver
STEP: Got error while iterating over activity logs for resource group capz-e2e-8a9cb3-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000547503s
STEP: Dumping all the Cluster API resources in the "capz-e2e-8a9cb3" namespace
STEP: Deleting all clusters in the capz-e2e-8a9cb3 namespace
STEP: Deleting cluster capz-e2e-8a9cb3-ha
INFO: Waiting for the Cluster capz-e2e-8a9cb3/capz-e2e-8a9cb3-ha to be deleted
STEP: Waiting for cluster capz-e2e-8a9cb3-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-l725q, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-qrkb4, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-cpgfq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-8a9cb3-ha-control-plane-zv8t4, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-jhkzr, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-6gwg4, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-lctxd, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-jwv7s, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-vvk6h, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-8nhfk, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-xjssw, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-8nhfk, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-5sskv, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-8a9cb3-ha-control-plane-zv8t4, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-8a9cb3
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes" ran for 45m10s on Ginkgo node 3 of 3

... skipping 8 lines ...
  with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:334

INFO: "with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes" started at Mon, 22 Nov 2021 07:02:40 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-5q1peu" for hosting the cluster
Nov 22 07:02:40.836: INFO: starting to create namespace for hosting the "capz-e2e-5q1peu" test spec
2021/11/22 07:02:40 failed trying to get namespace (capz-e2e-5q1peu):namespaces "capz-e2e-5q1peu" not found
INFO: Creating namespace capz-e2e-5q1peu
INFO: Creating event watcher for namespace "capz-e2e-5q1peu"
Nov 22 07:02:40.872: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-5q1peu-vmss
INFO: Creating the workload cluster with name "capz-e2e-5q1peu-vmss" using the "machine-pool" template (Kubernetes v1.22.4, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 142 lines ...
Nov 22 07:22:50.451: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-e2e-5q1peu-vmss-mp-0

Nov 22 07:22:50.820: INFO: INFO: Collecting logs for node win-p-win000001 in cluster capz-e2e-5q1peu-vmss in namespace capz-e2e-5q1peu

Nov 22 07:23:04.889: INFO: INFO: Collecting boot logs for VMSS instance 1 of scale set capz-e2e-5q1peu-vmss-mp-0

Failed to get logs for machine pool capz-e2e-5q1peu-vmss-mp-0, cluster capz-e2e-5q1peu/capz-e2e-5q1peu-vmss: [[running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1], [running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1]]
Nov 22 07:23:05.233: INFO: INFO: Collecting logs for node win-p-win000000 in cluster capz-e2e-5q1peu-vmss in namespace capz-e2e-5q1peu

Nov 22 07:23:37.634: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set win-p-win

Nov 22 07:23:37.980: INFO: INFO: Collecting logs for node win-p-win000001 in cluster capz-e2e-5q1peu-vmss in namespace capz-e2e-5q1peu

Nov 22 07:24:01.414: INFO: INFO: Collecting boot logs for VMSS instance 1 of scale set win-p-win

Failed to get logs for machine pool capz-e2e-5q1peu-vmss-mp-win, cluster capz-e2e-5q1peu/capz-e2e-5q1peu-vmss: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster capz-e2e-5q1peu/capz-e2e-5q1peu-vmss kube-system pod logs
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-j9jzb, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/calico-node-windows-v9n6z, container calico-node-felix
STEP: Creating log watcher for controller kube-system/calico-node-tdm99, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-4gjdp, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-windows-mbcbf, container calico-node-felix
... skipping 10 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-mlbrn, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-5q1peu-vmss-control-plane-bzmxm, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-pp9fr, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-xcvp2, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-xjwhf, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-5q1peu-vmss-control-plane-bzmxm, container kube-scheduler
STEP: Got error while iterating over activity logs for resource group capz-e2e-5q1peu-vmss: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000759203s
STEP: Dumping all the Cluster API resources in the "capz-e2e-5q1peu" namespace
STEP: Deleting all clusters in the capz-e2e-5q1peu namespace
STEP: Deleting cluster capz-e2e-5q1peu-vmss
INFO: Waiting for the Cluster capz-e2e-5q1peu/capz-e2e-5q1peu-vmss to be deleted
STEP: Waiting for cluster capz-e2e-5q1peu-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-mlbrn, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-xcvp2, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-v9n6z, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-mbdqb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-4gjdp, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-v9n6z, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-pp9fr, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-mbcbf, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-mbcbf, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-k5v8n, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-5q1peu
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes" ran for 29m17s on Ginkgo node 2 of 3

... skipping 10 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:144

INFO: "Creates a public management cluster in the same vnet" started at Mon, 22 Nov 2021 06:44:33 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-f0q5of" for hosting the cluster
Nov 22 06:44:33.270: INFO: starting to create namespace for hosting the "capz-e2e-f0q5of" test spec
2021/11/22 06:44:33 failed trying to get namespace (capz-e2e-f0q5of):namespaces "capz-e2e-f0q5of" not found
INFO: Creating namespace capz-e2e-f0q5of
INFO: Creating event watcher for namespace "capz-e2e-f0q5of"
Nov 22 06:44:33.304: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-f0q5of-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-bqhpf, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-ffkjv, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-f0q5of-public-custom-vnet-control-plane-hs4fj, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-wx7xs, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-4g49s, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-gcl4l, container calico-node
STEP: Got error while iterating over activity logs for resource group capz-e2e-f0q5of-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000423363s
STEP: Dumping all the Cluster API resources in the "capz-e2e-f0q5of" namespace
STEP: Deleting all clusters in the capz-e2e-f0q5of namespace
STEP: Deleting cluster capz-e2e-f0q5of-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-f0q5of/capz-e2e-f0q5of-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-f0q5of-public-custom-vnet to be deleted
W1122 07:29:52.960953   24448 reflector.go:441] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I1122 07:30:24.520057   24448 trace.go:205] Trace[1627119531]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (22-Nov-2021 07:29:54.519) (total time: 30000ms):
Trace[1627119531]: [30.000731311s] [30.000731311s] END
E1122 07:30:24.520184   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp 20.99.130.16:6443: i/o timeout
I1122 07:30:56.920930   24448 trace.go:205] Trace[1122260242]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (22-Nov-2021 07:30:26.919) (total time: 30001ms):
Trace[1122260242]: [30.001091449s] [30.001091449s] END
E1122 07:30:56.920998   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp 20.99.130.16:6443: i/o timeout
I1122 07:31:32.518026   24448 trace.go:205] Trace[1668033841]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (22-Nov-2021 07:31:02.517) (total time: 30000ms):
Trace[1668033841]: [30.000879186s] [30.000879186s] END
E1122 07:31:32.518128   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp 20.99.130.16:6443: i/o timeout
I1122 07:32:11.878056   24448 trace.go:205] Trace[458162308]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (22-Nov-2021 07:31:41.876) (total time: 30001ms):
Trace[458162308]: [30.001145567s] [30.001145567s] END
E1122 07:32:11.878135   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp 20.99.130.16:6443: i/o timeout
I1122 07:32:57.620203   24448 trace.go:205] Trace[696524093]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (22-Nov-2021 07:32:27.619) (total time: 30000ms):
Trace[696524093]: [30.000838665s] [30.000838665s] END
E1122 07:32:57.620311   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp 20.99.130.16:6443: i/o timeout
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-f0q5of
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Nov 22 07:33:11.381: INFO: deleting an existing virtual network "custom-vnet"
Nov 22 07:33:22.326: INFO: deleting an existing route table "node-routetable"
E1122 07:33:32.453161   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 22 07:33:32.866: INFO: deleting an existing network security group "node-nsg"
Nov 22 07:33:43.448: INFO: deleting an existing network security group "control-plane-nsg"
Nov 22 07:33:53.994: INFO: verifying the existing resource group "capz-e2e-f0q5of-public-custom-vnet" is empty
Nov 22 07:33:56.518: INFO: deleting the existing resource group "capz-e2e-f0q5of-public-custom-vnet"
E1122 07:34:08.461885   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1122 07:34:51.564877   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 50m27s on Ginkgo node 1 of 3


• [SLOW TEST:3026.674 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:490

INFO: "with a single control plane node and 1 node" started at Mon, 22 Nov 2021 07:34:59 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-4azdfx" for hosting the cluster
Nov 22 07:34:59.947: INFO: starting to create namespace for hosting the "capz-e2e-4azdfx" test spec
2021/11/22 07:34:59 failed trying to get namespace (capz-e2e-4azdfx):namespaces "capz-e2e-4azdfx" not found
INFO: Creating namespace capz-e2e-4azdfx
INFO: Creating event watcher for namespace "capz-e2e-4azdfx"
Nov 22 07:34:59.988: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-4azdfx-aks
INFO: Creating the workload cluster with name "capz-e2e-4azdfx-aks" using the "aks-multi-tenancy" template (Kubernetes v1.19.13, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 7 lines ...
machinepool.cluster.x-k8s.io/agentpool1 created
azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1122 07:35:34.803008   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 07:36:25.805914   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 07:37:08.426190   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 07:37:55.375119   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
Nov 22 07:38:31.222: INFO: Waiting for the first control plane machine managed by capz-e2e-4azdfx/capz-e2e-4azdfx-aks to be provisioned
STEP: Waiting for atleast one control plane node to exist
E1122 07:38:39.775247   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be ready
Nov 22 07:38:41.258: INFO: Waiting for the first control plane machine managed by capz-e2e-4azdfx/capz-e2e-4azdfx-aks to be provisioned
STEP: Waiting for all control plane nodes to exist
INFO: Waiting for the machine deployments to be provisioned
INFO: Waiting for the machine pools to be provisioned
STEP: Waiting for the machine pool workload nodes to exist
... skipping 4 lines ...
Nov 22 07:38:47.418: INFO: found host aks-agentpool0-85700672-vmss000000 with pod nsenter-hgs49
Nov 22 07:38:47.418: INFO: found host aks-agentpool1-85700672-vmss000000 with pod nsenter-nmmsl
STEP: checking that time synchronization is healthy on aks-agentpool1-85700672-vmss000000
STEP: checking that time synchronization is healthy on aks-agentpool1-85700672-vmss000000
STEP: checking that time synchronization is healthy on aks-agentpool1-85700672-vmss000000
STEP: checking that time synchronization is healthy on aks-agentpool1-85700672-vmss000000
E1122 07:39:24.279363   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Dumping logs from the "capz-e2e-4azdfx-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-4azdfx/capz-e2e-4azdfx-aks logs
Nov 22 07:39:54.347: INFO: INFO: Collecting logs for node aks-agentpool1-85700672-vmss000000 in cluster capz-e2e-4azdfx-aks in namespace capz-e2e-4azdfx

E1122 07:40:14.637788   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 07:40:59.897360   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 07:41:47.600498   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 22 07:42:05.223: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool0, cluster capz-e2e-4azdfx/capz-e2e-4azdfx-aks: [dialing public load balancer at capz-e2e-4azdfx-aks-7295eba7.hcp.westus2.azmk8s.io: dial tcp 52.149.55.121:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
Nov 22 07:42:05.755: INFO: INFO: Collecting logs for node aks-agentpool1-85700672-vmss000000 in cluster capz-e2e-4azdfx-aks in namespace capz-e2e-4azdfx

E1122 07:42:30.468939   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 07:43:26.727263   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 07:44:12.273416   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 22 07:44:16.299: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool1, cluster capz-e2e-4azdfx/capz-e2e-4azdfx-aks: [dialing public load balancer at capz-e2e-4azdfx-aks-7295eba7.hcp.westus2.azmk8s.io: dial tcp 52.149.55.121:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
STEP: Dumping workload cluster capz-e2e-4azdfx/capz-e2e-4azdfx-aks kube-system pod logs
STEP: Fetching kube-system pod logs took 702.093768ms
STEP: Dumping workload cluster capz-e2e-4azdfx/capz-e2e-4azdfx-aks Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-p2bl6, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-84d976c568-sjlpg, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-shkqf, container kube-proxy
... skipping 8 lines ...
STEP: Fetching activity logs took 480.312405ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-4azdfx" namespace
STEP: Deleting all clusters in the capz-e2e-4azdfx namespace
STEP: Deleting cluster capz-e2e-4azdfx-aks
INFO: Waiting for the Cluster capz-e2e-4azdfx/capz-e2e-4azdfx-aks to be deleted
STEP: Waiting for cluster capz-e2e-4azdfx-aks to be deleted
E1122 07:45:03.786695   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 07:45:56.978899   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 07:46:39.148452   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 07:47:37.539041   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 07:48:14.691153   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-4azdfx
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1122 07:48:57.244432   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "with a single control plane node and 1 node" ran for 14m35s on Ginkgo node 1 of 3


• Failure [874.945 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating an AKS cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:489
    with a single control plane node and 1 node [It]
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:490

    Timed out after 66.863s.
    Expected success, but got an error:
        <*errors.withStack | 0xc001f98dc8>: {
            error: <errors.aggregate | len:4, cap:4>[
                <*errors.errorString | 0xc001edd1e0>{
                    s: "failed to nsenter host aks-agentpool1-85700672-vmss000000, error: 'error dialing backend: dial tcp 10.240.0.35:10250: i/o timeout', stdout:  ''",
                },
                <*errors.errorString | 0xc001edd230>{
                    s: "failed to nsenter host aks-agentpool1-85700672-vmss000000, error: 'error dialing backend: dial tcp 10.240.0.35:10250: i/o timeout', stdout:  ''",
                },
                <*errors.errorString | 0xc001edd280>{
                    s: "failed to nsenter host aks-agentpool1-85700672-vmss000000, error: 'error dialing backend: dial tcp 10.240.0.35:10250: i/o timeout', stdout:  ''",
                },
                <*errors.errorString | 0xc001edd2f0>{
                    s: "failed to nsenter host aks-agentpool1-85700672-vmss000000, error: 'error dialing backend: dial tcp 10.240.0.35:10250: i/o timeout', stdout:  ''",
                },
            ],
            stack: [0x1907b02, 0x1907a94, 0x1907f1e, 0x1d0e698, 0x4e5e87, 0x4e5359, 0x8267aa, 0x824a4f, 0x82513b, 0x824794, 0x1cf4242, 0x1d167cc, 0x8149e3, 0x82256a, 0x1d16e5b, 0x7fd4c3, 0x7fd0dc, 0x7fc407, 0x8033af, 0x802a52, 0x812351, 0x811e67, 0x811657, 0x813d66, 0x821bf8, 0x821936, 0x1cff93a, 0x52a40f, 0x474781],
        }
        failed to nsenter host aks-agentpool1-85700672-vmss000000, error: 'error dialing backend: dial tcp 10.240.0.35:10250: i/o timeout', stdout:  ''

    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_timesync.go:244

    Full Stack Trace
    sigs.k8s.io/cluster-api-provider-azure/test/e2e.AzureDaemonsetTimeSyncSpec(0x2596820, 0xc0001a0018, 0xc000eeccc8)
    	/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_timesync.go:244 +0x1242
... skipping 38 lines ...
  with a 1 control plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:455

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Mon, 22 Nov 2021 07:31:58 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-mzlb1r" for hosting the cluster
Nov 22 07:31:58.245: INFO: starting to create namespace for hosting the "capz-e2e-mzlb1r" test spec
2021/11/22 07:31:58 failed trying to get namespace (capz-e2e-mzlb1r):namespaces "capz-e2e-mzlb1r" not found
INFO: Creating namespace capz-e2e-mzlb1r
INFO: Creating event watcher for namespace "capz-e2e-mzlb1r"
Nov 22 07:31:58.295: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-mzlb1r-oot
INFO: Creating the workload cluster with name "capz-e2e-mzlb1r-oot" using the "external-cloud-provider" template (Kubernetes v1.22.4, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 53 lines ...
STEP: waiting for job default/curl-to-elb-job5nqss7gyzii to be complete
Nov 22 07:51:04.874: INFO: waiting for job default/curl-to-elb-job5nqss7gyzii to be complete
Nov 22 07:51:14.998: INFO: job default/curl-to-elb-job5nqss7gyzii is complete, took 10.124058643s
STEP: connecting directly to the external LB service
Nov 22 07:51:14.998: INFO: starting attempts to connect directly to the external LB service
2021/11/22 07:51:14 [DEBUG] GET http://20.120.200.136
2021/11/22 07:51:44 [ERR] GET http://20.120.200.136 request failed: Get "http://20.120.200.136": dial tcp 20.120.200.136:80: i/o timeout
2021/11/22 07:51:44 [DEBUG] GET http://20.120.200.136: retrying in 1s (4 left)
Nov 22 07:51:47.123: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 22 07:51:47.123: INFO: starting to delete external LB service webzvq6zf-elb
Nov 22 07:51:47.224: INFO: starting to delete deployment webzvq6zf
Nov 22 07:51:47.283: INFO: starting to delete job curl-to-elb-job5nqss7gyzii
... skipping 34 lines ...
STEP: Fetching activity logs took 963.512264ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-mzlb1r" namespace
STEP: Deleting all clusters in the capz-e2e-mzlb1r namespace
STEP: Deleting cluster capz-e2e-mzlb1r-oot
INFO: Waiting for the Cluster capz-e2e-mzlb1r/capz-e2e-mzlb1r-oot to be deleted
STEP: Waiting for cluster capz-e2e-mzlb1r-oot to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vtfwn, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-4gvhc, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-4w9gs, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-mzlb1r
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 28m27s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:413

INFO: "with a single control plane node and 1 node" started at Mon, 22 Nov 2021 07:29:43 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-gegsc6" for hosting the cluster
Nov 22 07:29:43.634: INFO: starting to create namespace for hosting the "capz-e2e-gegsc6" test spec
2021/11/22 07:29:43 failed trying to get namespace (capz-e2e-gegsc6):namespaces "capz-e2e-gegsc6" not found
INFO: Creating namespace capz-e2e-gegsc6
INFO: Creating event watcher for namespace "capz-e2e-gegsc6"
Nov 22 07:29:43.684: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-gegsc6-gpu
INFO: Creating the workload cluster with name "capz-e2e-gegsc6-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.4, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 47 lines ...
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-8pbkx, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-fpz4d, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-gegsc6-gpu-control-plane-f9jfg, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-node-gc4f4, container calico-node
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-gegsc6-gpu-control-plane-f9jfg, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-kvz45, container calico-kube-controllers
STEP: Got error while iterating over activity logs for resource group capz-e2e-gegsc6-gpu: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000965557s
STEP: Dumping all the Cluster API resources in the "capz-e2e-gegsc6" namespace
STEP: Deleting all clusters in the capz-e2e-gegsc6 namespace
STEP: Deleting cluster capz-e2e-gegsc6-gpu
INFO: Waiting for the Cluster capz-e2e-gegsc6/capz-e2e-gegsc6-gpu to be deleted
STEP: Waiting for cluster capz-e2e-gegsc6-gpu to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-9nzlj, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-gc4f4, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-gegsc6
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and 1 node" ran for 34m26s on Ginkgo node 3 of 3

... skipping 57 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:532

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Mon, 22 Nov 2021 07:49:34 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-6i591c" for hosting the cluster
Nov 22 07:49:34.895: INFO: starting to create namespace for hosting the "capz-e2e-6i591c" test spec
2021/11/22 07:49:34 failed trying to get namespace (capz-e2e-6i591c):namespaces "capz-e2e-6i591c" not found
INFO: Creating namespace capz-e2e-6i591c
INFO: Creating event watcher for namespace "capz-e2e-6i591c"
Nov 22 07:49:34.931: INFO: Creating cluster identity secret "cluster-identity-secret"
INFO: Cluster name is capz-e2e-6i591c-win-ha
INFO: Creating the workload cluster with name "capz-e2e-6i591c-win-ha" using the "windows" template (Kubernetes v1.22.4, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 12 lines ...
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created
clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-6i591c-win-ha-flannel created
configmap/cni-capz-e2e-6i591c-win-ha-flannel created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1122 07:49:49.277999   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 07:50:39.094357   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
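These repeated reflector errors appear to come from an event watcher left over from an earlier spec (namespace capz-e2e-f0q5of): its reflector keeps re-listing Events against a workload API server whose DNS name apparently no longer resolves after that cluster was deleted. A sketch of how such a namespace-scoped event watcher can be built with client-go informers (illustrative only, not the suite's implementation); the informer's reflector is what retries and logs the "Failed to watch *v1.Event" lines:

```go
package e2esketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// startEventWatcher runs an Event informer scoped to one namespace and prints
// new events; if the target API server becomes unreachable, the underlying
// reflector keeps re-listing and logging errors until it is stopped.
func startEventWatcher(ctx context.Context, cs kubernetes.Interface, ns string) {
	factory := informers.NewSharedInformerFactoryWithOptions(cs, 0, informers.WithNamespace(ns))
	factory.Core().V1().Events().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			if ev, ok := obj.(*corev1.Event); ok {
				fmt.Printf("[%s] %s: %s\n", ns, ev.InvolvedObject.Name, ev.Message)
			}
		},
	})
	factory.Start(ctx.Done()) // stops when the context is cancelled
}
```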
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by capz-e2e-6i591c/capz-e2e-6i591c-win-ha-control-plane to be provisioned
STEP: Waiting for one control plane node to exist
E1122 07:51:33.034183   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 07:52:05.833611   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be ready
INFO: Waiting for the remaining control plane machines managed by capz-e2e-6i591c/capz-e2e-6i591c-win-ha-control-plane to be provisioned
STEP: Waiting for all control plane nodes to exist
E1122 07:53:05.213111   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 07:53:40.816051   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 07:54:36.525787   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 07:55:17.141136   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane capz-e2e-6i591c/capz-e2e-6i591c-win-ha-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
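Waits like this one generally poll the Cluster object on the management cluster until its control plane reports ready. A sketch of that pattern with a controller-runtime client, under generous timeouts; this is an assumed shape, not the framework's exact code:

```go
package e2esketch

import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// waitForControlPlaneReady polls the Cluster resource until its status
// reports the control plane as ready (implying the nodes behind it are up).
func waitForControlPlaneReady(ctx context.Context, c client.Client, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(15*time.Second, timeout, func() (bool, error) {
		cluster := &clusterv1.Cluster{}
		if err := c.Get(ctx, client.ObjectKey{Namespace: ns, Name: name}, cluster); err != nil {
			return false, nil // tolerate transient API errors and keep polling
		}
		return cluster.Status.ControlPlaneReady, nil
	})
}
```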
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Waiting for the workload nodes to exist
INFO: Waiting for the machine pools to be provisioned
... skipping 3 lines ...
Nov 22 07:55:46.991: INFO: starting to wait for deployment to become available
Nov 22 07:56:07.181: INFO: Deployment default/web9993rr is now available, took 20.190073013s
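The availability wait above (20.19s for web9993rr) boils down to polling the Deployment until it reports an Available condition. A rough sketch of that polling loop with client-go; the interval and helper name are illustrative:

```go
package e2esketch

import (
	"context"
	"time"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDeploymentAvailable polls until the Deployment's Available condition
// is True, which is what the "is now available, took ..." lines measure.
func waitForDeploymentAvailable(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(5*time.Second, timeout, func() (bool, error) {
		d, err := cs.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // retry on transient errors
		}
		for _, cond := range d.Status.Conditions {
			if cond.Type == appsv1.DeploymentAvailable && cond.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
}
```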
STEP: creating an internal Load Balancer service
Nov 22 07:56:07.181: INFO: starting to create an internal Load Balancer service
STEP: waiting for service default/web9993rr-ilb to be available
Nov 22 07:56:07.278: INFO: waiting for service default/web9993rr-ilb to be available
E1122 07:56:11.231142   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 07:57:03.598905   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 22 07:57:27.824: INFO: service default/web9993rr-ilb is available, took 1m20.545202806s
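An "internal Load Balancer service" on Azure is a Service of type LoadBalancer carrying the service.beta.kubernetes.io/azure-load-balancer-internal annotation, which keeps the frontend IP on the virtual network instead of a public address. A sketch of building such a Service with client-go (names, selector, and ports are illustrative, not the suite's code):

```go
package e2esketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
)

// createInternalLB exposes a web deployment through an Azure internal load
// balancer; the annotation is what makes the Service internal rather than public.
func createInternalLB(ctx context.Context, cs kubernetes.Interface, ns, app string) (*corev1.Service, error) {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name:      app + "-ilb",
			Namespace: ns,
			Annotations: map[string]string{
				"service.beta.kubernetes.io/azure-load-balancer-internal": "true",
			},
		},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeLoadBalancer,
			Selector: map[string]string{"app": app},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(80),
			}},
		},
	}
	return cs.CoreV1().Services(ns).Create(ctx, svc, metav1.CreateOptions{})
}
```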
STEP: connecting to the internal LB service from a curl pod
Nov 22 07:57:27.881: INFO: starting to create a curl to ilb job
STEP: waiting for job default/curl-to-ilb-jobp4a8s to be complete
Nov 22 07:57:27.962: INFO: waiting for job default/curl-to-ilb-jobp4a8s to be complete
Nov 22 07:57:38.079: INFO: job default/curl-to-ilb-jobp4a8s is complete, took 10.117571218s
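The connectivity check is a one-shot Job whose pod curls the load balancer address from inside the cluster; the Job completing is what the line above reports. A hedged sketch of such a Job (image, flags, and helper name are assumptions, not taken from the suite):

```go
package e2esketch

import (
	"context"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createCurlJob runs a single pod that curls the load balancer IP; the Job
// succeeding proves the service is reachable from inside the cluster.
func createCurlJob(ctx context.Context, cs kubernetes.Interface, ns, name, lbIP string) (*batchv1.Job, error) {
	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: ns},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyOnFailure, // retry the curl in place on failure
					Containers: []corev1.Container{{
						Name:    "curl",
						Image:   "curlimages/curl", // illustrative image choice
						Command: []string{"curl", "--retry", "5", lbIP},
					}},
				},
			},
		},
	}
	return cs.BatchV1().Jobs(ns).Create(ctx, job, metav1.CreateOptions{})
}
```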
STEP: deleting the ilb test resources
Nov 22 07:57:38.079: INFO: deleting the ilb service: web9993rr-ilb
Nov 22 07:57:38.176: INFO: deleting the ilb job: curl-to-ilb-jobp4a8s
STEP: creating an external Load Balancer service
Nov 22 07:57:38.238: INFO: starting to create an external Load Balancer service
STEP: waiting for service default/web9993rr-elb to be available
Nov 22 07:57:38.325: INFO: waiting for service default/web9993rr-elb to be available
E1122 07:57:41.904274   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 07:58:22.678929   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 22 07:58:38.732: INFO: service default/web9993rr-elb is available, took 1m0.40698931s
STEP: connecting to the external LB service from a curl pod
Nov 22 07:58:38.788: INFO: starting to create curl-to-elb job
STEP: waiting for job default/curl-to-elb-jobzn4k6fcfcwq to be complete
Nov 22 07:58:38.853: INFO: waiting for job default/curl-to-elb-jobzn4k6fcfcwq to be complete
Nov 22 07:58:48.980: INFO: job default/curl-to-elb-jobzn4k6fcfcwq is complete, took 10.126287312s
... skipping 6 lines ...
Nov 22 07:58:49.197: INFO: starting to delete deployment web9993rr
Nov 22 07:58:49.259: INFO: starting to delete job curl-to-elb-jobzn4k6fcfcwq
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web-windows4wynn6 to be available
Nov 22 07:58:49.515: INFO: starting to wait for deployment to become available
E1122 07:58:55.316058   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 07:59:37.121849   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 22 08:00:00.018: INFO: Deployment default/web-windows4wynn6 is now available, took 1m10.502995883s
STEP: creating an internal Load Balancer service
Nov 22 08:00:00.018: INFO: starting to create an internal Load Balancer service
STEP: waiting for service default/web-windows4wynn6-ilb to be available
Nov 22 08:00:00.104: INFO: waiting for service default/web-windows4wynn6-ilb to be available
E1122 08:00:37.094061   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 08:01:09.486432   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 22 08:01:50.810: INFO: service default/web-windows4wynn6-ilb is available, took 1m50.705825329s
STEP: connecting to the internal LB service from a curl pod
Nov 22 08:01:50.867: INFO: starting to create a curl to ilb job
STEP: waiting for job default/curl-to-ilb-jobi5u9p to be complete
Nov 22 08:01:50.930: INFO: waiting for job default/curl-to-ilb-jobi5u9p to be complete
Nov 22 08:02:01.047: INFO: job default/curl-to-ilb-jobi5u9p is complete, took 10.116742629s
STEP: deleting the ilb test resources
Nov 22 08:02:01.047: INFO: deleting the ilb service: web-windows4wynn6-ilb
Nov 22 08:02:01.159: INFO: deleting the ilb job: curl-to-ilb-jobi5u9p
STEP: creating an external Load Balancer service
Nov 22 08:02:01.227: INFO: starting to create an external Load Balancer service
STEP: waiting for service default/web-windows4wynn6-elb to be available
Nov 22 08:02:01.315: INFO: waiting for service default/web-windows4wynn6-elb to be available
E1122 08:02:08.458433   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 08:02:49.731425   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 08:03:49.570700   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 08:04:43.513877   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 08:05:19.183227   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 08:06:06.908158   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 22 08:06:22.888: INFO: service default/web-windows4wynn6-elb is available, took 4m21.573350068s
STEP: connecting to the external LB service from a curl pod
Nov 22 08:06:22.945: INFO: starting to create curl-to-elb job
STEP: waiting for job default/curl-to-elb-job8qhqyvj5vgw to be complete
Nov 22 08:06:23.017: INFO: waiting for job default/curl-to-elb-job8qhqyvj5vgw to be complete
Nov 22 08:06:33.133: INFO: job default/curl-to-elb-job8qhqyvj5vgw is complete, took 10.116012146s
... skipping 6 lines ...
Nov 22 08:06:33.347: INFO: starting to delete deployment web-windows4wynn6
Nov 22 08:06:33.409: INFO: starting to delete job curl-to-elb-job8qhqyvj5vgw
STEP: Dumping logs from the "capz-e2e-6i591c-win-ha" workload cluster
STEP: Dumping workload cluster capz-e2e-6i591c/capz-e2e-6i591c-win-ha logs
Nov 22 08:06:33.534: INFO: INFO: Collecting logs for node capz-e2e-6i591c-win-ha-control-plane-bcxz9 in cluster capz-e2e-6i591c-win-ha in namespace capz-e2e-6i591c

E1122 08:06:37.662569   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 22 08:06:45.890: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-6i591c-win-ha-control-plane-bcxz9

Nov 22 08:06:46.771: INFO: INFO: Collecting logs for node capz-e2e-6i591c-win-ha-control-plane-9gdpx in cluster capz-e2e-6i591c-win-ha in namespace capz-e2e-6i591c

Nov 22 08:06:57.426: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-6i591c-win-ha-control-plane-9gdpx

... skipping 4 lines ...
Nov 22 08:07:09.527: INFO: INFO: Collecting logs for node capz-e2e-6i591c-win-ha-md-0-c5qc8 in cluster capz-e2e-6i591c-win-ha in namespace capz-e2e-6i591c

Nov 22 08:07:19.799: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-6i591c-win-ha-md-0-c5qc8

Nov 22 08:07:20.210: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster capz-e2e-6i591c-win-ha in namespace capz-e2e-6i591c

E1122 08:07:36.808218   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 22 08:07:44.310: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-6i591c-win-ha-md-win-2d9zr

Nov 22 08:07:44.672: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster capz-e2e-6i591c-win-ha in namespace capz-e2e-6i591c

Nov 22 08:08:21.080: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-6i591c-win-ha-md-win-kv9d9

... skipping 23 lines ...
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-6i591c-win-ha-control-plane-22phz, container etcd
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-6i591c-win-ha-control-plane-22phz, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-6i591c-win-ha-control-plane-9gdpx, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-6i591c-win-ha-control-plane-bcxz9, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-windows-amd64-hrn9l, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-6i591c-win-ha-control-plane-22phz, container kube-scheduler
E1122 08:08:29.492186   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Got error while iterating over activity logs for resource group capz-e2e-6i591c-win-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001239852s
STEP: Dumping all the Cluster API resources in the "capz-e2e-6i591c" namespace
STEP: Deleting all clusters in the capz-e2e-6i591c namespace
STEP: Deleting cluster capz-e2e-6i591c-win-ha
INFO: Waiting for the Cluster capz-e2e-6i591c/capz-e2e-6i591c-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-6i591c-win-ha to be deleted
E1122 08:09:03.260686   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 08:09:46.084698   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 08:10:34.283335   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Got error while streaming logs for pod kube-system/kube-proxy-7fpzs, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-vd29x, container kube-flannel: http2: client connection lost
E1122 08:11:06.255304   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 08:11:59.602342   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 08:12:55.493594   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 08:13:52.307802   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 08:14:44.428928   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 08:15:38.640553   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 08:16:26.107928   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 08:17:22.040876   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-6i591c
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1122 08:18:04.140224   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 08:18:40.513614   24448 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-f0q5of/events?resourceVersion=9802": dial tcp: lookup capz-e2e-f0q5of-public-custom-vnet-5f5bcadd.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 29m40s on Ginkgo node 1 of 3


• [SLOW TEST:1780.161 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating a Windows Enabled cluster with dockershim
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:530
    With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:532
------------------------------
{"component":"entrypoint","file":"prow/entrypoint/run.go:165","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","severity":"error","time":"2021-11-22T08:36:50Z"}
++ early_exit_handler
++ '[' -n 162 ']'
++ kill -TERM 162
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 12 lines ...
Cleaning up after docker
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.
All sensitive variables are redacted
{"component":"entrypoint","file":"prow/entrypoint/run.go:255","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15m0s grace period","severity":"error","time":"2021-11-22T08:51:50Z"}
{"component":"entrypoint","error":"os: process already finished","file":"prow/entrypoint/run.go:257","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2021-11-22T08:51:50Z"}