Result: FAILURE
Tests: 0 failed / 3 succeeded
Started: 2021-11-30 06:39
Elapsed: 2h15m
Revision: main

No Test Failures!


Passed tests: 3
Skipped tests: 1

Error lines from build-log.txt

... skipping 433 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:288

INFO: "With ipv6 worker node" started at Tue, 30 Nov 2021 06:47:10 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-emoz9j" for hosting the cluster
Nov 30 06:47:10.249: INFO: starting to create namespace for hosting the "capz-e2e-emoz9j" test spec
2021/11/30 06:47:10 failed trying to get namespace (capz-e2e-emoz9j):namespaces "capz-e2e-emoz9j" not found
INFO: Creating namespace capz-e2e-emoz9j
INFO: Creating event watcher for namespace "capz-e2e-emoz9j"
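
The "failed trying to get namespace ... not found" line followed by "Creating namespace" reflects a get-or-create pattern against the management cluster. A minimal sketch of that pattern with client-go follows; the helper name and error handling are assumptions, not the actual CAPZ code.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureNamespace is a hypothetical helper: get the namespace, and create it
// only when the get fails with IsNotFound, mirroring the log lines above.
func ensureNamespace(ctx context.Context, cs kubernetes.Interface, name string) (*corev1.Namespace, error) {
	ns, err := cs.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
	if err == nil {
		return ns, nil // already exists, reuse it
	}
	if !apierrors.IsNotFound(err) {
		return nil, err // a real error, not just "not found"
	}
	return cs.CoreV1().Namespaces().Create(ctx, &corev1.Namespace{
		ObjectMeta: metav1.ObjectMeta{Name: name},
	}, metav1.CreateOptions{})
}
```
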
Nov 30 06:47:10.290: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-emoz9j-ipv6
INFO: Creating the workload cluster with name "capz-e2e-emoz9j-ipv6" using the "ipv6" template (Kubernetes v1.22.4, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
STEP: Fetching activity logs took 692.645208ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-emoz9j" namespace
STEP: Deleting all clusters in the capz-e2e-emoz9j namespace
STEP: Deleting cluster capz-e2e-emoz9j-ipv6
INFO: Waiting for the Cluster capz-e2e-emoz9j/capz-e2e-emoz9j-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-emoz9j-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-pccps, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-b5dqr, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-d6g54, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-5fxv9, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-emoz9j-ipv6-control-plane-5q8fb, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-emoz9j-ipv6-control-plane-2tc6h, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-emoz9j-ipv6-control-plane-5q8fb, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-p5zwc, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-emoz9j-ipv6-control-plane-2tc6h, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-7k84k, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-p2x6l, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-emoz9j-ipv6-control-plane-5q8fb, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-emoz9j-ipv6-control-plane-2tc6h, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-95z7n, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-emoz9j-ipv6-control-plane-4hsr5, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-emoz9j-ipv6-control-plane-2tc6h, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-rmf8z, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-emoz9j-ipv6-control-plane-4hsr5, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-emoz9j-ipv6-control-plane-4hsr5, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-drzq8, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-emoz9j-ipv6-control-plane-4hsr5, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-emoz9j-ipv6-control-plane-5q8fb, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-ctgdh, container kube-proxy: http2: client connection lost
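
The long run of "http2: client connection lost" errors above is expected during teardown: the log watchers hold follow-mode streams open against the workload cluster's API server, and those streams break when the cluster's machines are deleted underneath them. A minimal sketch of such a follow-mode stream with client-go, with illustrative names:

```go
package main

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// streamPodLogs follows one container's logs until the stream breaks.
func streamPodLogs(ctx context.Context, cs kubernetes.Interface, ns, pod, container string) error {
	req := cs.CoreV1().Pods(ns).GetLogs(pod, &corev1.PodLogOptions{
		Container: container,
		Follow:    true, // keep the stream open for the life of the pod
	})
	stream, err := req.Stream(ctx)
	if err != nil {
		return err
	}
	defer stream.Close()
	// io.Copy returns an error such as "http2: client connection lost"
	// once the API server behind the stream is torn down.
	_, err = io.Copy(os.Stdout, stream)
	return err
}
```
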
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-emoz9j
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 18m4s on Ginkgo node 2 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:205

INFO: "With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes" started at Tue, 30 Nov 2021 06:47:09 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-scn1r6" for hosting the cluster
Nov 30 06:47:09.711: INFO: starting to create namespace for hosting the "capz-e2e-scn1r6" test spec
2021/11/30 06:47:09 failed trying to get namespace (capz-e2e-scn1r6):namespaces "capz-e2e-scn1r6" not found
INFO: Creating namespace capz-e2e-scn1r6
INFO: Creating event watcher for namespace "capz-e2e-scn1r6"
Nov 30 06:47:09.743: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-scn1r6-ha
INFO: Creating the workload cluster with name "capz-e2e-scn1r6-ha" using the "(default)" template (Kubernetes v1.22.4, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 75 lines ...
Nov 30 06:58:44.746: INFO: starting to delete external LB service webz99ucl-elb
Nov 30 06:58:44.827: INFO: starting to delete deployment webz99ucl
Nov 30 06:58:44.871: INFO: starting to delete job curl-to-elb-jobtr8lp0bvyp1
STEP: creating a Kubernetes client to the workload cluster
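
"Creating a Kubernetes client to the workload cluster" typically means loading the kubeconfig that Cluster API writes for the workload cluster and building a clientset from it. A sketch under that assumption; the path handling is illustrative:

```go
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// workloadClient builds a clientset from a workload cluster's kubeconfig file.
func workloadClient(kubeconfigPath string) (kubernetes.Interface, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		return nil, err
	}
	return kubernetes.NewForConfig(cfg)
}
```
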
STEP: Creating development namespace
Nov 30 06:58:44.948: INFO: starting to create dev deployment namespace
2021/11/30 06:58:44 failed trying to get namespace (development):namespaces "development" not found
2021/11/30 06:58:44 namespace development does not exist, creating...
STEP: Creating production namespace
Nov 30 06:58:45.028: INFO: starting to create prod deployment namespace
2021/11/30 06:58:45 failed trying to get namespace (production):namespaces "production" not found
2021/11/30 06:58:45 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
Nov 30 06:58:45.103: INFO: starting to create frontend-prod deployments
Nov 30 06:58:45.143: INFO: starting to create frontend-dev deployments
Nov 30 06:58:45.187: INFO: starting to create backend deployments
Nov 30 06:58:45.232: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Nov 30 06:59:08.172: INFO: starting to apply a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.68.66 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 30 07:01:18.188: INFO: starting to clean up network policy development/backend-deny-ingress after ourselves
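
The development/backend-deny-ingress policy applied above selects the app: webapp, role: backend pods and lists Ingress in policyTypes with no rules, which denies all inbound traffic; the curl timeout from the network-policy pod confirms the enforcement. The exact manifest is not shown in this log, so the following Go reconstruction is an assumption based on the policy name and labels:

```go
package main

import (
	"context"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// applyDenyIngress creates a policy that denies all ingress to the
// app: webapp, role: backend pods in the development namespace.
func applyDenyIngress(ctx context.Context, cs kubernetes.Interface) error {
	policy := &networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "backend-deny-ingress", Namespace: "development"},
		Spec: networkingv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "webapp", "role": "backend"},
			},
			// Ingress listed with no ingress rules: every inbound
			// connection to the selected pods is denied.
			PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
		},
	}
	_, err := cs.NetworkingV1().NetworkPolicies("development").Create(ctx, policy, metav1.CreateOptions{})
	return err
}
```
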
STEP: Applying a network policy to deny egress access in development namespace
Nov 30 07:01:18.352: INFO: starting to apply a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.68.66 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.68.66 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 30 07:05:40.330: INFO: starting to clean up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Nov 30 07:05:40.531: INFO: starting to apply a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.84.5 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 30 07:07:51.987: INFO: starting to clean up network policy development/backend-allow-egress-pod-label after ourselves
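
The backend-allow-egress-pod-label policy above is the complementary case: egress from the backend-labeled pods is allowed only toward frontend-labeled pods, in any namespace. An assumed reconstruction (an empty namespaceSelector matches every namespace):

```go
package main

import (
	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// allowEgressPodLabel models development/backend-allow-egress-pod-label:
// backend pods may only reach app: webapp, role: frontend pods.
func allowEgressPodLabel() *networkingv1.NetworkPolicy {
	return &networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "backend-allow-egress-pod-label", Namespace: "development"},
		Spec: networkingv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "webapp", "role": "backend"},
			},
			PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeEgress},
			Egress: []networkingv1.NetworkPolicyEgressRule{{
				To: []networkingv1.NetworkPolicyPeer{{
					NamespaceSelector: &metav1.LabelSelector{}, // any namespace
					PodSelector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"app": "webapp", "role": "frontend"},
					},
				}},
			}},
		},
	}
}
```
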
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
Nov 30 07:07:52.189: INFO: starting to apply a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.84.3 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.84.5 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 30 07:12:14.133: INFO: starting to clean up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Nov 30 07:12:14.286: INFO: starting to apply a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.68.66 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 30 07:14:24.620: INFO: starting to clean up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
Nov 30 07:14:25.030: INFO: starting to apply a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.68.66 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web-windows0y8d08 to be available
Nov 30 07:16:36.274: INFO: starting to wait for deployment to become available
Nov 30 07:17:36.502: INFO: Deployment default/web-windows0y8d08 is now available, took 1m0.228380987s
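
Waits like the one-minute deployment wait above are typically implemented as a poll on the Deployment's status conditions. A hedged sketch; the interval and timeout are assumptions:

```go
package main

import (
	"context"
	"time"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDeploymentAvailable polls until the Available condition is True.
func waitForDeploymentAvailable(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(10*time.Second, 5*time.Minute, func() (bool, error) {
		d, err := cs.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range d.Status.Conditions {
			if c.Type == appsv1.DeploymentAvailable && c.Status == corev1.ConditionTrue {
				return true, nil // minimum replicas are up and ready
			}
		}
		return false, nil // keep polling
	})
}
```
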
... skipping 51 lines ...
Nov 30 07:22:52.162: INFO: Collecting boot logs for AzureMachine capz-e2e-scn1r6-ha-md-0-29dhl

Nov 30 07:22:52.455: INFO: Collecting logs for node 10.1.0.7 in cluster capz-e2e-scn1r6-ha in namespace capz-e2e-scn1r6

Nov 30 07:23:20.783: INFO: Collecting boot logs for AzureMachine capz-e2e-scn1r6-ha-md-win-78vqw

Failed to get logs for machine capz-e2e-scn1r6-ha-md-win-5ddbbcd464-gz57n, cluster capz-e2e-scn1r6/capz-e2e-scn1r6-ha: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov 30 07:23:21.056: INFO: Collecting logs for node 10.1.0.4 in cluster capz-e2e-scn1r6-ha in namespace capz-e2e-scn1r6

Nov 30 07:23:53.458: INFO: Collecting boot logs for AzureMachine capz-e2e-scn1r6-ha-md-win-dc5bf

Failed to get logs for machine capz-e2e-scn1r6-ha-md-win-5ddbbcd464-l72bb, cluster capz-e2e-scn1r6/capz-e2e-scn1r6-ha: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
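
The "running command ... Process exited with status 1" failures above indicate the collector runs diagnostic commands on the node over SSH and the remote command itself fails (here, Docker-based commands on a node that doesn't serve them). "Process exited with status N" is exactly how golang.org/x/crypto/ssh reports a non-zero remote exit code, so a sketch under that assumption:

```go
package main

import (
	"golang.org/x/crypto/ssh"
)

// runRemote executes one diagnostic command on an established SSH connection.
// A non-zero remote exit code comes back as *ssh.ExitError, whose message is
// "Process exited with status N", matching the log above.
func runRemote(client *ssh.Client, cmd string) ([]byte, error) {
	session, err := client.NewSession()
	if err != nil {
		return nil, err
	}
	defer session.Close()
	return session.CombinedOutput(cmd)
}
```
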
STEP: Dumping workload cluster capz-e2e-scn1r6/capz-e2e-scn1r6-ha kube-system pod logs
STEP: Fetching kube-system pod logs took 315.551748ms
STEP: Dumping workload cluster capz-e2e-scn1r6/capz-e2e-scn1r6-ha Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-8s52g, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/calico-node-windows-q7wk7, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-proxy-qgmrf, container kube-proxy
... skipping 22 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-r7zxn, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-scn1r6-ha-control-plane-kmwt6, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-scn1r6-ha-control-plane-7fwtn, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-4xxg6, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-scn1r6-ha-control-plane-v2h9l, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-dmvxz, container kube-proxy
STEP: Got error while iterating over activity logs for resource group capz-e2e-scn1r6-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.0007253s
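
The activity-log fetch fails after almost exactly 30 seconds with "context deadline exceeded", which suggests the pagination loop runs under a roughly 30-second context timeout and the deadline expires mid-pagination. A hedged sketch with the track-1 Azure SDK client named in the error; the package version in the import path, the filter, and client construction are assumptions:

```go
package main

import (
	"context"
	"time"

	"github.com/Azure/azure-sdk-for-go/services/preview/monitor/mgmt/2019-06-01/insights"
)

// fetchActivityLogs pages through activity log events under a 30s budget.
func fetchActivityLogs(client insights.ActivityLogsClient, filter string) error {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	page, err := client.List(ctx, filter, "")
	if err != nil {
		return err
	}
	for page.NotDone() {
		_ = page.Values() // each insights.EventData entry would be handled here
		// "context deadline exceeded" surfaces from the next-page call once
		// the 30s budget is spent, as in the listNextResults failure above.
		if err := page.NextWithContext(ctx); err != nil {
			return err
		}
	}
	return nil
}
```
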
STEP: Dumping all the Cluster API resources in the "capz-e2e-scn1r6" namespace
STEP: Deleting all clusters in the capz-e2e-scn1r6 namespace
STEP: Deleting cluster capz-e2e-scn1r6-ha
INFO: Waiting for the Cluster capz-e2e-scn1r6/capz-e2e-scn1r6-ha to be deleted
STEP: Waiting for cluster capz-e2e-scn1r6-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-8s52g, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-scn1r6-ha-control-plane-7fwtn, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-k6bbw, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-dmvxz, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-scn1r6-ha-control-plane-7fwtn, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-4xxg6, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-2759g, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-scn1r6-ha-control-plane-7fwtn, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-zh4xs, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-85kph, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-scn1r6-ha-control-plane-kmwt6, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-scn1r6-ha-control-plane-7fwtn, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-nst2w, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-scn1r6-ha-control-plane-kmwt6, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-87znb, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-jgxfx, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-scn1r6-ha-control-plane-kmwt6, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-jgxfx, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-r7zxn, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-scn1r6-ha-control-plane-kmwt6, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-scn1r6
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes" ran for 47m27s on Ginkgo node 1 of 3

... skipping 8 lines ...
  with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:334

INFO: "with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes" started at Tue, 30 Nov 2021 07:05:14 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-m0dfgg" for hosting the cluster
Nov 30 07:05:14.087: INFO: starting to create namespace for hosting the "capz-e2e-m0dfgg" test spec
2021/11/30 07:05:14 failed trying to get namespace (capz-e2e-m0dfgg):namespaces "capz-e2e-m0dfgg" not found
INFO: Creating namespace capz-e2e-m0dfgg
INFO: Creating event watcher for namespace "capz-e2e-m0dfgg"
Nov 30 07:05:14.120: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-m0dfgg-vmss
INFO: Creating the workload cluster with name "capz-e2e-m0dfgg-vmss" using the "machine-pool" template (Kubernetes v1.22.4, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 142 lines ...
Nov 30 07:26:55.378: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-e2e-m0dfgg-vmss-mp-0

Nov 30 07:26:55.718: INFO: Collecting logs for node win-p-win000001 in cluster capz-e2e-m0dfgg-vmss in namespace capz-e2e-m0dfgg

Nov 30 07:27:11.714: INFO: Collecting boot logs for VMSS instance 1 of scale set capz-e2e-m0dfgg-vmss-mp-0

Failed to get logs for machine pool capz-e2e-m0dfgg-vmss-mp-0, cluster capz-e2e-m0dfgg/capz-e2e-m0dfgg-vmss: [running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1]
Nov 30 07:27:11.985: INFO: Collecting logs for node win-p-win000000 in cluster capz-e2e-m0dfgg-vmss in namespace capz-e2e-m0dfgg

Nov 30 07:27:42.436: INFO: Collecting boot logs for VMSS instance 0 of scale set win-p-win

Nov 30 07:27:42.673: INFO: Collecting logs for node win-p-win000001 in cluster capz-e2e-m0dfgg-vmss in namespace capz-e2e-m0dfgg

Nov 30 07:28:12.651: INFO: Collecting boot logs for VMSS instance 1 of scale set win-p-win

Failed to get logs for machine pool capz-e2e-m0dfgg-vmss-mp-win, cluster capz-e2e-m0dfgg/capz-e2e-m0dfgg-vmss: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster capz-e2e-m0dfgg/capz-e2e-m0dfgg-vmss kube-system pod logs
STEP: Fetching kube-system pod logs took 323.076306ms
STEP: Dumping workload cluster capz-e2e-m0dfgg/capz-e2e-m0dfgg-vmss Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-jlg92, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-69nrj, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-windows-m9dbg, container calico-node-startup
... skipping 10 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-9pc8s, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-278pn, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-mtvdq, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-cz2lm, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-m0dfgg-vmss-control-plane-s7fbw, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-j2jgn, container kube-proxy
STEP: Got error while iterating over activity logs for resource group capz-e2e-m0dfgg-vmss: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.00118142s
STEP: Dumping all the Cluster API resources in the "capz-e2e-m0dfgg" namespace
STEP: Deleting all clusters in the capz-e2e-m0dfgg namespace
STEP: Deleting cluster capz-e2e-m0dfgg-vmss
INFO: Waiting for the Cluster capz-e2e-m0dfgg/capz-e2e-m0dfgg-vmss to be deleted
STEP: Waiting for cluster capz-e2e-m0dfgg-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-mncnb, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-mtvdq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-cz2lm, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-nlk5x, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-9pc8s, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-m9dbg, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-th8sw, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-m9dbg, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-th8sw, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-j2jgn, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-m0dfgg
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes" ran for 31m0s on Ginkgo node 2 of 3

... skipping 10 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:144

INFO: "Creates a public management cluster in the same vnet" started at Tue, 30 Nov 2021 06:47:09 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-set0j9" for hosting the cluster
Nov 30 06:47:09.546: INFO: starting to create namespace for hosting the "capz-e2e-set0j9" test spec
2021/11/30 06:47:09 failed trying to get namespace (capz-e2e-set0j9):namespaces "capz-e2e-set0j9" not found
INFO: Creating namespace capz-e2e-set0j9
INFO: Creating event watcher for namespace "capz-e2e-set0j9"
Nov 30 06:47:09.594: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-set0j9-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-hzclb, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-mdjtv, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-rgrjh, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-x6h5q, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-9c4jw, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-b24t9, container kube-proxy
STEP: Got error while iterating over activity logs for resource group capz-e2e-set0j9-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001061836s
STEP: Dumping all the Cluster API resources in the "capz-e2e-set0j9" namespace
STEP: Deleting all clusters in the capz-e2e-set0j9 namespace
STEP: Deleting cluster capz-e2e-set0j9-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-set0j9/capz-e2e-set0j9-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-set0j9-public-custom-vnet to be deleted
W1130 07:32:05.387712   24480 reflector.go:441] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I1130 07:32:36.482207   24480 trace.go:205] Trace[1914593388]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (30-Nov-2021 07:32:06.480) (total time: 30001ms):
Trace[1914593388]: [30.001398274s] [30.001398274s] END
E1130 07:32:36.482267   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp 52.151.236.245:6443: i/o timeout
I1130 07:33:09.593824   24480 trace.go:205] Trace[1914320972]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (30-Nov-2021 07:32:39.592) (total time: 30001ms):
Trace[1914320972]: [30.001262116s] [30.001262116s] END
E1130 07:33:09.593886   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp 52.151.236.245:6443: i/o timeout
I1130 07:33:42.822995   24480 trace.go:205] Trace[2048804598]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (30-Nov-2021 07:33:12.821) (total time: 30001ms):
Trace[2048804598]: [30.001375235s] [30.001375235s] END
E1130 07:33:42.823052   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp 52.151.236.245:6443: i/o timeout
I1130 07:34:19.597021   24480 trace.go:205] Trace[791626426]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (30-Nov-2021 07:33:49.595) (total time: 30001ms):
Trace[791626426]: [30.001025867s] [30.001025867s] END
E1130 07:34:19.597074   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp 52.151.236.245:6443: i/o timeout
I1130 07:35:08.239918   24480 trace.go:205] Trace[107539565]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (30-Nov-2021 07:34:38.238) (total time: 30001ms):
Trace[107539565]: [30.001009039s] [30.001009039s] END
E1130 07:35:08.239973   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp 52.151.236.245:6443: i/o timeout
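
The W/E lines above come from a client-go reflector backing the namespace event watcher created at test start: once the workload cluster's API endpoint goes away (first an i/o timeout, later a DNS "no such host"), the reflector's ListAndWatch keeps failing and retrying with backoff until the watcher is stopped. A minimal sketch of such an event watcher, with illustrative names:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// watchEvents prints v1.Event objects from one namespace until stop is closed.
func watchEvents(cs kubernetes.Interface, namespace string, stop <-chan struct{}) {
	lw := cache.NewListWatchFromClient(
		cs.CoreV1().RESTClient(), "events", namespace, fields.Everything())
	_, controller := cache.NewInformer(lw, &corev1.Event{}, 0, cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			if e, ok := obj.(*corev1.Event); ok {
				fmt.Printf("[%s] %s: %s\n", e.InvolvedObject.Name, e.Reason, e.Message)
			}
		},
	})
	// The reflector inside Run logs "Failed to watch *v1.Event" and backs
	// off whenever the API server becomes unreachable, as seen above.
	controller.Run(stop)
}
```
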
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-set0j9
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Nov 30 07:35:21.788: INFO: deleting an existing virtual network "custom-vnet"
Nov 30 07:35:32.297: INFO: deleting an existing route table "node-routetable"
Nov 30 07:35:42.684: INFO: deleting an existing network security group "node-nsg"
E1130 07:35:49.556037   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 30 07:35:54.155: INFO: deleting an existing network security group "control-plane-nsg"
Nov 30 07:36:04.518: INFO: verifying the existing resource group "capz-e2e-set0j9-public-custom-vnet" is empty
Nov 30 07:36:05.210: INFO: deleting the existing resource group "capz-e2e-set0j9-public-custom-vnet"
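
The additional cleanup deletes the custom vnet, route table, and NSGs, then the whole resource group. In the track-1 Azure SDK, such deletes are long-running operations that return a future to wait on; a hedged sketch (the package API version and authorizer setup are assumptions):

```go
package main

import (
	"context"

	"github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2019-05-01/resources"
	"github.com/Azure/go-autorest/autorest"
)

// deleteResourceGroup starts an async delete and blocks until Azure finishes.
func deleteResourceGroup(ctx context.Context, subscriptionID, group string, auth autorest.Authorizer) error {
	client := resources.NewGroupsClient(subscriptionID)
	client.Authorizer = auth

	future, err := client.Delete(ctx, group) // returns immediately with a future
	if err != nil {
		return err
	}
	return future.WaitForCompletionRef(ctx, client.Client)
}
```
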
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1130 07:36:28.486033   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 49m52s on Ginkgo node 3 of 3


• [SLOW TEST:2992.077 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:490

INFO: "with a single control plane node and 1 node" started at Tue, 30 Nov 2021 07:37:01 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-9nsh3z" for hosting the cluster
Nov 30 07:37:01.626: INFO: starting to create namespace for hosting the "capz-e2e-9nsh3z" test spec
2021/11/30 07:37:01 failed trying to get namespace (capz-e2e-9nsh3z):namespaces "capz-e2e-9nsh3z" not found
INFO: Creating namespace capz-e2e-9nsh3z
INFO: Creating event watcher for namespace "capz-e2e-9nsh3z"
Nov 30 07:37:01.659: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-9nsh3z-aks
INFO: Creating the workload cluster with name "capz-e2e-9nsh3z-aks" using the "aks-multi-tenancy" template (Kubernetes v1.19.13, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 7 lines ...
machinepool.cluster.x-k8s.io/agentpool1 created
azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1130 07:37:02.813889   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 07:37:58.307496   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 07:38:55.447711   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 07:39:36.580000   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 07:40:08.221101   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 07:40:56.231579   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 07:41:31.196536   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
Nov 30 07:41:33.802: INFO: Waiting for the first control plane machine managed by capz-e2e-9nsh3z/capz-e2e-9nsh3z-aks to be provisioned
STEP: Waiting for at least one control plane node to exist
INFO: Waiting for control plane to be ready
Nov 30 07:41:43.863: INFO: Waiting for the first control plane machine managed by capz-e2e-9nsh3z/capz-e2e-9nsh3z-aks to be provisioned
STEP: Waiting for all control plane nodes to exist
... skipping 13 lines ...
STEP: time sync OK for host aks-agentpool1-37380610-vmss000000
STEP: time sync OK for host aks-agentpool1-37380610-vmss000000
STEP: Dumping logs from the "capz-e2e-9nsh3z-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-9nsh3z/capz-e2e-9nsh3z-aks logs
Nov 30 07:41:50.395: INFO: Collecting logs for node aks-agentpool1-37380610-vmss000000 in cluster capz-e2e-9nsh3z-aks in namespace capz-e2e-9nsh3z

E1130 07:42:17.661508   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 07:42:56.454860   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 07:43:43.562931   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 30 07:44:00.234: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool0, cluster capz-e2e-9nsh3z/capz-e2e-9nsh3z-aks: [dialing public load balancer at capz-e2e-9nsh3z-aks-7180a7cf.hcp.eastus.azmk8s.io: dial tcp 20.42.37.38:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
Nov 30 07:44:00.705: INFO: Collecting logs for node aks-agentpool1-37380610-vmss000000 in cluster capz-e2e-9nsh3z-aks in namespace capz-e2e-9nsh3z

E1130 07:44:15.310392   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 07:44:53.327669   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 07:45:53.451919   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 30 07:46:11.302: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool1, cluster capz-e2e-9nsh3z/capz-e2e-9nsh3z-aks: [dialing public load balancer at capz-e2e-9nsh3z-aks-7180a7cf.hcp.eastus.azmk8s.io: dial tcp 20.42.37.38:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
STEP: Dumping workload cluster capz-e2e-9nsh3z/capz-e2e-9nsh3z-aks kube-system pod logs
STEP: Fetching kube-system pod logs took 435.881873ms
STEP: Dumping workload cluster capz-e2e-9nsh3z/capz-e2e-9nsh3z-aks Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-f48wp, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-84d976c568-95p4g, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-cwptb, container kube-proxy
... skipping 8 lines ...
STEP: Fetching activity logs took 457.784405ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-9nsh3z" namespace
STEP: Deleting all clusters in the capz-e2e-9nsh3z namespace
STEP: Deleting cluster capz-e2e-9nsh3z-aks
INFO: Waiting for the Cluster capz-e2e-9nsh3z/capz-e2e-9nsh3z-aks to be deleted
STEP: Waiting for cluster capz-e2e-9nsh3z-aks to be deleted
E1130 07:46:24.594982   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 07:47:14.083252   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 07:47:54.037996   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 07:48:28.426485   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 07:49:07.225629   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 07:49:53.727938   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 07:50:31.386006   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-9nsh3z
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1130 07:51:07.836312   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "with a single control plane node and 1 node" ran for 14m35s on Ginkgo node 3 of 3


• [SLOW TEST:874.772 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a 1 control plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:455

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Tue, 30 Nov 2021 07:36:14 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-dm78ww" for hosting the cluster
Nov 30 07:36:14.555: INFO: starting to create namespace for hosting the "capz-e2e-dm78ww" test spec
2021/11/30 07:36:14 failed trying to get namespace (capz-e2e-dm78ww):namespaces "capz-e2e-dm78ww" not found
INFO: Creating namespace capz-e2e-dm78ww
INFO: Creating event watcher for namespace "capz-e2e-dm78ww"
Nov 30 07:36:14.587: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-dm78ww-oot
INFO: Creating the workload cluster with name "capz-e2e-dm78ww-oot" using the "external-cloud-provider" template (Kubernetes v1.22.4, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 53 lines ...
STEP: waiting for job default/curl-to-elb-jobhiqrwc4znkg to be complete
Nov 30 07:44:47.032: INFO: waiting for job default/curl-to-elb-jobhiqrwc4znkg to be complete
Nov 30 07:44:57.105: INFO: job default/curl-to-elb-jobhiqrwc4znkg is complete, took 10.072670677s
STEP: connecting directly to the external LB service
Nov 30 07:44:57.105: INFO: starting attempts to connect directly to the external LB service
2021/11/30 07:44:57 [DEBUG] GET http://20.81.86.200
2021/11/30 07:45:27 [ERR] GET http://20.81.86.200 request failed: Get "http://20.81.86.200": dial tcp 20.81.86.200:80: i/o timeout
2021/11/30 07:45:27 [DEBUG] GET http://20.81.86.200: retrying in 1s (4 left)
Nov 30 07:45:43.455: INFO: successfully connected to the external LB service
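
The "[DEBUG] GET ... retrying in 1s (4 left)" lines match the default logging of hashicorp/go-retryablehttp, so the direct LB probe is presumably a retrying HTTP GET. A sketch under that assumption; the retry budget is illustrative:

```go
package main

import (
	"github.com/hashicorp/go-retryablehttp"
)

// probeLB issues a GET with automatic retries, as the DEBUG/ERR lines suggest.
func probeLB(url string) error {
	client := retryablehttp.NewClient()
	client.RetryMax = 5 // "(4 left)" above implies a small retry budget
	resp, err := client.Get(url)
	if err != nil {
		return err // retries exhausted without a successful response
	}
	defer resp.Body.Close()
	return nil
}
```
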
STEP: deleting the test resources
Nov 30 07:45:43.455: INFO: starting to delete external LB service webvdzgm6-elb
Nov 30 07:45:43.522: INFO: starting to delete deployment webvdzgm6
Nov 30 07:45:43.555: INFO: starting to delete job curl-to-elb-jobhiqrwc4znkg
... skipping 34 lines ...
STEP: Fetching activity logs took 521.782541ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-dm78ww" namespace
STEP: Deleting all clusters in the capz-e2e-dm78ww namespace
STEP: Deleting cluster capz-e2e-dm78ww-oot
INFO: Waiting for the Cluster capz-e2e-dm78ww/capz-e2e-dm78ww-oot to be deleted
STEP: Waiting for cluster capz-e2e-dm78ww-oot to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-dm78ww-oot-control-plane-jzbdf, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-dm78ww-oot-control-plane-jzbdf, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-49mqp, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-dm78ww-oot-control-plane-jzbdf, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-dm78ww-oot-control-plane-jzbdf, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-n7gnv, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-zd54g, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-9777m, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-controller-manager, container cloud-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-wz5x5, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-fvnh4, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5hn6w, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-qnvzj, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-dm78ww
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 19m32s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:413

INFO: "with a single control plane node and 1 node" started at Tue, 30 Nov 2021 07:34:36 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-2ftq2q" for hosting the cluster
Nov 30 07:34:36.657: INFO: starting to create namespace for hosting the "capz-e2e-2ftq2q" test spec
2021/11/30 07:34:36 failed trying to get namespace (capz-e2e-2ftq2q):namespaces "capz-e2e-2ftq2q" not found
INFO: Creating namespace capz-e2e-2ftq2q
INFO: Creating event watcher for namespace "capz-e2e-2ftq2q"
Nov 30 07:34:36.685: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-2ftq2q-gpu
INFO: Creating the workload cluster with name "capz-e2e-2ftq2q-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.4, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 53 lines ...
STEP: Fetching activity logs took 949.712819ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-2ftq2q" namespace
STEP: Deleting all clusters in the capz-e2e-2ftq2q namespace
STEP: Deleting cluster capz-e2e-2ftq2q-gpu
INFO: Waiting for the Cluster capz-e2e-2ftq2q/capz-e2e-2ftq2q-gpu to be deleted
STEP: Waiting for cluster capz-e2e-2ftq2q-gpu to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-lr9jp, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-58qxt, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-566cx, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-2ftq2q-gpu-control-plane-4czlf, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-2ftq2q-gpu-control-plane-4czlf, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-ssrrc, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-bqbv4, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-2ftq2q-gpu-control-plane-4czlf, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-2ftq2q-gpu-control-plane-4czlf, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-2ftq2q
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and 1 node" ran for 34m9s on Ginkgo node 1 of 3

... skipping 59 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:532

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Tue, 30 Nov 2021 07:51:36 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-1s2fxl" for hosting the cluster
Nov 30 07:51:36.400: INFO: starting to create namespace for hosting the "capz-e2e-1s2fxl" test spec
2021/11/30 07:51:36 failed trying to get namespace (capz-e2e-1s2fxl):namespaces "capz-e2e-1s2fxl" not found
INFO: Creating namespace capz-e2e-1s2fxl
INFO: Creating event watcher for namespace "capz-e2e-1s2fxl"
Nov 30 07:51:36.425: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-1s2fxl-win-ha
INFO: Creating the workload cluster with name "capz-e2e-1s2fxl-win-ha" using the "windows" template (Kubernetes v1.22.4, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 12 lines ...
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created
clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-1s2fxl-win-ha-flannel created
configmap/cni-capz-e2e-1s2fxl-win-ha-flannel created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1130 07:51:56.858420   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 07:52:36.484488   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
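Note: the reflector.go:138 errors above come from a client-go event watcher whose target endpoint (capz-e2e-set0j9, a cluster from an earlier spec) no longer resolves, so every list/watch retry fails with "no such host". A minimal sketch, assuming a kubeconfig path, of the kind of watcher that produces these lines:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed path; the e2e framework builds this from the workload cluster's kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/workload-kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Cancelling the context closes the stop channel and ends the reflector's retry loop.
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
        defer cancel()

        factory := informers.NewSharedInformerFactoryWithOptions(cs, 0,
            informers.WithNamespace("capz-e2e-set0j9")) // namespace from the log lines above
        inf := factory.Core().V1().Events().Informer()
        inf.AddEventHandler(cache.ResourceEventHandlerFuncs{
            AddFunc: func(obj interface{}) {
                ev := obj.(*corev1.Event)
                fmt.Printf("%s/%s: %s\n", ev.Namespace, ev.Name, ev.Message)
            },
        })
        factory.Start(ctx.Done())
        factory.WaitForCacheSync(ctx.Done())
        <-ctx.Done()
    }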
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by capz-e2e-1s2fxl/capz-e2e-1s2fxl-win-ha-control-plane to be provisioned
STEP: Waiting for one control plane node to exist
E1130 07:53:24.416525   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 07:54:12.558107   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be ready
INFO: Waiting for the remaining control plane machines managed by capz-e2e-1s2fxl/capz-e2e-1s2fxl-win-ha-control-plane to be provisioned
STEP: Waiting for all control plane nodes to exist
E1130 07:54:55.118732   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 07:55:42.382629   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 07:56:33.904281   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 07:57:15.347674   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 07:58:13.148262   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane capz-e2e-1s2fxl/capz-e2e-1s2fxl-win-ha-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Waiting for the workload nodes to exist
E1130 07:58:45.981249   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for the machine pools to be provisioned
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/webyty94v to be available
Nov 30 07:58:59.694: INFO: starting to wait for deployment to become available
E1130 07:59:18.350973   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 30 07:59:19.802: INFO: Deployment default/webyty94v is now available, took 20.10826647s
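Note: the 20-second wait above is a simple availability poll. A hedged sketch (function and timeout are assumptions, not the framework's exact helper) of polling a Deployment until its available replicas match the desired count:

    package e2esketch

    import (
        "context"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitForDeploymentAvailable(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(5*time.Second, timeout, func() (bool, error) {
            d, err := cs.AppsV1().Deployments(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if apierrors.IsNotFound(err) {
                return false, nil // not created yet; keep polling
            }
            if err != nil {
                return false, err // any other error aborts the wait
            }
            if d.Spec.Replicas == nil {
                return false, nil // replicas not defaulted yet
            }
            return d.Status.AvailableReplicas == *d.Spec.Replicas, nil
        })
    }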
STEP: creating an internal Load Balancer service
Nov 30 07:59:19.802: INFO: starting to create an internal Load Balancer service
STEP: waiting for service default/webyty94v-ilb to be available
Nov 30 07:59:19.893: INFO: waiting for service default/webyty94v-ilb to be available
E1130 07:59:49.172582   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 08:00:23.079266   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 30 08:00:40.210: INFO: service default/webyty94v-ilb is available, took 1m20.316872851s
STEP: connecting to the internal LB service from a curl pod
Nov 30 08:00:40.243: INFO: starting to create a curl-to-ilb job
STEP: waiting for job default/curl-to-ilb-job6kzb8 to be complete
Nov 30 08:00:40.297: INFO: waiting for job default/curl-to-ilb-job6kzb8 to be complete
Nov 30 08:00:50.371: INFO: job default/curl-to-ilb-job6kzb8 is complete, took 10.074021995s
STEP: deleting the ilb test resources
Nov 30 08:00:50.371: INFO: deleting the ilb service: webyty94v-ilb
Nov 30 08:00:50.454: INFO: deleting the ilb job: curl-to-ilb-job6kzb8
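Note: the internal LB exercise above (create service, curl it from inside the cluster, delete) hinges on one Azure-specific annotation. A minimal sketch, with the builder name and pod label assumed, of the kind of Service it creates; the annotation asks the Azure cloud provider for an internal rather than a public load balancer, which is why the check runs as an in-cluster curl Job:

    package e2esketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func internalLBService(ns, app string) *corev1.Service {
        return &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{
                Namespace: ns,
                Name:      app + "-ilb", // matches the webyty94v-ilb naming above
                Annotations: map[string]string{
                    "service.beta.kubernetes.io/azure-load-balancer-internal": "true",
                },
            },
            Spec: corev1.ServiceSpec{
                Type:     corev1.ServiceTypeLoadBalancer,
                Selector: map[string]string{"app": app}, // assumed pod label
                Ports: []corev1.ServicePort{{
                    Port:       80,
                    TargetPort: intstr.FromInt(80),
                }},
            },
        }
    }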
STEP: creating an external Load Balancer service
Nov 30 08:00:50.492: INFO: starting to create an external Load Balancer service
STEP: waiting for service default/webyty94v-elb to be available
Nov 30 08:00:50.553: INFO: waiting for service default/webyty94v-elb to be available
E1130 08:01:20.088289   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 08:02:07.962615   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 30 08:03:01.054: INFO: service default/webyty94v-elb is available, took 2m10.501089502s
STEP: connecting to the external LB service from a curl pod
Nov 30 08:03:01.087: INFO: starting to create curl-to-elb job
STEP: waiting for job default/curl-to-elb-jobrnryu15abvk to be complete
Nov 30 08:03:01.126: INFO: waiting for job default/curl-to-elb-jobrnryu15abvk to be complete
E1130 08:03:01.760202   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 30 08:03:11.202: INFO: job default/curl-to-elb-jobrnryu15abvk is complete, took 10.075294077s
STEP: connecting directly to the external LB service
Nov 30 08:03:11.202: INFO: starting attempts to connect directly to the external LB service
2021/11/30 08:03:11 [DEBUG] GET http://20.88.164.128
Nov 30 08:03:18.426: INFO: successfully connected to the external LB service
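Note: the [DEBUG] GET line above has the look of a retrying HTTP client (e.g. hashicorp/go-retryablehttp). A minimal stdlib sketch of the same idea, polling the LB's public IP until it answers (function name and retry budget are assumptions):

    package e2esketch

    import (
        "fmt"
        "net/http"
        "time"
    )

    func waitForHTTPOK(url string, attempts int) error {
        var lastErr error
        for i := 0; i < attempts; i++ {
            resp, err := http.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // load balancer answered; connectivity verified
                }
                lastErr = fmt.Errorf("unexpected status %d", resp.StatusCode)
            } else {
                lastErr = err
            }
            time.Sleep(5 * time.Second) // Azure LB rules can take a while to program
        }
        return lastErr
    }

For the run above that would be roughly waitForHTTPOK("http://20.88.164.128", 60), which succeeded after about seven seconds.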
STEP: deleting the test resources
Nov 30 08:03:18.426: INFO: starting to delete external LB service webyty94v-elb
Nov 30 08:03:18.498: INFO: starting to delete deployment webyty94v
Nov 30 08:03:18.535: INFO: starting to delete job curl-to-elb-jobrnryu15abvk
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web-windowsy0kgdt to be available
Nov 30 08:03:18.707: INFO: starting to wait for deployment to become available
E1130 08:03:52.882015   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 30 08:04:29.012: INFO: Deployment default/web-windowsy0kgdt is now available, took 1m10.305301359s
STEP: creating an internal Load Balancer service
Nov 30 08:04:29.012: INFO: starting to create an internal Load Balancer service
STEP: waiting for service default/web-windowsy0kgdt-ilb to be available
Nov 30 08:04:29.076: INFO: waiting for service default/web-windowsy0kgdt-ilb to be available
E1130 08:04:40.768087   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 08:05:16.401051   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 08:06:07.157363   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 30 08:06:19.479: INFO: service default/web-windowsy0kgdt-ilb is available, took 1m50.403099723s
STEP: connecting to the internal LB service from a curl pod
Nov 30 08:06:19.512: INFO: starting to create a curl-to-ilb job
STEP: waiting for job default/curl-to-ilb-jobfdjfp to be complete
Nov 30 08:06:19.553: INFO: waiting for job default/curl-to-ilb-jobfdjfp to be complete
Nov 30 08:06:29.625: INFO: job default/curl-to-ilb-jobfdjfp is complete, took 10.0719663s
STEP: deleting the ilb test resources
Nov 30 08:06:29.625: INFO: deleting the ilb service: web-windowsy0kgdt-ilb
Nov 30 08:06:29.698: INFO: deleting the ilb job: curl-to-ilb-jobfdjfp
STEP: creating an external Load Balancer service
Nov 30 08:06:29.737: INFO: starting to create an external Load Balancer service
STEP: waiting for service default/web-windowsy0kgdt-elb to be available
Nov 30 08:06:29.803: INFO: waiting for service default/web-windowsy0kgdt-elb to be available
E1130 08:06:43.235354   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 08:07:24.399976   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 08:07:59.866107   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 30 08:08:20.207: INFO: service default/web-windowsy0kgdt-elb is available, took 1m50.403892973s
STEP: connecting to the external LB service from a curl pod
Nov 30 08:08:20.239: INFO: starting to create curl-to-elb job
STEP: waiting for job default/curl-to-elb-jobedf3tf74c5i to be complete
Nov 30 08:08:20.278: INFO: waiting for job default/curl-to-elb-jobedf3tf74c5i to be complete
Nov 30 08:08:30.345: INFO: job default/curl-to-elb-jobedf3tf74c5i is complete, took 10.066767443s
... skipping 6 lines ...
Nov 30 08:08:31.498: INFO: starting to delete deployment web-windowsy0kgdt
Nov 30 08:08:31.540: INFO: starting to delete job curl-to-elb-jobedf3tf74c5i
STEP: Dumping logs from the "capz-e2e-1s2fxl-win-ha" workload cluster
STEP: Dumping workload cluster capz-e2e-1s2fxl/capz-e2e-1s2fxl-win-ha logs
Nov 30 08:08:31.632: INFO: Collecting logs for node capz-e2e-1s2fxl-win-ha-control-plane-sd47h in cluster capz-e2e-1s2fxl-win-ha in namespace capz-e2e-1s2fxl

E1130 08:08:40.547847   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 30 08:08:43.441: INFO: Collecting boot logs for AzureMachine capz-e2e-1s2fxl-win-ha-control-plane-sd47h

Nov 30 08:08:44.231: INFO: Collecting logs for node capz-e2e-1s2fxl-win-ha-control-plane-hs6sh in cluster capz-e2e-1s2fxl-win-ha in namespace capz-e2e-1s2fxl

Nov 30 08:08:52.535: INFO: Collecting boot logs for AzureMachine capz-e2e-1s2fxl-win-ha-control-plane-hs6sh

... skipping 4 lines ...
Nov 30 08:09:00.001: INFO: Collecting logs for node capz-e2e-1s2fxl-win-ha-md-0-2z92z in cluster capz-e2e-1s2fxl-win-ha in namespace capz-e2e-1s2fxl

Nov 30 08:09:09.809: INFO: Collecting boot logs for AzureMachine capz-e2e-1s2fxl-win-ha-md-0-2z92z

Nov 30 08:09:10.124: INFO: Collecting logs for node 10.1.0.4 in cluster capz-e2e-1s2fxl-win-ha in namespace capz-e2e-1s2fxl

E1130 08:09:14.538574   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 30 08:09:37.245: INFO: Collecting boot logs for AzureMachine capz-e2e-1s2fxl-win-ha-md-win-zq2m7

Nov 30 08:09:37.544: INFO: Collecting logs for node 10.1.0.6 in cluster capz-e2e-1s2fxl-win-ha in namespace capz-e2e-1s2fxl

Nov 30 08:10:11.341: INFO: Collecting boot logs for AzureMachine capz-e2e-1s2fxl-win-ha-md-win-jtbl5

... skipping 23 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-4j2zh, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-windows-amd64-gkh75, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-lznm9, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-1s2fxl-win-ha-control-plane-qp82n, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-1s2fxl-win-ha-control-plane-sd47h, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-cmkqf, container coredns
E1130 08:10:12.196157   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Got error while iterating over activity logs for resource group capz-e2e-1s2fxl-win-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000290409s
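Note: the fetch duration of 30.000290409s lining up almost exactly with a context-deadline error suggests the activity-log listing runs under a roughly 30-second context budget. A hedged sketch of that shape; the pager interface here is hypothetical, standing in for the Azure SDK's paged insights.ActivityLogsClient:

    package e2esketch

    import (
        "context"
        "time"
    )

    // pager is a hypothetical stand-in for a paged Azure list client.
    type pager interface {
        NextWithContext(ctx context.Context) (done bool, err error)
    }

    func fetchActivityLogs(p pager) error {
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second) // assumed budget
        defer cancel()
        for {
            done, err := p.NextWithContext(ctx)
            if err != nil {
                // A slow page (here, a server-side 500 being retried) surfaces as
                // "context deadline exceeded" once the 30s budget is spent.
                return err
            }
            if done {
                return nil
            }
        }
    }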
STEP: Dumping all the Cluster API resources in the "capz-e2e-1s2fxl" namespace
STEP: Deleting all clusters in the capz-e2e-1s2fxl namespace
STEP: Deleting cluster capz-e2e-1s2fxl-win-ha
INFO: Waiting for the Cluster capz-e2e-1s2fxl/capz-e2e-1s2fxl-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-1s2fxl-win-ha to be deleted
E1130 08:10:59.089394   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 08:11:49.885049   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-1s2fxl-win-ha-control-plane-qp82n, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-tgdcm, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-1s2fxl-win-ha-control-plane-qp82n, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-lznm9, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-78mgc, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-j9wp6, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-1s2fxl-win-ha-control-plane-sd47h, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-kq24s, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-b475n, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-1s2fxl-win-ha-control-plane-sd47h, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-1s2fxl-win-ha-control-plane-sd47h, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-cmkqf, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-dxcvd, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-1s2fxl-win-ha-control-plane-sd47h, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-2w9fx, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-1s2fxl-win-ha-control-plane-qp82n, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-zrfhx, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-1s2fxl-win-ha-control-plane-qp82n, container kube-scheduler: http2: client connection lost
E1130 08:12:36.076818   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 08:13:33.068019   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 08:14:30.202793   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 08:15:06.463155   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 08:15:37.174390   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 08:16:30.022818   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 08:17:26.371393   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 08:18:08.598478   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 08:18:57.816566   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
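Note: the errors above still reference capz-e2e-set0j9, a cluster deleted by an earlier spec, which suggests its event watcher was never stopped and its reflector keeps retrying a DNS name that no longer resolves. A minimal sketch (names assumed) of tying a watcher's lifetime to the spec so the retries stop when the spec's cleanup runs:

    package e2esketch

    import (
        "context"

        "k8s.io/client-go/informers"
    )

    func startEventWatcher(parent context.Context, factory informers.SharedInformerFactory) (stop func()) {
        ctx, cancel := context.WithCancel(parent)
        factory.Start(ctx.Done()) // reflectors exit when this channel closes
        return cancel             // call from the spec's cleanup (e.g. an AfterEach)
    }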
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-1s2fxl
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1130 08:19:30.914494   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 08:20:05.921725   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1130 08:20:51.635578   24480 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-set0j9/events?resourceVersion=10154": dial tcp: lookup capz-e2e-set0j9-public-custom-vnet-daf78545.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 29m16s on Ginkgo node 3 of 3


• [SLOW TEST:1756.281 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating a Windows Enabled cluster with dockershim
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:530
    With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:532
------------------------------
{"component":"entrypoint","file":"prow/entrypoint/run.go:165","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","severity":"error","time":"2021-11-30T08:39:59Z"}
++ early_exit_handler
++ '[' -n 165 ']'
++ kill -TERM 165
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 12 lines ...
Cleaning up after docker
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.
All sensitive variables are redacted
{"component":"entrypoint","file":"prow/entrypoint/run.go:255","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15m0s grace period","severity":"error","time":"2021-11-30T08:54:59Z"}
{"component":"entrypoint","error":"os: process already finished","file":"prow/entrypoint/run.go:257","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2021-11-30T08:54:59Z"}