Result: FAILURE
Tests: 0 failed / 4 succeeded
Started: 2021-11-27 06:39
Elapsed: 2h15m
Revision: main

No Test Failures! (The FAILURE result comes from the job exceeding the 2h0m0s Prow timeout reported at the end of the log, not from a failing test.)



Error lines from build-log.txt

... skipping 432 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:288

INFO: "With ipv6 worker node" started at Sat, 27 Nov 2021 06:48:46 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-lmquao" for hosting the cluster
Nov 27 06:48:46.206: INFO: starting to create namespace for hosting the "capz-e2e-lmquao" test spec
2021/11/27 06:48:46 failed trying to get namespace (capz-e2e-lmquao):namespaces "capz-e2e-lmquao" not found
INFO: Creating namespace capz-e2e-lmquao
INFO: Creating event watcher for namespace "capz-e2e-lmquao"
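
The `failed trying to get namespace ... not found` line above is expected noise: each spec checks whether its namespace exists and creates it on a not-found error, and the same pair of lines repeats at the start of every spec below. A minimal sketch of that get-or-create pattern with client-go (the function name is illustrative, not the suite's actual helper):

package e2e

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureNamespace is an illustrative stand-in for the framework's
// namespace setup: get the namespace, and create it if it is missing.
func ensureNamespace(ctx context.Context, c kubernetes.Interface, name string) (*corev1.Namespace, error) {
	ns, err := c.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
	if err == nil {
		return ns, nil // already there
	}
	if !apierrors.IsNotFound(err) {
		return nil, err // a real failure, not just "not found"
	}
	// The failed Get above is what logs `namespaces "<name>" not found`.
	return c.CoreV1().Namespaces().Create(ctx, &corev1.Namespace{
		ObjectMeta: metav1.ObjectMeta{Name: name},
	}, metav1.CreateOptions{})
}
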
Nov 27 06:48:46.249: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-lmquao-ipv6
INFO: Creating the workload cluster with name "capz-e2e-lmquao-ipv6" using the "ipv6" template (Kubernetes v1.22.4, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
STEP: Fetching activity logs took 527.370786ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-lmquao" namespace
STEP: Deleting all clusters in the capz-e2e-lmquao namespace
STEP: Deleting cluster capz-e2e-lmquao-ipv6
INFO: Waiting for the Cluster capz-e2e-lmquao/capz-e2e-lmquao-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-lmquao-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-lmquao-ipv6-control-plane-nnmlq, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-gf7ts, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-5xlt2, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-sd7d7, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-lmquao-ipv6-control-plane-vn528, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-lmquao-ipv6-control-plane-nnmlq, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-46hb7, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-lmquao-ipv6-control-plane-vn528, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-ndplv, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-lmquao-ipv6-control-plane-vn528, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-lmquao-ipv6-control-plane-nnmlq, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-lmquao-ipv6-control-plane-vn528, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-f85xx, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-qqfzn, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-lmquao-ipv6-control-plane-nnmlq, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-lmquao
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 18m33s on Ginkgo node 2 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:205

INFO: "With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes" started at Sat, 27 Nov 2021 06:48:45 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-l9iw46" for hosting the cluster
Nov 27 06:48:45.839: INFO: starting to create namespace for hosting the "capz-e2e-l9iw46" test spec
2021/11/27 06:48:45 failed trying to get namespace (capz-e2e-l9iw46):namespaces "capz-e2e-l9iw46" not found
INFO: Creating namespace capz-e2e-l9iw46
INFO: Creating event watcher for namespace "capz-e2e-l9iw46"
Nov 27 06:48:45.880: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-l9iw46-ha
INFO: Creating the workload cluster with name "capz-e2e-l9iw46-ha" using the "(default)" template (Kubernetes v1.22.4, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 75 lines ...
Nov 27 06:59:24.685: INFO: starting to delete external LB service webppjf26-elb
Nov 27 06:59:24.860: INFO: starting to delete deployment webppjf26
Nov 27 06:59:24.979: INFO: starting to delete job curl-to-elb-job4gz1bxxvcav
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Nov 27 06:59:25.176: INFO: starting to create dev deployment namespace
2021/11/27 06:59:25 failed trying to get namespace (development):namespaces "development" not found
2021/11/27 06:59:25 namespace development does not exist, creating...
STEP: Creating production namespace
Nov 27 06:59:25.410: INFO: starting to create prod deployment namespace
2021/11/27 06:59:25 failed trying to get namespace (production):namespaces "production" not found
2021/11/27 06:59:25 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
Nov 27 06:59:25.642: INFO: starting to create frontend-prod deployments
Nov 27 06:59:25.761: INFO: starting to create frontend-dev deployments
Nov 27 06:59:25.878: INFO: starting to create backend deployments
Nov 27 06:59:25.994: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Nov 27 06:59:52.914: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.176.196 port 80: Connection timed out
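The timed-out curl above is the expected outcome of the deny-ingress policy. A sketch of what a policy like development/backend-deny-ingress plausibly looks like, expressed with the Kubernetes API types (the labels are inferred from the log, not taken from the suite's manifest):

package e2e

import (
	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// backendDenyIngress approximates development/backend-deny-ingress:
// selecting the backend pods with an Ingress policy type and no ingress
// rules denies all inbound traffic, which is why the probe times out.
var backendDenyIngress = &networkingv1.NetworkPolicy{
	ObjectMeta: metav1.ObjectMeta{Name: "backend-deny-ingress", Namespace: "development"},
	Spec: networkingv1.NetworkPolicySpec{
		PodSelector: metav1.LabelSelector{
			MatchLabels: map[string]string{"app": "webapp", "role": "backend"},
		},
		PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
	},
}

The deny-egress and allow-by-label steps that follow have the same shape, swapping the PolicyTypes entry and adding Ingress/Egress rules with pod- and namespace-selectors.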

STEP: Cleaning up after ourselves
Nov 27 07:02:04.968: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
STEP: Applying a network policy to deny egress access in development namespace
Nov 27 07:02:05.371: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.176.196 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.176.196 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 27 07:06:26.271: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Nov 27 07:06:26.663: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.138.131 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 27 07:08:39.392: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
Nov 27 07:08:39.797: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.176.194 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.138.131 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 27 07:13:03.584: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Nov 27 07:13:03.988: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.176.196 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 27 07:15:15.495: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
Nov 27 07:15:15.926: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.176.196 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web-windowsrslsdd to be available
Nov 27 07:17:29.252: INFO: starting to wait for deployment to become available
Nov 27 07:18:30.099: INFO: Deployment default/web-windowsrslsdd is now available, took 1m0.846594117s
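
A sketch of the polling loop behind "waiting for deployment to become available" (the helper name, interval, and timeout are assumptions, not the suite's actual code):

package e2e

import (
	"context"
	"time"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDeploymentAvailable polls until the Deployment reports the
// Available condition, the moment timed as "took 1m0.846594117s" above.
func waitForDeploymentAvailable(c kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(5*time.Second, 10*time.Minute, func() (bool, error) {
		d, err := c.AppsV1().Deployments(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling through transient errors
		}
		for _, cond := range d.Status.Conditions {
			if cond.Type == appsv1.DeploymentAvailable && cond.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
}
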
... skipping 51 lines ...
Nov 27 07:22:36.554: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-l9iw46-ha-md-0-cdhj8

Nov 27 07:22:36.996: INFO: INFO: Collecting logs for node 10.1.0.7 in cluster capz-e2e-l9iw46-ha in namespace capz-e2e-l9iw46

Nov 27 07:23:04.743: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-l9iw46-ha-md-win-kthf5

Failed to get logs for machine capz-e2e-l9iw46-ha-md-win-75768cdc8b-hssfl, cluster capz-e2e-l9iw46/capz-e2e-l9iw46-ha: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov 27 07:23:05.193: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster capz-e2e-l9iw46-ha in namespace capz-e2e-l9iw46

Nov 27 07:23:33.156: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-l9iw46-ha-md-win-6cj42

Failed to get logs for machine capz-e2e-l9iw46-ha-md-win-75768cdc8b-xtz7w, cluster capz-e2e-l9iw46/capz-e2e-l9iw46-ha: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster capz-e2e-l9iw46/capz-e2e-l9iw46-ha kube-system pod logs
STEP: Fetching kube-system pod logs took 931.413954ms
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-rvq6p, container coredns
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-l9iw46-ha-control-plane-598hz, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-l9iw46-ha-control-plane-kzt2p, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-d5n86, container kube-proxy
... skipping 22 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-s49ts, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-dmv49, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-windows-wn22p, container calico-node-felix
STEP: Creating log watcher for controller kube-system/kube-proxy-wzwz7, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-l9iw46-ha-control-plane-598hz, container kube-scheduler
STEP: Dumping workload cluster capz-e2e-l9iw46/capz-e2e-l9iw46-ha Azure activity log
STEP: Got error while iterating over activity logs for resource group capz-e2e-l9iw46-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000184679s
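
The fetch time of almost exactly 30s, paired with a "context deadline exceeded" from listNextResults, suggests the activity-log dump runs under a 30-second deadline (an inference from the timings here and in the later specs, not confirmed by the source). A self-contained sketch of that timeout pattern:

package main

import (
	"context"
	"fmt"
	"time"
)

// slowPagedFetch stands in for iterating Azure activity-log pages
// (hypothetical; the real call is on insights.ActivityLogsClient).
func slowPagedFetch(ctx context.Context) error {
	select {
	case <-time.After(45 * time.Second): // Azure paginates too slowly this run
		return nil
	case <-ctx.Done():
		return ctx.Err() // surfaces as "context deadline exceeded"
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	start := time.Now()
	err := slowPagedFetch(ctx)
	// Prints an elapsed time of almost exactly 30s, as in the log above.
	fmt.Printf("Fetching activity logs took %s (err: %v)\n", time.Since(start), err)
}
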
STEP: Dumping all the Cluster API resources in the "capz-e2e-l9iw46" namespace
STEP: Deleting all clusters in the capz-e2e-l9iw46 namespace
STEP: Deleting cluster capz-e2e-l9iw46-ha
INFO: Waiting for the Cluster capz-e2e-l9iw46/capz-e2e-l9iw46-ha to be deleted
STEP: Waiting for cluster capz-e2e-l9iw46-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-l9iw46-ha-control-plane-nsgbv, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-s49ts, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-l9iw46-ha-control-plane-nsgbv, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-l9iw46-ha-control-plane-nsgbv, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-zrg7t, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-l9iw46-ha-control-plane-nsgbv, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-l9iw46
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes" ran for 46m21s on Ginkgo node 3 of 3

... skipping 8 lines ...
  with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:334

INFO: "with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes" started at Sat, 27 Nov 2021 07:07:19 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-hw6m04" for hosting the cluster
Nov 27 07:07:19.547: INFO: starting to create namespace for hosting the "capz-e2e-hw6m04" test spec
2021/11/27 07:07:19 failed trying to get namespace (capz-e2e-hw6m04):namespaces "capz-e2e-hw6m04" not found
INFO: Creating namespace capz-e2e-hw6m04
INFO: Creating event watcher for namespace "capz-e2e-hw6m04"
Nov 27 07:07:19.580: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-hw6m04-vmss
INFO: Creating the workload cluster with name "capz-e2e-hw6m04-vmss" using the "machine-pool" template (Kubernetes v1.22.4, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 140 lines ...
Nov 27 07:26:13.340: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-e2e-hw6m04-vmss-mp-0

Nov 27 07:26:13.868: INFO: INFO: Collecting logs for node win-p-win000001 in cluster capz-e2e-hw6m04-vmss in namespace capz-e2e-hw6m04

Nov 27 07:26:26.078: INFO: INFO: Collecting boot logs for VMSS instance 1 of scale set capz-e2e-hw6m04-vmss-mp-0

Failed to get logs for machine pool capz-e2e-hw6m04-vmss-mp-0, cluster capz-e2e-hw6m04/capz-e2e-hw6m04-vmss: [running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1]
Nov 27 07:26:26.495: INFO: INFO: Collecting logs for node win-p-win000000 in cluster capz-e2e-hw6m04-vmss in namespace capz-e2e-hw6m04

Nov 27 07:26:52.064: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set win-p-win

Nov 27 07:26:52.451: INFO: INFO: Collecting logs for node win-p-win000001 in cluster capz-e2e-hw6m04-vmss in namespace capz-e2e-hw6m04

Nov 27 07:27:25.168: INFO: INFO: Collecting boot logs for VMSS instance 1 of scale set win-p-win

Failed to get logs for machine pool capz-e2e-hw6m04-vmss-mp-win, cluster capz-e2e-hw6m04/capz-e2e-hw6m04-vmss: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster capz-e2e-hw6m04/capz-e2e-hw6m04-vmss kube-system pod logs
STEP: Fetching kube-system pod logs took 1.112299625s
STEP: Creating log watcher for controller kube-system/calico-node-r4qws, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-rz5xt, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-9lt4h, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-v9n8r, container kube-proxy
... skipping 10 lines ...
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-xxnsm, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-ldfjr, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-windows-pc6xr, container calico-node-felix
STEP: Creating log watcher for controller kube-system/calico-node-lzxlw, container calico-node
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-hw6m04-vmss-control-plane-kxps6, container kube-scheduler
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-hw6m04-vmss-control-plane-kxps6, container etcd
STEP: Got error while iterating over activity logs for resource group capz-e2e-hw6m04-vmss: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000993957s
STEP: Dumping all the Cluster API resources in the "capz-e2e-hw6m04" namespace
STEP: Deleting all clusters in the capz-e2e-hw6m04 namespace
STEP: Deleting cluster capz-e2e-hw6m04-vmss
INFO: Waiting for the Cluster capz-e2e-hw6m04/capz-e2e-hw6m04-vmss to be deleted
STEP: Waiting for cluster capz-e2e-hw6m04-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-rx64n, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-hw6m04-vmss-control-plane-kxps6, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-hw6m04-vmss-control-plane-kxps6, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-clzbk, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-hw6m04-vmss-control-plane-kxps6, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-xxnsm, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-j9djw, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-ldfjr, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-hw6m04-vmss-control-plane-kxps6, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-hw6m04
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes" ran for 27m58s on Ginkgo node 2 of 3

... skipping 10 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:144

INFO: "Creates a public management cluster in the same vnet" started at Sat, 27 Nov 2021 06:48:45 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-rt8srh" for hosting the cluster
Nov 27 06:48:45.361: INFO: starting to create namespace for hosting the "capz-e2e-rt8srh" test spec
2021/11/27 06:48:45 failed trying to get namespace (capz-e2e-rt8srh):namespaces "capz-e2e-rt8srh" not found
INFO: Creating namespace capz-e2e-rt8srh
INFO: Creating event watcher for namespace "capz-e2e-rt8srh"
Nov 27 06:48:45.397: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-rt8srh-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-k2264, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-2r7vk, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-4wtfs, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-rt8srh-public-custom-vnet-control-plane-mw9lz, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-shbhw, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-rt8srh-public-custom-vnet-control-plane-mw9lz, container kube-scheduler
STEP: Got error while iterating over activity logs for resource group capz-e2e-rt8srh-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000970946s
STEP: Dumping all the Cluster API resources in the "capz-e2e-rt8srh" namespace
STEP: Deleting all clusters in the capz-e2e-rt8srh namespace
STEP: Deleting cluster capz-e2e-rt8srh-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-rt8srh/capz-e2e-rt8srh-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-rt8srh-public-custom-vnet to be deleted
W1127 07:41:47.379237   24266 reflector.go:441] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I1127 07:42:18.841639   24266 trace.go:205] Trace[2047998814]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (27-Nov-2021 07:41:48.840) (total time: 30001ms):
Trace[2047998814]: [30.001286203s] [30.001286203s] END
E1127 07:42:18.841695   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp 51.105.221.28:6443: i/o timeout
I1127 07:42:50.789395   24266 trace.go:205] Trace[1055271160]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (27-Nov-2021 07:42:20.788) (total time: 30001ms):
Trace[1055271160]: [30.001082471s] [30.001082471s] END
E1127 07:42:50.789464   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp 51.105.221.28:6443: i/o timeout
I1127 07:43:24.496354   24266 trace.go:205] Trace[285218669]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (27-Nov-2021 07:42:54.495) (total time: 30001ms):
Trace[285218669]: [30.001256971s] [30.001256971s] END
E1127 07:43:24.496423   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp 51.105.221.28:6443: i/o timeout
I1127 07:44:00.910641   24266 trace.go:205] Trace[1046448023]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (27-Nov-2021 07:43:30.910) (total time: 30000ms):
Trace[1046448023]: [30.000596017s] [30.000596017s] END
E1127 07:44:00.910710   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp 51.105.221.28:6443: i/o timeout
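
These W/I/E lines come from the client-go reflector (reflector.go in the traces) behind the namespace event watcher created at the start of this spec: once the workload cluster's endpoint is torn down, every relist fails and the reflector backs off and retries until the watcher is stopped. The suite's watcher is reflector-based; this raw Watch sketch (function name illustrative) shows the same failure mode in simpler form:

package e2e

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchEvents mirrors the "Creating event watcher for namespace" step;
// after cluster teardown its calls fail with the dial errors above.
func watchEvents(ctx context.Context, c kubernetes.Interface, ns string) error {
	w, err := c.CoreV1().Events(ns).Watch(ctx, metav1.ListOptions{})
	if err != nil {
		return err // e.g. "dial tcp ...:6443: i/o timeout" once the endpoint is gone
	}
	defer w.Stop()
	for range w.ResultChan() {
		// drain events until the stream closes or ctx is cancelled
	}
	return nil
}
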
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-rt8srh
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Nov 27 07:44:20.525: INFO: deleting an existing virtual network "custom-vnet"
E1127 07:44:24.697959   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp: lookup capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 27 07:44:31.691: INFO: deleting an existing route table "node-routetable"
Nov 27 07:44:42.645: INFO: deleting an existing network security group "node-nsg"
Nov 27 07:44:53.905: INFO: deleting an existing network security group "control-plane-nsg"
E1127 07:45:00.331044   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp: lookup capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 27 07:45:04.829: INFO: verifying the existing resource group "capz-e2e-rt8srh-public-custom-vnet" is empty
Nov 27 07:45:04.897: INFO: deleting the existing resource group "capz-e2e-rt8srh-public-custom-vnet"
E1127 07:45:40.237644   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp: lookup capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 07:46:23.442992   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp: lookup capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1127 07:46:57.996854   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp: lookup capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 58m28s on Ginkgo node 1 of 3


• [SLOW TEST:3508.118 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a 1 control plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:455

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Sat, 27 Nov 2021 07:35:17 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-vu7htv" for hosting the cluster
Nov 27 07:35:17.249: INFO: starting to create namespace for hosting the "capz-e2e-vu7htv" test spec
2021/11/27 07:35:17 failed trying to get namespace (capz-e2e-vu7htv):namespaces "capz-e2e-vu7htv" not found
INFO: Creating namespace capz-e2e-vu7htv
INFO: Creating event watcher for namespace "capz-e2e-vu7htv"
Nov 27 07:35:17.294: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-vu7htv-oot
INFO: Creating the workload cluster with name "capz-e2e-vu7htv-oot" using the "external-cloud-provider" template (Kubernetes v1.22.4, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 98 lines ...
STEP: Fetching activity logs took 546.599739ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-vu7htv" namespace
STEP: Deleting all clusters in the capz-e2e-vu7htv namespace
STEP: Deleting cluster capz-e2e-vu7htv-oot
INFO: Waiting for the Cluster capz-e2e-vu7htv/capz-e2e-vu7htv-oot to be deleted
STEP: Waiting for cluster capz-e2e-vu7htv-oot to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-vu7htv-oot-control-plane-wwksl, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-controller-manager, container cloud-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-c45xx, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-vh9gv, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-vu7htv-oot-control-plane-wwksl, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-wbqww, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-vl6dx, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-59th4, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-mjtd6, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-s9rhz, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-vu7htv-oot-control-plane-wwksl, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-2h6kn, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-vu7htv-oot-control-plane-wwksl, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-8tm5z, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-vu7htv
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 17m14s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:413

INFO: "with a single control plane node and 1 node" started at Sat, 27 Nov 2021 07:35:06 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-pvm66j" for hosting the cluster
Nov 27 07:35:06.978: INFO: starting to create namespace for hosting the "capz-e2e-pvm66j" test spec
2021/11/27 07:35:06 failed trying to get namespace (capz-e2e-pvm66j):namespaces "capz-e2e-pvm66j" not found
INFO: Creating namespace capz-e2e-pvm66j
INFO: Creating event watcher for namespace "capz-e2e-pvm66j"
Nov 27 07:35:07.022: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-pvm66j-gpu
INFO: Creating the workload cluster with name "capz-e2e-pvm66j-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.4, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 53 lines ...
STEP: Fetching activity logs took 999.148586ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-pvm66j" namespace
STEP: Deleting all clusters in the capz-e2e-pvm66j namespace
STEP: Deleting cluster capz-e2e-pvm66j-gpu
INFO: Waiting for the Cluster capz-e2e-pvm66j/capz-e2e-pvm66j-gpu to be deleted
STEP: Waiting for cluster capz-e2e-pvm66j-gpu to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-4h5qg, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-4lfbm, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-pvm66j
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and 1 node" ran for 34m42s on Ginkgo node 3 of 3

... skipping 57 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:490

INFO: "with a single control plane node and 1 node" started at Sat, 27 Nov 2021 07:47:13 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-hsnvdl" for hosting the cluster
Nov 27 07:47:13.480: INFO: starting to create namespace for hosting the "capz-e2e-hsnvdl" test spec
2021/11/27 07:47:13 failed trying to get namespace (capz-e2e-hsnvdl):namespaces "capz-e2e-hsnvdl" not found
INFO: Creating namespace capz-e2e-hsnvdl
INFO: Creating event watcher for namespace "capz-e2e-hsnvdl"
Nov 27 07:47:13.508: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-hsnvdl-aks
INFO: Creating the workload cluster with name "capz-e2e-hsnvdl-aks" using the "aks-multi-tenancy" template (Kubernetes v1.19.13, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 7 lines ...
machinepool.cluster.x-k8s.io/agentpool1 created
azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1127 07:47:37.167805   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp: lookup capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 07:48:21.549756   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp: lookup capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 07:48:55.689105   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp: lookup capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 07:49:34.909390   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp: lookup capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 07:50:27.199090   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp: lookup capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
Nov 27 07:50:55.009: INFO: Waiting for the first control plane machine managed by capz-e2e-hsnvdl/capz-e2e-hsnvdl-aks to be provisioned
STEP: Waiting for atleast one control plane node to exist
E1127 07:51:08.401701   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp: lookup capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 07:51:56.506512   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp: lookup capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 07:52:52.503136   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp: lookup capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 07:53:34.248721   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp: lookup capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 07:54:13.169357   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp: lookup capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 07:54:47.601840   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp: lookup capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 07:55:32.101591   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp: lookup capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 07:56:27.881308   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp: lookup capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 07:57:01.179722   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp: lookup capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 07:57:54.824283   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp: lookup capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 07:58:40.977019   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp: lookup capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 07:59:40.271366   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp: lookup capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 08:00:28.143603   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp: lookup capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 08:01:15.726143   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp: lookup capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 08:01:56.285013   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp: lookup capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 08:02:33.233833   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp: lookup capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 08:03:27.516025   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp: lookup capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 08:04:00.552994   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp: lookup capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 08:04:40.294370   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp: lookup capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 08:05:27.106196   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp: lookup capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 08:06:08.775100   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp: lookup capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 08:06:49.776771   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp: lookup capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 08:07:23.698829   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp: lookup capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be ready
Nov 27 08:07:28.419: INFO: Waiting for the first control plane machine managed by capz-e2e-hsnvdl/capz-e2e-hsnvdl-aks to be provisioned
STEP: Waiting for all control plane nodes to exist
INFO: Waiting for the machine deployments to be provisioned
INFO: Waiting for the machine pools to be provisioned
STEP: Waiting for the machine pool workload nodes to exist
... skipping 12 lines ...
STEP: Dumping logs from the "capz-e2e-hsnvdl-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-hsnvdl/capz-e2e-hsnvdl-aks logs
Nov 27 08:07:37.308: INFO: INFO: Collecting logs for node aks-agentpool1-24854948-vmss000000 in cluster capz-e2e-hsnvdl-aks in namespace capz-e2e-hsnvdl

Nov 27 08:07:37.401: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool0, cluster capz-e2e-hsnvdl/capz-e2e-hsnvdl-aks: [dialing public load balancer at capz-e2e-hsnvdl-aks-0ea761da.hcp.westeurope.azmk8s.io: dial tcp: lookup capz-e2e-hsnvdl-aks-0ea761da.hcp.westeurope.azmk8s.io on 10.63.240.10:53: no such host, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
Nov 27 08:07:37.961: INFO: INFO: Collecting logs for node aks-agentpool1-24854948-vmss000000 in cluster capz-e2e-hsnvdl-aks in namespace capz-e2e-hsnvdl

Nov 27 08:07:38.055: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool1, cluster capz-e2e-hsnvdl/capz-e2e-hsnvdl-aks: [dialing public load balancer at capz-e2e-hsnvdl-aks-0ea761da.hcp.westeurope.azmk8s.io: dial tcp: lookup capz-e2e-hsnvdl-aks-0ea761da.hcp.westeurope.azmk8s.io on 10.63.240.10:53: no such host, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
STEP: Dumping workload cluster capz-e2e-hsnvdl/capz-e2e-hsnvdl-aks kube-system pod logs
STEP: Fetching kube-system pod logs took 809.298193ms
STEP: Dumping workload cluster capz-e2e-hsnvdl/capz-e2e-hsnvdl-aks Azure activity log
STEP: Creating log watcher for controller kube-system/calico-typha-deployment-76cb9744d8-swzvt, container calico-typha
STEP: Creating log watcher for controller kube-system/coredns-autoscaler-54d55c8b75-mtdh6, container autoscaler
STEP: Creating log watcher for controller kube-system/metrics-server-569f6547dd-9wzsh, container metrics-server
... skipping 8 lines ...
STEP: Fetching activity logs took 513.083095ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-hsnvdl" namespace
STEP: Deleting all clusters in the capz-e2e-hsnvdl namespace
STEP: Deleting cluster capz-e2e-hsnvdl-aks
INFO: Waiting for the Cluster capz-e2e-hsnvdl/capz-e2e-hsnvdl-aks to be deleted
STEP: Waiting for cluster capz-e2e-hsnvdl-aks to be deleted
E1127 08:08:21.515572   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp: lookup capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 08:09:10.989545   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp: lookup capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 08:10:08.299981   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp: lookup capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 08:10:46.893523   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp: lookup capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 08:11:19.062193   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp: lookup capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 08:11:49.194350   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp: lookup capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-hsnvdl
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1127 08:12:42.618974   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp: lookup capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 08:13:16.286891   24266 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-rt8srh/events?resourceVersion=11740": dial tcp: lookup capz-e2e-rt8srh-public-custom-vnet-14fc5635.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "with a single control plane node and 1 node" ran for 26m52s on Ginkgo node 1 of 3


• [SLOW TEST:1612.101 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 8 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:532

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Sat, 27 Nov 2021 07:52:30 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-2manm8" for hosting the cluster
Nov 27 07:52:30.992: INFO: starting to create namespace for hosting the "capz-e2e-2manm8" test spec
2021/11/27 07:52:30 failed trying to get namespace (capz-e2e-2manm8):namespaces "capz-e2e-2manm8" not found
INFO: Creating namespace capz-e2e-2manm8
INFO: Creating event watcher for namespace "capz-e2e-2manm8"
Nov 27 07:52:31.022: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-2manm8-win-ha
INFO: Creating the workload cluster with name "capz-e2e-2manm8-win-ha" using the "windows" template (Kubernetes v1.22.4, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 55 lines ...
STEP: waiting for job default/curl-to-elb-jobj2ovrclmdir to be complete
Nov 27 08:04:49.132: INFO: waiting for job default/curl-to-elb-jobj2ovrclmdir to be complete
Nov 27 08:04:59.358: INFO: job default/curl-to-elb-jobj2ovrclmdir is complete, took 10.226235132s
STEP: connecting directly to the external LB service
Nov 27 08:04:59.358: INFO: starting attempts to connect directly to the external LB service
2021/11/27 08:04:59 [DEBUG] GET http://51.124.94.156
2021/11/27 08:05:29 [ERR] GET http://51.124.94.156 request failed: Get "http://51.124.94.156": dial tcp 51.124.94.156:80: i/o timeout
2021/11/27 08:05:29 [DEBUG] GET http://51.124.94.156: retrying in 1s (4 left)
Nov 27 08:05:37.725: INFO: successfully connected to the external LB service
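
The [DEBUG]/[ERR] lines above match a retrying HTTP client: the first GET times out, the client retries with 4 attempts left, and a later attempt connects. A sketch with hashicorp/go-retryablehttp, whose default log format resembles these lines (an assumption; the suite's actual client may differ):

package e2e

import (
	"time"

	retryablehttp "github.com/hashicorp/go-retryablehttp"
)

// probeExternalLB keeps retrying the LB address until it answers,
// emitting "retrying in 1s (N left)"-style debug lines on failure.
func probeExternalLB(url string) error {
	c := retryablehttp.NewClient()
	c.RetryMax = 4                   // retry budget; the "(N left)" count derives from this
	c.RetryWaitMin = 1 * time.Second // first backoff matches "retrying in 1s"
	resp, err := c.Get(url)          // retries transient dial timeouts
	if err != nil {
		return err
	}
	return resp.Body.Close()
}
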
STEP: deleting the test resources
Nov 27 08:05:37.725: INFO: starting to delete external LB service webqb3fbt-elb
Nov 27 08:05:37.894: INFO: starting to delete deployment webqb3fbt
Nov 27 08:05:38.011: INFO: starting to delete job curl-to-elb-jobj2ovrclmdir
... skipping 85 lines ...
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-windows-amd64-97m9j, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-proxy-wgmjq, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-44d89, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-4cfv6, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-clxnf, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-2manm8-win-ha-control-plane-rhq6f, container kube-controller-manager
STEP: Got error while iterating over activity logs for resource group capz-e2e-2manm8-win-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000884071s
STEP: Dumping all the Cluster API resources in the "capz-e2e-2manm8" namespace
STEP: Deleting all clusters in the capz-e2e-2manm8 namespace
STEP: Deleting cluster capz-e2e-2manm8-win-ha
INFO: Waiting for the Cluster capz-e2e-2manm8/capz-e2e-2manm8-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-2manm8-win-ha to be deleted
... skipping 9 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating a Windows Enabled cluster with dockershim
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:530
    With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:532
------------------------------
{"component":"entrypoint","file":"prow/entrypoint/run.go:165","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","severity":"error","time":"2021-11-27T08:39:38Z"}
++ early_exit_handler
++ '[' -n 162 ']'
++ kill -TERM 162
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 12 lines ...
Cleaning up after docker
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Stopping Docker: dockerProgram process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.
All sensitive variables are redacted
{"component":"entrypoint","file":"prow/entrypoint/run.go:255","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15m0s grace period","severity":"error","time":"2021-11-27T08:54:38Z"}
{"component":"entrypoint","error":"os: process already finished","file":"prow/entrypoint/run.go:257","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2021-11-27T08:54:38Z"}