Result: FAILURE
Tests: 1 failed / 8 succeeded
Started: 2021-11-05 18:28
Elapsed: 1h47m
Revision: release-0.5

Test Failures


capz-e2e Workload cluster creation Creating an AKS cluster with a single control plane node and 1 node 32m56s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\san\sAKS\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454
Timed out after 1200.001s.
System machine pools not ready
Expected
    <bool>: false
to equal
    <bool>: true
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/aks.go:216
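The assertion at aks.go:216 is a Gomega Eventually poll over the AKS machine pools; the 1200s timeout and the false-vs-true diff above are the signature of that pattern. A minimal sketch of the kind of check involved, assuming Gomega and a controller-runtime client (names and import paths are illustrative, not the exact aks.go code):

    import (
        "context"
        "time"

        . "github.com/onsi/gomega"
        "sigs.k8s.io/controller-runtime/pkg/client"

        infrav1exp "sigs.k8s.io/cluster-api-provider-azure/exp/api/v1alpha4" // assumed API group for release-0.5
    )

    // waitForSystemMachinePools polls until every AzureManagedMachinePool reports Ready.
    // A 20-minute timeout matches the "Timed out after 1200.001s" seen above.
    func waitForSystemMachinePools(ctx context.Context, c client.Client, namespace string) {
        Eventually(func() bool {
            pools := &infrav1exp.AzureManagedMachinePoolList{}
            if err := c.List(ctx, pools, client.InNamespace(namespace)); err != nil {
                return false
            }
            for _, p := range pools.Items {
                if !p.Status.Ready {
                    return false
                }
            }
            return len(pools.Items) > 0
        }, 20*time.Minute, 10*time.Second).Should(Equal(true), "System machine pools not ready")
    }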
stdout/stderr from junit.e2e_suite.2.xml



8 passed tests

13 skipped tests

Error lines from build-log.txt

... skipping 434 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:269

INFO: "With ipv6 worker node" started at Fri, 05 Nov 2021 18:35:18 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-43h4ae" for hosting the cluster
Nov  5 18:35:18.924: INFO: starting to create namespace for hosting the "capz-e2e-43h4ae" test spec
2021/11/05 18:35:18 failed trying to get namespace (capz-e2e-43h4ae): namespaces "capz-e2e-43h4ae" not found
INFO: Creating namespace capz-e2e-43h4ae
INFO: Creating event watcher for namespace "capz-e2e-43h4ae"
Nov  5 18:35:18.991: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-43h4ae-ipv6
INFO: Creating the workload cluster with name "capz-e2e-43h4ae-ipv6" using the "ipv6" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
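The template step above maps onto the Cluster API test framework's clusterctl helper; a rough sketch of the call that renders the flavor into YAML, with values taken from the INFO line (variables such as managementKubeconfig and clusterctlConfig are assumed, and the exact call site in this suite may differ):

    import (
        "context"

        "k8s.io/utils/pointer"
        "sigs.k8s.io/cluster-api/test/framework/clusterctl"
    )

    func renderIPv6Template(ctx context.Context, managementKubeconfig, clusterctlConfig string) []byte {
        // Returns the rendered cluster manifest, which the test then applies
        // to the management cluster.
        return clusterctl.ConfigCluster(ctx, clusterctl.ConfigClusterInput{
            KubeconfigPath:           managementKubeconfig,
            ClusterctlConfigPath:     clusterctlConfig,
            Flavor:                   "ipv6", // selects the cluster-template-ipv6 flavor
            Namespace:                "capz-e2e-43h4ae",
            ClusterName:              "capz-e2e-43h4ae-ipv6",
            KubernetesVersion:        "v1.22.1",
            ControlPlaneMachineCount: pointer.Int64Ptr(3),
            WorkerMachineCount:       pointer.Int64Ptr(1),
        })
    }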
... skipping 93 lines ...
STEP: Fetching activity logs took 591.206362ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-43h4ae" namespace
STEP: Deleting all clusters in the capz-e2e-43h4ae namespace
STEP: Deleting cluster capz-e2e-43h4ae-ipv6
INFO: Waiting for the Cluster capz-e2e-43h4ae/capz-e2e-43h4ae-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-43h4ae-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-43h4ae-ipv6-control-plane-mj648, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-2whlm, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-43h4ae-ipv6-control-plane-dx8jl, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-43h4ae-ipv6-control-plane-xrzvm, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-43h4ae-ipv6-control-plane-mj648, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-pnlg7, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-z2nz9, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-pmzj9, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-q2c6q, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-vk5hg, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-43h4ae-ipv6-control-plane-mj648, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-43h4ae-ipv6-control-plane-mj648, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-43h4ae-ipv6-control-plane-dx8jl, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-43h4ae-ipv6-control-plane-dx8jl, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-43h4ae-ipv6-control-plane-xrzvm, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-43h4ae-ipv6-control-plane-xrzvm, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-2nd6c, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-nh9b8, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-6x5ln, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-flwsm, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-43h4ae-ipv6-control-plane-dx8jl, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-wgxbk, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-43h4ae-ipv6-control-plane-xrzvm, container kube-apiserver: http2: client connection lost
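The burst of "http2: client connection lost" lines above is expected here: each log watcher holds a follow-mode stream against the workload cluster's API server, and deleting the cluster drops those HTTP/2 connections mid-stream. A minimal sketch of what such a watcher does, assuming a client-go Clientset for the workload cluster:

    import (
        "context"
        "io"
        "os"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
    )

    func streamContainerLogs(ctx context.Context, cs kubernetes.Interface, namespace, pod, container string) error {
        req := cs.CoreV1().Pods(namespace).GetLogs(pod, &corev1.PodLogOptions{
            Container: container,
            Follow:    true, // stream until the pod, or the whole cluster, goes away
        })
        stream, err := req.Stream(ctx)
        if err != nil {
            return err
        }
        defer stream.Close()
        // io.Copy returns "http2: client connection lost" once teardown
        // severs the connection to the API server.
        _, err = io.Copy(os.Stdout, stream)
        return err
    }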
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-43h4ae
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 21m20s on Ginkgo node 3 of 3

... skipping 10 lines ...
  with a single control plane node and an AzureMachinePool with 2 nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:315

INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" started at Fri, 05 Nov 2021 18:56:39 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-y64te1" for hosting the cluster
Nov  5 18:56:39.110: INFO: starting to create namespace for hosting the "capz-e2e-y64te1" test spec
2021/11/05 18:56:39 failed trying to get namespace (capz-e2e-y64te1): namespaces "capz-e2e-y64te1" not found
INFO: Creating namespace capz-e2e-y64te1
INFO: Creating event watcher for namespace "capz-e2e-y64te1"
Nov  5 18:56:39.149: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-y64te1-vmss
INFO: Creating the workload cluster with name "capz-e2e-y64te1-vmss" using the "machine-pool" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 106 lines ...
STEP: Fetching activity logs took 584.211568ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-y64te1" namespace
STEP: Deleting all clusters in the capz-e2e-y64te1 namespace
STEP: Deleting cluster capz-e2e-y64te1-vmss
INFO: Waiting for the Cluster capz-e2e-y64te1/capz-e2e-y64te1-vmss to be deleted
STEP: Waiting for cluster capz-e2e-y64te1-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-y64te1-vmss-control-plane-r88f2, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-plsfz, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-6vhwv, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-y64te1-vmss-control-plane-r88f2, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-y64te1-vmss-control-plane-r88f2, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-4qjg7, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-svbzm, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-7ssw9, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-5p5br, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-582ht, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-pmnt5, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-y64te1-vmss-control-plane-r88f2, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-29qf4, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-y64te1
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" ran for 18m42s on Ginkgo node 3 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:203

INFO: "With 3 control-plane nodes and 2 worker nodes" started at Fri, 05 Nov 2021 18:35:18 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-twtssf" for hosting the cluster
Nov  5 18:35:18.923: INFO: starting to create namespace for hosting the "capz-e2e-twtssf" test spec
2021/11/05 18:35:18 failed trying to get namespace (capz-e2e-twtssf): namespaces "capz-e2e-twtssf" not found
INFO: Creating namespace capz-e2e-twtssf
INFO: Creating event watcher for namespace "capz-e2e-twtssf"
Nov  5 18:35:18.980: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-twtssf-ha
INFO: Creating the workload cluster with name "capz-e2e-twtssf-ha" using the "(default)" template (Kubernetes v1.22.1, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 67 lines ...
Nov  5 18:47:17.351: INFO: starting to delete external LB service webwkln7q-elb
Nov  5 18:47:17.504: INFO: starting to delete deployment webwkln7q
Nov  5 18:47:17.620: INFO: starting to delete job curl-to-elb-jobx62qt7pxjlg
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Nov  5 18:47:17.771: INFO: starting to create dev deployment namespace
2021/11/05 18:47:17 failed trying to get namespace (development): namespaces "development" not found
2021/11/05 18:47:17 namespace development does not exist, creating...
STEP: Creating production namespace
Nov  5 18:47:17.997: INFO: starting to create prod deployment namespace
2021/11/05 18:47:18 failed trying to get namespace (production): namespaces "production" not found
2021/11/05 18:47:18 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
Nov  5 18:47:18.222: INFO: starting to create frontend-prod deployments
Nov  5 18:47:18.335: INFO: starting to create frontend-dev deployments
Nov  5 18:47:18.449: INFO: starting to create backend deployments
Nov  5 18:47:18.561: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Nov  5 18:47:45.837: INFO: starting to apply a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.121.194 port 80: Connection timed out

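The timeout above is the expected outcome: the test has just applied a deny-ingress policy, so the curl probe against the backend pod must fail. A hypothetical equivalent of that policy built with client-go (the suite's actual manifest may differ); selecting the backend pods and listing Ingress as a policy type with no allow rules blocks all inbound traffic:

    import (
        "context"

        networkingv1 "k8s.io/api/networking/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func applyBackendDenyIngress(ctx context.Context, cs kubernetes.Interface) error {
        policy := &networkingv1.NetworkPolicy{
            ObjectMeta: metav1.ObjectMeta{Name: "backend-deny-ingress", Namespace: "development"},
            Spec: networkingv1.NetworkPolicySpec{
                // Match the app: webapp, role: backend pods.
                PodSelector: metav1.LabelSelector{
                    MatchLabels: map[string]string{"app": "webapp", "role": "backend"},
                },
                // Ingress declared with no rules: all inbound traffic is denied.
                PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
            },
        }
        _, err := cs.NetworkingV1().NetworkPolicies("development").Create(ctx, policy, metav1.CreateOptions{})
        return err
    }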
STEP: Cleaning up after ourselves
Nov  5 18:49:57.259: INFO: starting to clean up network policy development/backend-deny-ingress after ourselves
STEP: Applying a network policy to deny egress access in development namespace
Nov  5 18:49:57.669: INFO: starting to apply a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.121.194 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.121.194 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov  5 18:54:19.401: INFO: starting to clean up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Nov  5 18:54:19.860: INFO: starting to apply a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.62.133 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov  5 18:56:32.469: INFO: starting to clean up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
Nov  5 18:56:32.862: INFO: starting to apply a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in the same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.62.131 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.62.133 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov  5 19:00:56.659: INFO: starting to clean up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Nov  5 19:00:57.052: INFO: starting to apply a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.121.194 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov  5 19:03:09.838: INFO: starting to clean up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
Nov  5 19:03:11.262: INFO: starting to apply a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.121.194 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: Dumping logs from the "capz-e2e-twtssf-ha" workload cluster
STEP: Dumping workload cluster capz-e2e-twtssf/capz-e2e-twtssf-ha logs
Nov  5 19:05:23.811: INFO: INFO: Collecting logs for node capz-e2e-twtssf-ha-control-plane-j9dwm in cluster capz-e2e-twtssf-ha in namespace capz-e2e-twtssf

Nov  5 19:05:36.219: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-twtssf-ha-control-plane-j9dwm
... skipping 39 lines ...
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-twtssf-ha-control-plane-j9dwm, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-twtssf-ha-control-plane-m98k4, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-jg8tt, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-kp5gl, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-f5zzn, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-xgrhs, container kube-proxy
STEP: Got error while iterating over activity logs for resource group capz-e2e-twtssf-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001103691s
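The listNextResults failure above comes from paging Azure activity logs under a deadline: the first page returned, but a follow-up page request exceeded the collector's context timeout, hence the ~30s fetch. A rough sketch of the paging loop, assuming the track-1 Azure SDK insights client the error message names (API version and filter here are illustrative):

    import (
        "context"
        "fmt"
        "time"

        "github.com/Azure/azure-sdk-for-go/services/preview/monitor/mgmt/2019-06-01/insights"
    )

    func dumpActivityLogs(client insights.ActivityLogsClient, resourceGroup string) error {
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()
        filter := fmt.Sprintf("eventTimestamp ge '%s' and resourceGroupName eq '%s'",
            time.Now().Add(-2*time.Hour).Format(time.RFC3339), resourceGroup)
        page, err := client.List(ctx, filter, "")
        if err != nil {
            return err
        }
        for page.NotDone() {
            for _, event := range page.Values() {
                _ = event // persist the event to the artifacts folder
            }
            // NextWithContext issues the listNextResults call that can hit
            // "context deadline exceeded" once the 30s budget is spent.
            if err := page.NextWithContext(ctx); err != nil {
                return err
            }
        }
        return nil
    }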
STEP: Dumping all the Cluster API resources in the "capz-e2e-twtssf" namespace
STEP: Deleting all clusters in the capz-e2e-twtssf namespace
STEP: Deleting cluster capz-e2e-twtssf-ha
INFO: Waiting for the Cluster capz-e2e-twtssf/capz-e2e-twtssf-ha to be deleted
STEP: Waiting for cluster capz-e2e-twtssf-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-rq8tk, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-zxr85, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-twtssf-ha-control-plane-j9dwm, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-wzg7d, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-twtssf-ha-control-plane-nmjjh, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-qtmsd, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-twtssf-ha-control-plane-nmjjh, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-pgxrj, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-sfmgl, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-twtssf-ha-control-plane-nmjjh, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-twtssf-ha-control-plane-j9dwm, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-twtssf-ha-control-plane-m98k4, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-jg8tt, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-twtssf-ha-control-plane-nmjjh, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-f5zzn, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-6xdc9, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-twtssf-ha-control-plane-j9dwm, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-twtssf-ha-control-plane-j9dwm, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-twtssf-ha-control-plane-m98k4, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-twtssf-ha-control-plane-m98k4, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-twtssf-ha-control-plane-m98k4, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-xgrhs, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-772j5, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-twtssf
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 worker nodes" ran for 44m44s on Ginkgo node 1 of 3

... skipping 8 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:141

INFO: "Creates a public management cluster in the same vnet" started at Fri, 05 Nov 2021 18:35:18 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-1c9h4r" for hosting the cluster
Nov  5 18:35:18.922: INFO: starting to create namespace for hosting the "capz-e2e-1c9h4r" test spec
2021/11/05 18:35:18 failed trying to get namespace (capz-e2e-1c9h4r): namespaces "capz-e2e-1c9h4r" not found
INFO: Creating namespace capz-e2e-1c9h4r
INFO: Creating event watcher for namespace "capz-e2e-1c9h4r"
Nov  5 18:35:19.003: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-1c9h4r-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-prv78, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-wdbgp, container calico-node
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-1c9h4r-public-custom-vnet-control-plane-j6sgf, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-1c9h4r-public-custom-vnet-control-plane-j6sgf, container kube-scheduler
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-grnv7, container coredns
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-zsthj, container coredns
STEP: Got error while iterating over activity logs for resource group capz-e2e-1c9h4r-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000231667s
STEP: Dumping all the Cluster API resources in the "capz-e2e-1c9h4r" namespace
STEP: Deleting all clusters in the capz-e2e-1c9h4r namespace
STEP: Deleting cluster capz-e2e-1c9h4r-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-1c9h4r/capz-e2e-1c9h4r-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-1c9h4r-public-custom-vnet to be deleted
W1105 19:23:42.537609   24194 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I1105 19:24:13.835172   24194 trace.go:205] Trace[2092038626]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (05-Nov-2021 19:23:43.834) (total time: 30000ms):
Trace[2092038626]: [30.000780273s] [30.000780273s] END
E1105 19:24:13.835243   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp 40.114.170.236:6443: i/o timeout
I1105 19:24:45.752838   24194 trace.go:205] Trace[1615822619]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (05-Nov-2021 19:24:15.751) (total time: 30000ms):
Trace[1615822619]: [30.000811899s] [30.000811899s] END
E1105 19:24:45.752892   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp 40.114.170.236:6443: i/o timeout
I1105 19:25:20.037144   24194 trace.go:205] Trace[793199656]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (05-Nov-2021 19:24:50.036) (total time: 30000ms):
Trace[793199656]: [30.000665406s] [30.000665406s] END
E1105 19:25:20.037200   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp 40.114.170.236:6443: i/o timeout
I1105 19:25:58.551735   24194 trace.go:205] Trace[1122055893]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (05-Nov-2021 19:25:28.551) (total time: 30000ms):
Trace[1122055893]: [30.000634941s] [30.000634941s] END
E1105 19:25:58.551798   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp 40.114.170.236:6443: i/o timeout
I1105 19:26:51.145953   24194 trace.go:205] Trace[809209622]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (05-Nov-2021 19:26:21.144) (total time: 30000ms):
Trace[809209622]: [30.000954063s] [30.000954063s] END
E1105 19:26:51.146003   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp 40.114.170.236:6443: i/o timeout
I1105 19:27:51.598936   24194 trace.go:205] Trace[844487515]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (05-Nov-2021 19:27:21.597) (total time: 30001ms):
Trace[844487515]: [30.0017276s] [30.0017276s] END
E1105 19:27:51.599011   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp 40.114.170.236:6443: i/o timeout
I1105 19:29:04.912299   24194 trace.go:205] Trace[222829954]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (05-Nov-2021 19:28:34.910) (total time: 30001ms):
Trace[222829954]: [30.001558633s] [30.001558633s] END
E1105 19:29:04.912361   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp 40.114.170.236:6443: i/o timeout
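These reflector errors all come from the event watcher created for the namespace at the start of the spec: its list/watch loop keeps retrying against the workload cluster endpoint, and after teardown every attempt fails, first with i/o timeouts and later with "no such host" once the DNS record is gone. A minimal sketch of such a watcher, assuming a client-go Clientset pointed at the workload cluster:

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/fields"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
    )

    func watchNamespaceEvents(cs kubernetes.Interface, namespace string, stopCh <-chan struct{}) {
        lw := cache.NewListWatchFromClient(cs.CoreV1().RESTClient(), "events", namespace, fields.Everything())
        _, controller := cache.NewInformer(lw, &corev1.Event{}, 0, cache.ResourceEventHandlerFuncs{
            AddFunc: func(obj interface{}) {
                // record the event in the test artifacts
            },
        })
        // The reflector inside Run retries list/watch indefinitely; once the
        // cluster is deleted, each retry logs the errors seen above.
        go controller.Run(stopCh)
    }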
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-1c9h4r
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Nov  5 19:29:30.102: INFO: deleting an existing virtual network "custom-vnet"
Nov  5 19:29:41.120: INFO: deleting an existing route table "node-routetable"
Nov  5 19:29:51.721: INFO: deleting an existing network security group "node-nsg"
E1105 19:30:02.283612   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp: lookup capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov  5 19:30:02.288: INFO: deleting an existing network security group "control-plane-nsg"
Nov  5 19:30:13.162: INFO: verifying the existing resource group "capz-e2e-1c9h4r-public-custom-vnet" is empty
Nov  5 19:30:14.446: INFO: deleting the existing resource group "capz-e2e-1c9h4r-public-custom-vnet"
E1105 19:30:47.205513   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp: lookup capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1105 19:31:43.916777   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp: lookup capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1105 19:32:40.459013   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp: lookup capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 57m31s on Ginkgo node 2 of 3


• [SLOW TEST:3450.710 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a 1 control plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:419

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Fri, 05 Nov 2021 19:20:03 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-hq0mum" for hosting the cluster
Nov  5 19:20:03.234: INFO: starting to create namespace for hosting the "capz-e2e-hq0mum" test spec
2021/11/05 19:20:03 failed trying to get namespace (capz-e2e-hq0mum): namespaces "capz-e2e-hq0mum" not found
INFO: Creating namespace capz-e2e-hq0mum
INFO: Creating event watcher for namespace "capz-e2e-hq0mum"
Nov  5 19:20:03.268: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-hq0mum-oot
INFO: Creating the workload cluster with name "capz-e2e-hq0mum-oot" using the "external-cloud-provider" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 98 lines ...
STEP: Fetching activity logs took 532.510557ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-hq0mum" namespace
STEP: Deleting all clusters in the capz-e2e-hq0mum namespace
STEP: Deleting cluster capz-e2e-hq0mum-oot
INFO: Waiting for the Cluster capz-e2e-hq0mum/capz-e2e-hq0mum-oot to be deleted
STEP: Waiting for cluster capz-e2e-hq0mum-oot to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-hq0mum-oot-control-plane-6zp9k, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-fzjlb, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-hq0mum-oot-control-plane-6zp9k, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-59rl4, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-xzggp, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-s2b58, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-9g8d4, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-h796q, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-controller-manager, container cloud-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-hq0mum-oot-control-plane-6zp9k, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-hq0mum-oot-control-plane-6zp9k, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-hq0mum
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 21m2s on Ginkgo node 1 of 3

... skipping 10 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:377

INFO: "with a single control plane node and 1 node" started at Fri, 05 Nov 2021 19:15:21 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-qlfl17" for hosting the cluster
Nov  5 19:15:21.370: INFO: starting to create namespace for hosting the "capz-e2e-qlfl17" test spec
2021/11/05 19:15:21 failed trying to get namespace (capz-e2e-qlfl17): namespaces "capz-e2e-qlfl17" not found
INFO: Creating namespace capz-e2e-qlfl17
INFO: Creating event watcher for namespace "capz-e2e-qlfl17"
Nov  5 19:15:21.399: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-qlfl17-gpu
INFO: Creating the workload cluster with name "capz-e2e-qlfl17-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 58 lines ...
STEP: Fetching activity logs took 809.106978ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-qlfl17" namespace
STEP: Deleting all clusters in the capz-e2e-qlfl17 namespace
STEP: Deleting cluster capz-e2e-qlfl17-gpu
INFO: Waiting for the Cluster capz-e2e-qlfl17/capz-e2e-qlfl17-gpu to be deleted
STEP: Waiting for cluster capz-e2e-qlfl17-gpu to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-m2vds, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-64v5d, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-qlfl17
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and 1 node" ran for 27m6s on Ginkgo node 3 of 3

... skipping 10 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454

INFO: "with a single control plane node and 1 node" started at Fri, 05 Nov 2021 19:32:49 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-ry0khd" for hosting the cluster
Nov  5 19:32:49.636: INFO: starting to create namespace for hosting the "capz-e2e-ry0khd" test spec
2021/11/05 19:32:49 failed trying to get namespace (capz-e2e-ry0khd): namespaces "capz-e2e-ry0khd" not found
INFO: Creating namespace capz-e2e-ry0khd
INFO: Creating event watcher for namespace "capz-e2e-ry0khd"
Nov  5 19:32:49.665: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-ry0khd-aks
INFO: Creating the workload cluster with name "capz-e2e-ry0khd-aks" using the "aks-multi-tenancy" template (Kubernetes v1.19.13, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 7 lines ...
machinepool.cluster.x-k8s.io/agentpool1 created
azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1105 19:33:40.132244   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp: lookup capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1105 19:34:23.247206   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp: lookup capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1105 19:34:55.494461   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp: lookup capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1105 19:35:27.992071   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp: lookup capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1105 19:36:09.656696   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp: lookup capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1105 19:36:51.765275   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp: lookup capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
Nov  5 19:37:31.229: INFO: Waiting for the first control plane machine managed by capz-e2e-ry0khd/capz-e2e-ry0khd-aks to be provisioned
STEP: Waiting for at least one control plane node to exist
E1105 19:37:45.588056   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp: lookup capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1105 19:38:36.731724   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp: lookup capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1105 19:39:31.892274   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp: lookup capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1105 19:40:13.995259   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp: lookup capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1105 19:40:59.678347   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp: lookup capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1105 19:41:35.343263   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp: lookup capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1105 19:42:30.986892   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp: lookup capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1105 19:43:11.745839   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp: lookup capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1105 19:43:56.171049   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp: lookup capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1105 19:44:38.557277   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp: lookup capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1105 19:45:14.516794   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp: lookup capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1105 19:46:01.692207   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp: lookup capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1105 19:46:39.189653   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp: lookup capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1105 19:47:28.731207   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp: lookup capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1105 19:47:59.825460   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp: lookup capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1105 19:48:44.874020   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp: lookup capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1105 19:49:19.245276   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp: lookup capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1105 19:49:57.120121   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp: lookup capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1105 19:50:50.882745   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp: lookup capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1105 19:51:32.471183   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp: lookup capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1105 19:52:28.068454   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp: lookup capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1105 19:53:13.491664   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp: lookup capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1105 19:53:49.342376   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp: lookup capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1105 19:54:32.793742   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp: lookup capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1105 19:55:20.578321   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp: lookup capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1105 19:56:15.104876   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp: lookup capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1105 19:56:46.826880   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp: lookup capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Dumping logs from the "capz-e2e-ry0khd-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-ry0khd/capz-e2e-ry0khd-aks logs
STEP: Dumping workload cluster capz-e2e-ry0khd/capz-e2e-ry0khd-aks kube-system pod logs
STEP: Fetching kube-system pod logs took 1.142165422s
STEP: Dumping workload cluster capz-e2e-ry0khd/capz-e2e-ry0khd-aks Azure activity log
STEP: Creating log watcher for controller kube-system/calico-typha-deployment-76cb9744d8-h829s, container calico-typha
... skipping 10 lines ...
STEP: Fetching activity logs took 772.682262ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-ry0khd" namespace
STEP: Deleting all clusters in the capz-e2e-ry0khd namespace
STEP: Deleting cluster capz-e2e-ry0khd-aks
INFO: Waiting for the Cluster capz-e2e-ry0khd/capz-e2e-ry0khd-aks to be deleted
STEP: Waiting for cluster capz-e2e-ry0khd-aks to be deleted
E1105 19:57:34.806586   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp: lookup capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 8 repeated reflector errors (19:58:23 to 20:03:50, same "no such host" lookup) ...
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-ry0khd
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1105 20:04:49.699204   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp: lookup capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1105 20:05:44.688969   24194 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1c9h4r/events?resourceVersion=8889": dial tcp: lookup capz-e2e-1c9h4r-public-custom-vnet-fbcd36a9.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "with a single control plane node and 1 node" ran for 32m56s on Ginkgo node 2 of 3


• Failure [1976.452 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 59 lines ...
  with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543

INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Fri, 05 Nov 2021 19:42:26 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-m34qv9" for hosting the cluster
Nov  5 19:42:26.984: INFO: starting to create namespace for hosting the "capz-e2e-m34qv9" test spec
2021/11/05 19:42:26 failed trying to get namespace (capz-e2e-m34qv9):namespaces "capz-e2e-m34qv9" not found
INFO: Creating namespace capz-e2e-m34qv9
INFO: Creating event watcher for namespace "capz-e2e-m34qv9"
Nov  5 19:42:27.025: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-m34qv9-win-vmss
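The "%!(EXTRA string=cluster-identity-secret)" fragment fused onto the INFO line above is Go's fmt package reporting a format call that received more arguments than its format string has verbs. A minimal reproduction of the pattern (the format string is an assumption about the test code, not a quote from it):

package main

import "fmt"

func main() {
	// Zero verbs but one extra argument: fmt prints the string, then
	// appends %!(EXTRA string=cluster-identity-secret) after the
	// newline, so the marker ends up fused onto the start of whatever
	// the logger prints next, as seen in the log above.
	fmt.Printf("Creating cluster identity secret\n", "cluster-identity-secret")
}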
INFO: Creating the workload cluster with name "capz-e2e-m34qv9-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
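The template step above renders a clusterctl flavor into concrete cluster YAML. A minimal sketch of that call via the Cluster API e2e framework, with the flavor, counts, and names taken from the INFO line above; the paths, the provider name, and the exact input fields are assumptions that vary by framework version:

package e2esketch

import (
	"context"

	"k8s.io/utils/pointer"
	"sigs.k8s.io/cluster-api/test/framework/clusterctl"
)

func templateYAML(ctx context.Context) []byte {
	return clusterctl.ConfigCluster(ctx, clusterctl.ConfigClusterInput{
		KubeconfigPath:           "/path/to/mgmt.kubeconfig", // illustrative
		ClusterctlConfigPath:     "/path/to/clusterctl.yaml", // illustrative
		InfrastructureProvider:   "azure",                    // illustrative
		Flavor:                   "machine-pool-windows",
		Namespace:                "capz-e2e-m34qv9",
		ClusterName:              "capz-e2e-m34qv9-win-vmss",
		KubernetesVersion:        "v1.22.1",
		ControlPlaneMachineCount: pointer.Int64Ptr(1),
		WorkerMachineCount:       pointer.Int64Ptr(1),
	})
}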
... skipping 89 lines ...
STEP: waiting for job default/curl-to-elb-jobeljp093uxol to be complete
Nov  5 19:59:38.749: INFO: waiting for job default/curl-to-elb-jobeljp093uxol to be complete
Nov  5 19:59:48.968: INFO: job default/curl-to-elb-jobeljp093uxol is complete, took 10.219029476s
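The "waiting for job ... to be complete" step polls the curl Job until it reports a JobComplete condition; the timestamps above show roughly 10-second granularity between attempts. A minimal client-go sketch of that wait (the helper name, interval, and timeout are illustrative, not the suite's values):

package e2esketch

import (
	"context"
	"time"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForJobComplete(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
	// Poll the Job every 10s until it carries a JobComplete condition.
	return wait.PollImmediate(10*time.Second, 5*time.Minute, func() (bool, error) {
		job, err := cs.BatchV1().Jobs(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range job.Status.Conditions {
			if c.Type == batchv1.JobComplete && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
}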
STEP: connecting directly to the external LB service
Nov  5 19:59:48.968: INFO: starting attempts to connect directly to the external LB service
2021/11/05 19:59:48 [DEBUG] GET http://20.86.237.127
2021/11/05 20:00:18 [ERR] GET http://20.86.237.127 request failed: Get "http://20.86.237.127": dial tcp 20.86.237.127:80: i/o timeout
2021/11/05 20:00:18 [DEBUG] GET http://20.86.237.127: retrying in 1s (4 left)
Nov  5 20:00:20.186: INFO: successfully connected to the external LB service
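The [DEBUG]/[ERR] lines above follow the log format of hashicorp/go-retryablehttp, which the suite appears to use for its LB probes: the first attempt times out while the load balancer warms up, and a retry succeeds. A minimal sketch of an equivalent probe under that assumption (the retry budget is inferred from "(4 left)"):

package main

import (
	"log"

	retryablehttp "github.com/hashicorp/go-retryablehttp"
)

func main() {
	client := retryablehttp.NewClient()
	client.RetryMax = 4 // 4 retries after the first attempt, matching "(4 left)" above
	// The default backoff starts at 1s, matching "retrying in 1s" above.
	resp, err := client.Get("http://20.86.237.127") // LB IP from the log
	if err != nil {
		log.Fatalf("external LB never became reachable: %v", err)
	}
	defer resp.Body.Close()
	log.Printf("connected to the external LB service: %s", resp.Status)
}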
STEP: deleting the test resources
Nov  5 20:00:20.186: INFO: starting to delete external LB service web-windows1s4mnh-elb
Nov  5 20:00:20.318: INFO: starting to delete deployment web-windows1s4mnh
Nov  5 20:00:20.427: INFO: starting to delete job curl-to-elb-jobeljp093uxol
... skipping 23 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-xm994, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-m34qv9-win-vmss-control-plane-d587f, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-windows-amd64-rjjgz, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-m34qv9-win-vmss-control-plane-d587f, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-m34qv9-win-vmss-control-plane-d587f, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-r5wr4, container kube-proxy
STEP: Got error while iterating over activity logs for resource group capz-e2e-m34qv9-win-vmss: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000282517s
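The 30-second figure above is a context deadline on the activity-log pagination: when the deadline expires mid-walk, the next page fetch surfaces as the listNextResults failure logged just before it. A sketch of the pattern; the SDK version in the import path, the filter string, and the helper name are all assumptions:

package e2esketch

import (
	"context"
	"time"

	"github.com/Azure/azure-sdk-for-go/services/preview/monitor/mgmt/2019-06-01/insights"
)

func dumpActivityLogs(client insights.ActivityLogsClient, group string) error {
	// Bound the whole pagination walk to 30s; when the deadline passes
	// mid-iteration, the next page request fails with
	// "context deadline exceeded", as logged above.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	filter := "eventTimestamp ge '2021-11-05T18:00:00Z' and resourceGroupName eq '" + group + "'"
	page, err := client.List(ctx, filter, "")
	if err != nil {
		return err
	}
	for page.NotDone() {
		// ...write page.Values() to the artifacts directory...
		if err := page.NextWithContext(ctx); err != nil {
			return err
		}
	}
	return nil
}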
STEP: Dumping all the Cluster API resources in the "capz-e2e-m34qv9" namespace
STEP: Deleting all clusters in the capz-e2e-m34qv9 namespace
STEP: Deleting cluster capz-e2e-m34qv9-win-vmss
INFO: Waiting for the Cluster capz-e2e-m34qv9/capz-e2e-m34qv9-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-m34qv9-win-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-rjjgz, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-mhvqz, container kube-proxy: http2: client connection lost
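The "Got error while streaming logs" lines appear whenever a follow-mode log stream outlives the workload cluster's API server: tearing the cluster down severs the HTTP/2 connection mid-stream. A minimal client-go sketch of such a log watcher (the function name is illustrative):

package e2esketch

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

func streamPodLogs(ctx context.Context, cs *kubernetes.Clientset, ns, pod, container string) error {
	req := cs.CoreV1().Pods(ns).GetLogs(pod, &corev1.PodLogOptions{
		Container: container,
		Follow:    true, // keeps the HTTP/2 stream open indefinitely...
	})
	rc, err := req.Stream(ctx)
	if err != nil {
		return err
	}
	defer rc.Close()
	// ...so when the cluster is deleted mid-stream, the copy ends with
	// "http2: client connection lost", as seen above.
	_, err = io.Copy(os.Stdout, rc)
	return err
}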
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-m34qv9
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 31m1s on Ginkgo node 3 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Fri, 05 Nov 2021 19:41:04 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-q3ixc3" for hosting the cluster
Nov  5 19:41:04.814: INFO: starting to create namespace for hosting the "capz-e2e-q3ixc3" test spec
2021/11/05 19:41:04 failed trying to get namespace (capz-e2e-q3ixc3):namespaces "capz-e2e-q3ixc3" not found
INFO: Creating namespace capz-e2e-q3ixc3
INFO: Creating event watcher for namespace "capz-e2e-q3ixc3"
Nov  5 19:41:04.845: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-q3ixc3-win-ha
INFO: Creating the workload cluster with name "capz-e2e-q3ixc3-win-ha" using the "windows" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 91 lines ...
STEP: waiting for job default/curl-to-elb-jobllwp7hvxqvp to be complete
Nov  5 19:55:04.726: INFO: waiting for job default/curl-to-elb-jobllwp7hvxqvp to be complete
Nov  5 19:55:14.950: INFO: job default/curl-to-elb-jobllwp7hvxqvp is complete, took 10.224640351s
STEP: connecting directly to the external LB service
Nov  5 19:55:14.951: INFO: starting attempts to connect directly to the external LB service
2021/11/05 19:55:14 [DEBUG] GET http://20.86.232.41
2021/11/05 19:55:44 [ERR] GET http://20.86.232.41 request failed: Get "http://20.86.232.41": dial tcp 20.86.232.41:80: i/o timeout
2021/11/05 19:55:44 [DEBUG] GET http://20.86.232.41: retrying in 1s (4 left)
Nov  5 19:55:46.170: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov  5 19:55:46.170: INFO: starting to delete external LB service web-windowsdie6v6-elb
Nov  5 19:55:46.371: INFO: starting to delete deployment web-windowsdie6v6
Nov  5 19:55:46.487: INFO: starting to delete job curl-to-elb-jobllwp7hvxqvp
... skipping 43 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-qm8nw, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-q3ixc3-win-ha-control-plane-mbgrb, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-6m8xh, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-q3ixc3-win-ha-control-plane-xlj79, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-2lql4, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-q3ixc3-win-ha-control-plane-xlj79, container kube-scheduler
STEP: Got error while iterating over activity logs for resource group capz-e2e-q3ixc3-win-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000854948s
STEP: Dumping all the Cluster API resources in the "capz-e2e-q3ixc3" namespace
STEP: Deleting all clusters in the capz-e2e-q3ixc3 namespace
STEP: Deleting cluster capz-e2e-q3ixc3-win-ha
INFO: Waiting for the Cluster capz-e2e-q3ixc3/capz-e2e-q3ixc3-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-q3ixc3-win-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-cvvrh, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-q3ixc3-win-ha-control-plane-mbgrb, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-q3ixc3-win-ha-control-plane-xlj79, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-g8gs7, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-q3ixc3-win-ha-control-plane-xlj79, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-q3ixc3-win-ha-control-plane-xlj79, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-q3ixc3-win-ha-control-plane-xlj79, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-q3ixc3-win-ha-control-plane-mbgrb, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-q3ixc3-win-ha-control-plane-mbgrb, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-q3ixc3-win-ha-control-plane-mbgrb, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-nh8kr, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-ldhp7, container kube-flannel: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-q3ixc3
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 33m22s on Ginkgo node 1 of 3

... skipping 9 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Workload cluster creation Creating an AKS cluster [It] with a single control plane node and 1 node 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/aks.go:216
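The failure at aks.go:216 has the shape of a Gomega eventual-consistency check that never became true within its window. A minimal sketch of that pattern; the callback, timeout, and polling interval here are illustrative, not the suite's actual values:

package e2esketch

import (
	"time"

	. "github.com/onsi/gomega"
)

func waitForMachinePools(ready func() bool) {
	// Polls ready() until it returns true or the timeout elapses; on
	// timeout Gomega fails the spec with a "Timed out after ...s"
	// message and an Expected/to-equal diff of the last value.
	Eventually(ready, 20*time.Minute, 10*time.Second).Should(BeTrue())
}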

Ran 9 of 22 Specs in 6062.238 seconds
FAIL! -- 8 Passed | 1 Failed | 0 Pending | 13 Skipped


Ginkgo ran 1 suite in 1h42m21.603664225s
Test Suite Failed
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...