Result: FAILURE
Tests: 1 failed / 8 succeeded
Started: 2021-11-24 18:35
Elapsed: 1h45m
Revision: release-0.5

Test Failures


capz-e2e Workload cluster creation Creating an AKS cluster with a single control plane node and 1 node (30m19s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\san\sAKS\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454
Timed out after 1200.000s.
System machine pools not ready
Expected
    <bool>: false
to equal
    <bool>: true
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/aks.go:216
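The failure itself is a Gomega poll timing out: the assertion at aks.go:216 waits up to 1200s (20 minutes) for the AKS system machine pools to report ready, and the Expected <bool>: false / to equal <bool>: true pair is what Gomega prints when the final poll still returns false. A minimal sketch of the pattern, assuming a hypothetical readiness helper (the real check inspects MachinePool status through the management cluster's API):

package e2e

import (
	"time"

	. "github.com/onsi/gomega"
)

// systemMachinePoolsReady is a hypothetical stand-in for the check at
// aks.go:216; it should return true once every system machine pool is ready.
func systemMachinePoolsReady() bool {
	return false // placeholder
}

// waitForSystemMachinePools polls for up to 20 minutes; on timeout Gomega
// reports "Timed out after 1200.000s" together with the "System machine
// pools not ready" description, matching the failure output above.
func waitForSystemMachinePools() {
	Eventually(systemMachinePoolsReady, 20*time.Minute, 10*time.Second).
		Should(Equal(true), "System machine pools not ready")
}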
				
stdout/stderr from junit.e2e_suite.1.xml



8 Passed Tests

13 Skipped Tests

Error lines from build-log.txt

... skipping 440 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:269

INFO: "With ipv6 worker node" started at Wed, 24 Nov 2021 18:42:26 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-q4hrdl" for hosting the cluster
Nov 24 18:42:26.165: INFO: starting to create namespace for hosting the "capz-e2e-q4hrdl" test spec
2021/11/24 18:42:26 failed trying to get namespace (capz-e2e-q4hrdl): namespaces "capz-e2e-q4hrdl" not found
INFO: Creating namespace capz-e2e-q4hrdl
INFO: Creating event watcher for namespace "capz-e2e-q4hrdl"
Nov 24 18:42:26.252: INFO: Creating cluster identity secret
INFO: Cluster name is capz-e2e-q4hrdl-ipv6
INFO: Creating the workload cluster with name "capz-e2e-q4hrdl-ipv6" using the "ipv6" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
STEP: Fetching activity logs took 584.317946ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-q4hrdl" namespace
STEP: Deleting all clusters in the capz-e2e-q4hrdl namespace
STEP: Deleting cluster capz-e2e-q4hrdl-ipv6
INFO: Waiting for the Cluster capz-e2e-q4hrdl/capz-e2e-q4hrdl-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-q4hrdl-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-q4hrdl-ipv6-control-plane-fq8rf, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-2phmb, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-q4hrdl-ipv6-control-plane-fq8rf, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-nsjwp, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-q4hrdl-ipv6-control-plane-w9qd7, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-knmsb, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-x48bd, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-q4hrdl-ipv6-control-plane-fq8rf, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-q4hrdl-ipv6-control-plane-4lbth, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-wzkst, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-q4hrdl-ipv6-control-plane-fq8rf, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-q4hrdl-ipv6-control-plane-4lbth, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5j7zq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-q4hrdl-ipv6-control-plane-w9qd7, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-q4hrdl-ipv6-control-plane-4lbth, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-q4hrdl-ipv6-control-plane-4lbth, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-xl68n, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-kkz74, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5vljw, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-q4hrdl-ipv6-control-plane-w9qd7, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-q4hrdl-ipv6-control-plane-w9qd7, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-kp9rf, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-22v4k, container calico-node: http2: client connection lost
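The burst of "Got error while streaming logs ... http2: client connection lost" lines is expected during teardown: the suite keeps a follow-mode log stream open for every kube-system container, and those HTTP/2 connections die when the cluster's nodes are deleted underneath them. A minimal sketch of one such log watcher with client-go (clientset construction assumed, helper name hypothetical):

package e2e

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// followPodLogs attaches a follow-mode log stream to one container; when the
// cluster's VMs go away, the stream ends with "http2: client connection
// lost", as logged above.
func followPodLogs(ctx context.Context, c kubernetes.Interface, ns, pod, container string) error {
	req := c.CoreV1().Pods(ns).GetLogs(pod, &corev1.PodLogOptions{
		Container: container,
		Follow:    true, // stay attached until the connection drops
	})
	stream, err := req.Stream(ctx)
	if err != nil {
		return err
	}
	defer stream.Close()
	_, err = io.Copy(os.Stdout, stream)
	return err
}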
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-q4hrdl
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 17m40s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a single control plane node and an AzureMachinePool with 2 nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:315

INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" started at Wed, 24 Nov 2021 19:00:06 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-mdzr0e" for hosting the cluster
Nov 24 19:00:06.421: INFO: starting to create namespace for hosting the "capz-e2e-mdzr0e" test spec
2021/11/24 19:00:06 failed trying to get namespace (capz-e2e-mdzr0e): namespaces "capz-e2e-mdzr0e" not found
INFO: Creating namespace capz-e2e-mdzr0e
INFO: Creating event watcher for namespace "capz-e2e-mdzr0e"
Nov 24 19:00:06.463: INFO: Creating cluster identity secret
INFO: Cluster name is capz-e2e-mdzr0e-vmss
INFO: Creating the workload cluster with name "capz-e2e-mdzr0e-vmss" using the "machine-pool" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 52 lines ...
STEP: waiting for job default/curl-to-elb-job1krtendizsb to be complete
Nov 24 19:08:36.992: INFO: waiting for job default/curl-to-elb-job1krtendizsb to be complete
Nov 24 19:08:47.053: INFO: job default/curl-to-elb-job1krtendizsb is complete, took 10.060626566s
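The job-completion wait above is the usual client-go poll on the Job's Complete condition; a minimal sketch, with interval, timeout, and helper name chosen for illustration:

package e2e

import (
	"context"
	"time"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForJobComplete polls until the Job reports a Complete=True condition,
// mirroring the "waiting for job default/curl-to-elb-... to be complete"
// lines above.
func waitForJobComplete(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(10*time.Second, 5*time.Minute, func() (bool, error) {
		job, err := c.BatchV1().Jobs(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range job.Status.Conditions {
			if cond.Type == batchv1.JobComplete && cond.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
}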
STEP: connecting directly to the external LB service
Nov 24 19:08:47.053: INFO: starting attempts to connect directly to the external LB service
2021/11/24 19:08:47 [DEBUG] GET http://20.120.40.47
2021/11/24 19:09:17 [ERR] GET http://20.120.40.47 request failed: Get "http://20.120.40.47": dial tcp 20.120.40.47:80: i/o timeout
2021/11/24 19:09:17 [DEBUG] GET http://20.120.40.47: retrying in 1s (4 left)
Nov 24 19:09:33.356: INFO: successfully connected to the external LB service
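The [DEBUG]/[ERR] retry lines match the log format of hashicorp/go-retryablehttp, which backs off and retries until the ELB answers. A minimal sketch of such a client (the retry budget is an assumption based on the "(4 left)" above):

package main

import (
	"log"

	"github.com/hashicorp/go-retryablehttp"
)

func main() {
	client := retryablehttp.NewClient()
	client.RetryMax = 4 // "(4 left)" on the first retry suggests a budget like this

	// Each failed attempt logs "[ERR] GET <url> request failed: ..." followed by
	// "[DEBUG] GET <url>: retrying in <backoff> (<n> left)".
	resp, err := client.Get("http://20.120.40.47") // address taken from the log above
	if err != nil {
		log.Fatalf("giving up: %v", err)
	}
	defer resp.Body.Close()
	log.Printf("connected: %s", resp.Status)
}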
STEP: deleting the test resources
Nov 24 19:09:33.356: INFO: starting to delete external LB service webz38oc9-elb
Nov 24 19:09:33.404: INFO: starting to delete deployment webz38oc9
Nov 24 19:09:33.434: INFO: starting to delete job curl-to-elb-job1krtendizsb
... skipping 43 lines ...
STEP: Fetching activity logs took 610.343125ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-mdzr0e" namespace
STEP: Deleting all clusters in the capz-e2e-mdzr0e namespace
STEP: Deleting cluster capz-e2e-mdzr0e-vmss
INFO: Waiting for the Cluster capz-e2e-mdzr0e/capz-e2e-mdzr0e-vmss to be deleted
STEP: Waiting for cluster capz-e2e-mdzr0e-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-mdzr0e-vmss-control-plane-tpwx7, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-mdzr0e-vmss-control-plane-tpwx7, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-mdzr0e-vmss-control-plane-tpwx7, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-97pg8, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-v9gtz, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-bqfsf, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-cwnwj, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-8f7rb, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-mdzr0e-vmss-control-plane-tpwx7, container etcd: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-mdzr0e
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" ran for 20m24s on Ginkgo node 2 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:203

INFO: "With 3 control-plane nodes and 2 worker nodes" started at Wed, 24 Nov 2021 18:42:26 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-ql0yy8" for hosting the cluster
Nov 24 18:42:26.164: INFO: starting to create namespace for hosting the "capz-e2e-ql0yy8" test spec
2021/11/24 18:42:26 failed trying to get namespace (capz-e2e-ql0yy8): namespaces "capz-e2e-ql0yy8" not found
INFO: Creating namespace capz-e2e-ql0yy8
INFO: Creating event watcher for namespace "capz-e2e-ql0yy8"
Nov 24 18:42:26.245: INFO: Creating cluster identity secret
INFO: Cluster name is capz-e2e-ql0yy8-ha
INFO: Creating the workload cluster with name "capz-e2e-ql0yy8-ha" using the "(default)" template (Kubernetes v1.22.1, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 67 lines ...
Nov 24 18:52:33.692: INFO: starting to delete external LB service webb6knbn-elb
Nov 24 18:52:33.774: INFO: starting to delete deployment webb6knbn
Nov 24 18:52:33.813: INFO: starting to delete job curl-to-elb-jobtqd9vazhvr4
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Nov 24 18:52:33.895: INFO: starting to create dev deployment namespace
2021/11/24 18:52:33 failed trying to get namespace (development): namespaces "development" not found
2021/11/24 18:52:33 namespace development does not exist, creating...
STEP: Creating production namespace
Nov 24 18:52:33.980: INFO: starting to create prod deployment namespace
2021/11/24 18:52:34 failed trying to get namespace (production): namespaces "production" not found
2021/11/24 18:52:34 namespace production does not exist, creating...
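Each "failed trying to get namespace ... not found" / "does not exist, creating..." pair above is the standard get-then-create idiom; a minimal client-go sketch (clientset construction assumed, helper name hypothetical):

package e2e

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureNamespace fetches the namespace and creates it when the Get comes
// back NotFound, producing log pairs like the ones above.
func ensureNamespace(ctx context.Context, c kubernetes.Interface, name string) (*corev1.Namespace, error) {
	ns, err := c.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return c.CoreV1().Namespaces().Create(ctx, &corev1.Namespace{
			ObjectMeta: metav1.ObjectMeta{Name: name},
		}, metav1.CreateOptions{})
	}
	return ns, err
}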
STEP: Creating frontendProd, backend and network-policy pod deployments
Nov 24 18:52:34.064: INFO: starting to create frontend-prod deployments
Nov 24 18:52:34.102: INFO: starting to create frontend-dev deployments
Nov 24 18:52:34.142: INFO: starting to create backend deployments
Nov 24 18:52:34.190: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Nov 24 18:52:57.103: INFO: starting to apply a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.252.66 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 24 18:55:06.694: INFO: starting to clean up network policy development/backend-deny-ingress after ourselves
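The deny-ingress check works because the applied policy selects the backend pods and lists Ingress in policyTypes with no ingress rules, which is why the curl to 192.168.252.66 times out. A minimal sketch using the Go API types (only the names and labels come from the log; the remaining fields are assumptions):

package e2e

import (
	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// backendDenyIngress sketches the development/backend-deny-ingress policy
// applied above: it selects the app: webapp, role: backend pods and declares
// Ingress in policyTypes without any ingress rules.
func backendDenyIngress() *networkingv1.NetworkPolicy {
	return &networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "backend-deny-ingress",
			Namespace: "development",
		},
		Spec: networkingv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "webapp", "role": "backend"},
			},
			// No Ingress rules: all ingress to the selected pods is denied,
			// so the curl above times out.
			PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
		},
	}
}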
STEP: Applying a network policy to deny egress access in development namespace
Nov 24 18:55:06.859: INFO: starting to apply a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.252.66 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.252.66 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 24 18:59:28.837: INFO: starting to clean up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Nov 24 18:59:29.018: INFO: starting to apply a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.18.4 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 24 19:01:39.794: INFO: starting to clean up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in the same development namespace
Nov 24 19:01:39.967: INFO: starting to apply a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in the same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.18.5 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.18.4 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 24 19:06:01.939: INFO: starting to clean up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Nov 24 19:06:02.107: INFO: starting to apply a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.252.66 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 24 19:08:13.125: INFO: starting to clean up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods with labels app: webapp, role: frontendProd within a namespace with label purpose: development
Nov 24 19:08:13.292: INFO: starting to apply a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods with labels app: webapp, role: frontendProd within a namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.252.66 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: Dumping logs from the "capz-e2e-ql0yy8-ha" workload cluster
STEP: Dumping workload cluster capz-e2e-ql0yy8/capz-e2e-ql0yy8-ha logs
Nov 24 19:10:24.586: INFO: Collecting logs for node capz-e2e-ql0yy8-ha-control-plane-6sgwg in cluster capz-e2e-ql0yy8-ha in namespace capz-e2e-ql0yy8

Nov 24 19:10:35.360: INFO: Collecting boot logs for AzureMachine capz-e2e-ql0yy8-ha-control-plane-6sgwg
... skipping 39 lines ...
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-ql0yy8-ha-control-plane-gwtwc, container etcd
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-f9kjw, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-dgc7j, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-ql0yy8-ha-control-plane-6sgwg, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-ql0yy8-ha-control-plane-6sgwg, container etcd
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-ql0yy8-ha-control-plane-wf8sr, container kube-apiserver
STEP: Got error while iterating over activity logs for resource group capz-e2e-ql0yy8-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001193455s
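The 30.001s duration pairs with the error above it: activity-log pagination runs under a roughly 30-second context deadline, and the next-page request that crosses it fails with context deadline exceeded (surfaced by the SDK as the StatusCode=500 wrapper). A minimal sketch of the shape of that loop, with a stand-in pager instead of the real insights.ActivityLogsClient:

package main

import (
	"context"
	"fmt"
	"time"
)

// fetchNextPage stands in for the insights.ActivityLogsClient pager; like the
// real SDK call it aborts once the context's deadline has passed.
func fetchNextPage(ctx context.Context) error {
	select {
	case <-time.After(2 * time.Second): // pretend each page takes ~2s
		return nil
	case <-ctx.Done():
		return ctx.Err() // "context deadline exceeded"
	}
}

func main() {
	// Give activity-log fetching a fixed 30s budget; the page request that
	// crosses the deadline fails, matching the ~30.001s duration logged above.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	start := time.Now()
	for {
		if err := fetchNextPage(ctx); err != nil {
			fmt.Printf("stopped after %s: %v\n", time.Since(start), err)
			return
		}
	}
}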
STEP: Dumping all the Cluster API resources in the "capz-e2e-ql0yy8" namespace
STEP: Deleting all clusters in the capz-e2e-ql0yy8 namespace
STEP: Deleting cluster capz-e2e-ql0yy8-ha
INFO: Waiting for the Cluster capz-e2e-ql0yy8/capz-e2e-ql0yy8-ha to be deleted
STEP: Waiting for cluster capz-e2e-ql0yy8-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-ql0yy8-ha-control-plane-wf8sr, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-hngr5, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-ql0yy8-ha-control-plane-wf8sr, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-ql0yy8-ha-control-plane-wf8sr, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-xqwwv, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-f9kjw, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-9czc9, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-966pk, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-f7vdc, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-nv4lg, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-ql0yy8-ha-control-plane-wf8sr, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-ql0yy8
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 worker nodes" ran for 44m40s on Ginkgo node 3 of 3

... skipping 8 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:141

INFO: "Creates a public management cluster in the same vnet" started at Wed, 24 Nov 2021 18:42:26 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-m2ui2s" for hosting the cluster
Nov 24 18:42:26.114: INFO: starting to create namespace for hosting the "capz-e2e-m2ui2s" test spec
2021/11/24 18:42:26 failed trying to get namespace (capz-e2e-m2ui2s): namespaces "capz-e2e-m2ui2s" not found
INFO: Creating namespace capz-e2e-m2ui2s
INFO: Creating event watcher for namespace "capz-e2e-m2ui2s"
Nov 24 18:42:26.145: INFO: Creating cluster identity secret
INFO: Cluster name is capz-e2e-m2ui2s-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-rs9hb, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-t5qtn, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-m2ui2s-public-custom-vnet-control-plane-jrjrf, container kube-apiserver
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-m2ui2s-public-custom-vnet-control-plane-jrjrf, container etcd
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-m2ui2s-public-custom-vnet-control-plane-jrjrf, container kube-scheduler
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-7k59h, container coredns
STEP: Got error while iterating over activity logs for resource group capz-e2e-m2ui2s-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000648345s
STEP: Dumping all the Cluster API resources in the "capz-e2e-m2ui2s" namespace
STEP: Deleting all clusters in the capz-e2e-m2ui2s namespace
STEP: Deleting cluster capz-e2e-m2ui2s-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-m2ui2s/capz-e2e-m2ui2s-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-m2ui2s-public-custom-vnet to be deleted
W1124 19:27:58.365488   24251 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I1124 19:28:29.587470   24251 trace.go:205] Trace[1073934939]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (24-Nov-2021 19:27:59.585) (total time: 30001ms):
Trace[1073934939]: [30.001447041s] [30.001447041s] END
E1124 19:28:29.587544   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp 20.88.173.224:6443: i/o timeout
I1124 19:29:02.610989   24251 trace.go:205] Trace[1506165863]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (24-Nov-2021 19:28:32.609) (total time: 30001ms):
Trace[1506165863]: [30.001573089s] [30.001573089s] END
E1124 19:29:02.611067   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp 20.88.173.224:6443: i/o timeout
I1124 19:29:37.558416   24251 trace.go:205] Trace[950112699]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (24-Nov-2021 19:29:07.556) (total time: 30001ms):
Trace[950112699]: [30.001475297s] [30.001475297s] END
E1124 19:29:37.558493   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp 20.88.173.224:6443: i/o timeout
I1124 19:30:18.138056   24251 trace.go:205] Trace[426786727]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (24-Nov-2021 19:29:48.136) (total time: 30001ms):
Trace[426786727]: [30.001791644s] [30.001791644s] END
E1124 19:30:18.138132   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp 20.88.173.224:6443: i/o timeout
I1124 19:31:03.975148   24251 trace.go:205] Trace[151260664]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (24-Nov-2021 19:30:33.973) (total time: 30001ms):
Trace[151260664]: [30.001352777s] [30.001352777s] END
E1124 19:31:03.975228   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp 20.88.173.224:6443: i/o timeout
I1124 19:32:04.738758   24251 trace.go:205] Trace[1049424925]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (24-Nov-2021 19:31:34.737) (total time: 30001ms):
Trace[1049424925]: [30.001225625s] [30.001225625s] END
E1124 19:32:04.738831   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp 20.88.173.224:6443: i/o timeout
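This Trace/Reflector output comes from the namespace event watcher created at the start of the spec: its ListAndWatch loop relists *v1.Event roughly every 30s against an endpoint that no longer answers, and keeps retrying until the watcher is stopped. A minimal sketch of such a watcher (clientset construction assumed, helper name hypothetical):

package e2e

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// watchNamespaceEvents starts an informer on v1.Event in one namespace. Its
// internal Reflector runs ListAndWatch; when the API server disappears it
// logs "watch of *v1.Event ended with ..." and then retries the list on a
// backoff, which is what produces the repeated errors above.
func watchNamespaceEvents(clientset kubernetes.Interface, namespace string, stop <-chan struct{}) {
	lw := cache.NewListWatchFromClient(
		clientset.CoreV1().RESTClient(), "events", namespace, fields.Everything())
	_, controller := cache.NewInformer(lw, &corev1.Event{}, 30*time.Second,
		cache.ResourceEventHandlerFuncs{
			AddFunc: func(obj interface{}) { fmt.Println("event:", obj.(*corev1.Event).Message) },
		})
	go controller.Run(stop)
}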
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-m2ui2s
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Nov 24 19:32:23.045: INFO: deleting an existing virtual network "custom-vnet"
Nov 24 19:32:33.510: INFO: deleting an existing route table "node-routetable"
Nov 24 19:32:43.822: INFO: deleting an existing network security group "node-nsg"
Nov 24 19:32:54.119: INFO: deleting an existing network security group "control-plane-nsg"
E1124 19:33:02.389897   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 24 19:33:04.391: INFO: verifying the existing resource group "capz-e2e-m2ui2s-public-custom-vnet" is empty
Nov 24 19:33:04.689: INFO: deleting the existing resource group "capz-e2e-m2ui2s-public-custom-vnet"
E1124 19:33:44.738282   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1124 19:34:43.259641   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 52m57s on Ginkgo node 1 of 3


• [SLOW TEST:3177.172 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:377

INFO: "with a single control plane node and 1 node" started at Wed, 24 Nov 2021 19:20:30 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-6x2to9" for hosting the cluster
Nov 24 19:20:30.924: INFO: starting to create namespace for hosting the "capz-e2e-6x2to9" test spec
2021/11/24 19:20:30 failed trying to get namespace (capz-e2e-6x2to9): namespaces "capz-e2e-6x2to9" not found
INFO: Creating namespace capz-e2e-6x2to9
INFO: Creating event watcher for namespace "capz-e2e-6x2to9"
Nov 24 19:20:30.967: INFO: Creating cluster identity secret
INFO: Cluster name is capz-e2e-6x2to9-gpu
INFO: Creating the workload cluster with name "capz-e2e-6x2to9-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 58 lines ...
STEP: Fetching activity logs took 493.619918ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-6x2to9" namespace
STEP: Deleting all clusters in the capz-e2e-6x2to9 namespace
STEP: Deleting cluster capz-e2e-6x2to9-gpu
INFO: Waiting for the Cluster capz-e2e-6x2to9/capz-e2e-6x2to9-gpu to be deleted
STEP: Waiting for cluster capz-e2e-6x2to9-gpu to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-hcn9b, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-6x2to9-gpu-control-plane-74cjk, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-rlfsf, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-6x2to9-gpu-control-plane-74cjk, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-65mkm, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-w2c65, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-dl5k4, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-6x2to9-gpu-control-plane-74cjk, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-6x2to9-gpu-control-plane-74cjk, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-6x2to9
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and 1 node" ran for 19m59s on Ginkgo node 2 of 3

... skipping 10 lines ...
with 1 control plane node and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:419

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Wed, 24 Nov 2021 19:27:06 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-st8e73" for hosting the cluster
Nov 24 19:27:06.225: INFO: starting to create namespace for hosting the "capz-e2e-st8e73" test spec
2021/11/24 19:27:06 failed trying to get namespace (capz-e2e-st8e73): namespaces "capz-e2e-st8e73" not found
INFO: Creating namespace capz-e2e-st8e73
INFO: Creating event watcher for namespace "capz-e2e-st8e73"
Nov 24 19:27:06.261: INFO: Creating cluster identity secret
INFO: Cluster name is capz-e2e-st8e73-oot
INFO: Creating the workload cluster with name "capz-e2e-st8e73-oot" using the "external-cloud-provider" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 53 lines ...
STEP: waiting for job default/curl-to-elb-job3jldxb21g04 to be complete
Nov 24 19:36:39.136: INFO: waiting for job default/curl-to-elb-job3jldxb21g04 to be complete
Nov 24 19:36:49.204: INFO: job default/curl-to-elb-job3jldxb21g04 is complete, took 10.068092047s
STEP: connecting directly to the external LB service
Nov 24 19:36:49.204: INFO: starting attempts to connect directly to the external LB service
2021/11/24 19:36:49 [DEBUG] GET http://20.102.37.195
2021/11/24 19:37:19 [ERR] GET http://20.102.37.195 request failed: Get "http://20.102.37.195": dial tcp 20.102.37.195:80: i/o timeout
2021/11/24 19:37:19 [DEBUG] GET http://20.102.37.195: retrying in 1s (4 left)
Nov 24 19:37:20.260: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 24 19:37:20.260: INFO: starting to delete external LB service webji1nr2-elb
Nov 24 19:37:20.328: INFO: starting to delete deployment webji1nr2
Nov 24 19:37:20.358: INFO: starting to delete job curl-to-elb-job3jldxb21g04
... skipping 34 lines ...
STEP: Fetching activity logs took 591.355145ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-st8e73" namespace
STEP: Deleting all clusters in the capz-e2e-st8e73 namespace
STEP: Deleting cluster capz-e2e-st8e73-oot
INFO: Waiting for the Cluster capz-e2e-st8e73/capz-e2e-st8e73-oot to be deleted
STEP: Waiting for cluster capz-e2e-st8e73-oot to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-lcrct, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-zppph, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-st8e73-oot-control-plane-s44b2, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-n6wpd, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-lz9vl, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-st8e73-oot-control-plane-s44b2, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-wgxns, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-45xkz, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-st8e73-oot-control-plane-s44b2, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-st8e73-oot-control-plane-s44b2, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-xqdvs, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-controller-manager, container cloud-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-9cg65, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-8dmks, container cloud-node-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-st8e73
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 22m41s on Ginkgo node 3 of 3

... skipping 10 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454

INFO: "with a single control plane node and 1 node" started at Wed, 24 Nov 2021 19:35:23 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-43epyr" for hosting the cluster
Nov 24 19:35:23.290: INFO: starting to create namespace for hosting the "capz-e2e-43epyr" test spec
2021/11/24 19:35:23 failed trying to get namespace (capz-e2e-43epyr): namespaces "capz-e2e-43epyr" not found
INFO: Creating namespace capz-e2e-43epyr
INFO: Creating event watcher for namespace "capz-e2e-43epyr"
Nov 24 19:35:23.338: INFO: Creating cluster identity secret
INFO: Cluster name is capz-e2e-43epyr-aks
INFO: Creating the workload cluster with name "capz-e2e-43epyr-aks" using the "aks-multi-tenancy" template (Kubernetes v1.19.13, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 7 lines ...
machinepool.cluster.x-k8s.io/agentpool1 created
azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1124 19:35:38.734233   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1124 19:36:21.215193   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1124 19:37:03.966358   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1124 19:37:55.563061   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1124 19:38:38.572917   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
Nov 24 19:38:54.829: INFO: Waiting for the first control plane machine managed by capz-e2e-43epyr/capz-e2e-43epyr-aks to be provisioned
STEP: Waiting for at least one control plane node to exist
E1124 19:39:18.178733   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1124 19:40:11.110235   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1124 19:40:43.427365   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1124 19:41:27.981197   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1124 19:42:10.440166   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1124 19:43:03.998863   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1124 19:43:54.848948   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1124 19:44:30.062892   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1124 19:45:04.939322   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1124 19:46:03.035934   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1124 19:46:53.624023   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1124 19:47:25.188775   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1124 19:48:01.021336   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1124 19:48:35.789497   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1124 19:49:27.765599   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1124 19:50:23.948282   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1124 19:51:00.565570   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1124 19:51:40.011382   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1124 19:52:36.558324   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1124 19:53:19.066468   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1124 19:53:50.084806   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1124 19:54:48.978357   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1124 19:55:20.028876   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1124 19:56:01.337368   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1124 19:56:51.002380   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1124 19:57:48.884215   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1124 19:58:37.512652   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Dumping logs from the "capz-e2e-43epyr-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-43epyr/capz-e2e-43epyr-aks logs
STEP: Dumping workload cluster capz-e2e-43epyr/capz-e2e-43epyr-aks kube-system pod logs
STEP: Fetching kube-system pod logs took 458.765796ms
STEP: Creating log watcher for controller kube-system/calico-node-xw4lb, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-autoscaler-54d55c8b75-v4vgq, container autoscaler
... skipping 10 lines ...
STEP: Fetching activity logs took 763.917895ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-43epyr" namespace
STEP: Deleting all clusters in the capz-e2e-43epyr namespace
STEP: Deleting cluster capz-e2e-43epyr-aks
INFO: Waiting for the Cluster capz-e2e-43epyr/capz-e2e-43epyr-aks to be deleted
STEP: Waiting for cluster capz-e2e-43epyr-aks to be deleted
E1124 19:59:13.384031   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1124 20:00:12.206387   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1124 20:00:55.254949   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1124 20:01:32.344981   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1124 20:02:07.233228   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1124 20:02:47.346277   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1124 20:03:30.603635   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-43epyr
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1124 20:04:22.541362   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1124 20:05:22.275322   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "with a single control plane node and 1 node" ran for 30m20s on Ginkgo node 1 of 3


• Failure [1819.782 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 59 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Wed, 24 Nov 2021 19:40:29 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-lpjw6g" for hosting the cluster
Nov 24 19:40:29.635: INFO: starting to create namespace for hosting the "capz-e2e-lpjw6g" test spec
2021/11/24 19:40:29 failed trying to get namespace (capz-e2e-lpjw6g):namespaces "capz-e2e-lpjw6g" not found
INFO: Creating namespace capz-e2e-lpjw6g
INFO: Creating event watcher for namespace "capz-e2e-lpjw6g"
Nov 24 19:40:29.677: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-lpjw6g-win-ha
INFO: Creating the workload cluster with name "capz-e2e-lpjw6g-win-ha" using the "windows" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 151 lines ...
STEP: Fetching activity logs took 950.461583ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-lpjw6g" namespace
STEP: Deleting all clusters in the capz-e2e-lpjw6g namespace
STEP: Deleting cluster capz-e2e-lpjw6g-win-ha
INFO: Waiting for the Cluster capz-e2e-lpjw6g/capz-e2e-lpjw6g-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-lpjw6g-win-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-nkwmj, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-sk5s2, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-pmxb7, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-lpjw6g-win-ha-control-plane-lj7kq, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-lpjw6g-win-ha-control-plane-lj7kq, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-9skqg, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-lpjw6g-win-ha-control-plane-lj7kq, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-hv82q, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-jx99f, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-rtvnh, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-lpjw6g-win-ha-control-plane-lj7kq, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-v7wcr, container kube-flannel: http2: client connection lost
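
The `http2: client connection lost` errors above are expected teardown noise: the suite follows pod logs from the workload cluster, and the streams die when the control-plane VM behind them is deleted. A sketch of such a follower, assuming plain client-go (hypothetical helper, not the repository's log collector):

```go
package e2e

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// streamPodLogs follows one container's logs until the stream ends; when the
// node is deleted mid-stream, io.Copy returns the transport error quoted above.
func streamPodLogs(ctx context.Context, cs kubernetes.Interface, ns, pod, container string) error {
	req := cs.CoreV1().Pods(ns).GetLogs(pod, &corev1.PodLogOptions{
		Container: container,
		Follow:    true,
	})
	rc, err := req.Stream(ctx)
	if err != nil {
		return err
	}
	defer rc.Close()
	_, err = io.Copy(os.Stdout, rc)
	return err // e.g. "http2: client connection lost"
}
```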
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-lpjw6g
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 35m51s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543

INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Wed, 24 Nov 2021 19:49:47 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-ngu1u6" for hosting the cluster
Nov 24 19:49:47.613: INFO: starting to create namespace for hosting the "capz-e2e-ngu1u6" test spec
2021/11/24 19:49:47 failed trying to get namespace (capz-e2e-ngu1u6):namespaces "capz-e2e-ngu1u6" not found
INFO: Creating namespace capz-e2e-ngu1u6
INFO: Creating event watcher for namespace "capz-e2e-ngu1u6"
Nov 24 19:49:47.645: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-ngu1u6-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-ngu1u6-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 89 lines ...
STEP: waiting for job default/curl-to-elb-job75ec8mq264c to be complete
Nov 24 20:06:13.600: INFO: waiting for job default/curl-to-elb-job75ec8mq264c to be complete
Nov 24 20:06:23.674: INFO: job default/curl-to-elb-job75ec8mq264c is complete, took 10.074052885s
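
The ten-second cadence above suggests a simple poll on the Job's `Complete` condition. A minimal sketch under that assumption (`waitForJobComplete` is a hypothetical name; client-go APIs as of v0.21):

```go
package e2e

import (
	"context"
	"time"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForJobComplete polls every 10s until the Job reports the Complete
// condition, mirroring the "waiting for job ... to be complete" lines above.
func waitForJobComplete(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(10*time.Second, 5*time.Minute, func() (bool, error) {
		job, err := cs.BatchV1().Jobs(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range job.Status.Conditions {
			if cond.Type == batchv1.JobComplete && cond.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
}
```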
STEP: connecting directly to the external LB service
Nov 24 20:06:23.674: INFO: starting attempts to connect directly to the external LB service
2021/11/24 20:06:23 [DEBUG] GET http://20.83.142.73
2021/11/24 20:06:53 [ERR] GET http://20.83.142.73 request failed: Get "http://20.83.142.73": dial tcp 20.83.142.73:80: i/o timeout
2021/11/24 20:06:53 [DEBUG] GET http://20.83.142.73: retrying in 1s (4 left)
Nov 24 20:06:54.731: INFO: successfully connected to the external LB service
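
The `[DEBUG]`/`[ERR]` lines and the `retrying in 1s (4 left)` countdown have the shape of hashicorp/go-retryablehttp output. A sketch of a direct LB probe built on that library, under that assumption (the names and limits are illustrative, not the suite's code):

```go
package main

import (
	"fmt"
	"net/http"
	"time"

	retryablehttp "github.com/hashicorp/go-retryablehttp"
)

// probeELB issues GETs against the external LB, retrying on failure the way
// the log above does: 30s per attempt, 1s backoff, up to four retries.
func probeELB(url string) error {
	client := retryablehttp.NewClient()
	client.RetryMax = 4                   // yields "(4 left)" after the first attempt fails
	client.RetryWaitMin = 1 * time.Second // "retrying in 1s"
	client.HTTPClient = &http.Client{Timeout: 30 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	fmt.Println("successfully connected:", resp.Status)
	return nil
}
```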
STEP: deleting the test resources
Nov 24 20:06:54.731: INFO: starting to delete external LB service web-windowsg28exj-elb
Nov 24 20:06:54.805: INFO: starting to delete deployment web-windowsg28exj
Nov 24 20:06:54.839: INFO: starting to delete job curl-to-elb-job75ec8mq264c
... skipping 29 lines ...
STEP: Fetching activity logs took 1.022187569s
STEP: Dumping all the Cluster API resources in the "capz-e2e-ngu1u6" namespace
STEP: Deleting all clusters in the capz-e2e-ngu1u6 namespace
STEP: Deleting cluster capz-e2e-ngu1u6-win-vmss
INFO: Waiting for the Cluster capz-e2e-ngu1u6/capz-e2e-ngu1u6-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-ngu1u6-win-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-m5qd4, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-4r5d5, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-ngu1u6-win-vmss-control-plane-h2s2f, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-2ktn2, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-7hc82, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-c64bw, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-ngu1u6-win-vmss-control-plane-h2s2f, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-ngu1u6-win-vmss-control-plane-h2s2f, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-xnkjt, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-ngu1u6-win-vmss-control-plane-h2s2f, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-ngu1u6
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 29m29s on Ginkgo node 3 of 3

... skipping 3 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating a Windows enabled VMSS cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:542
    with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543
------------------------------
E1124 20:06:13.380079   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-m2ui2s/events?resourceVersion=8478": dial tcp: lookup capz-e2e-m2ui2s-public-custom-vnet-4ee5637.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 17 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Workload cluster creation Creating an AKS cluster [It] with a single control plane node and 1 node 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/aks.go:216
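
The quoted failure ("Timed out after 1200.000s ... Expected <bool>: false to equal <bool>: true") is the signature of a Gomega `Eventually(...).Should(Equal(true))` with a 20-minute timeout. A hedged sketch of what the assertion at aks.go:216 plausibly looks like; `allSystemMachinePoolsReady` is a hypothetical stand-in for the suite's real readiness check:

```go
package e2e

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// allSystemMachinePoolsReady is a hypothetical stand-in: it would inspect the
// MachinePool / AzureManagedMachinePool objects and report whether every
// system-mode pool of the AKS cluster has become ready.
func allSystemMachinePoolsReady(ctx context.Context, c client.Client, namespace string) bool {
	// ... list machine pools in the namespace and check their status ...
	return false
}

// waitForSystemMachinePools reproduces the failure shape above when the pools
// never become ready within the 20-minute (1200s) window.
func waitForSystemMachinePools(ctx context.Context, c client.Client, namespace string) {
	Eventually(func() bool {
		return allSystemMachinePoolsReady(ctx, c, namespace)
	}, 20*time.Minute, 30*time.Second).Should(Equal(true), "System machine pools not ready")
}
```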

Ran 9 of 22 Specs in 5928.840 seconds
FAIL! -- 8 Passed | 1 Failed | 0 Pending | 13 Skipped


Ginkgo ran 1 suite in 1h40m13.881573335s
Test Suite Failed
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...