Result: FAILURE
Tests: 1 failed / 8 succeeded
Started: 2021-11-08 18:29
Elapsed: 1h50m
Revision: release-0.5

Test Failures


capz-e2e Workload cluster creation Creating an AKS cluster with a single control plane node and 1 node (41m7s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\san\sAKS\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454
Timed out after 1200.000s.
System machine pools not ready
Expected
    <bool>: false
to equal
    <bool>: true
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/aks.go:216
				
(stdout/stderr captured in junit.e2e_suite.1.xml)
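This failure is the standard Gomega Eventually timeout: a polling function kept returning false until the 1200s (20 minute) budget ran out, at which point the Equal matcher reported the last polled value. A minimal sketch of the pattern behind the message (the machinePoolsReady helper is hypothetical, not the actual aks.go code):

    package e2e

    import (
        "context"
        "time"

        . "github.com/onsi/gomega"
    )

    // machinePoolsReady is a hypothetical stand-in for the check at
    // test/e2e/aks.go:216; it should return true once every system
    // machine pool reports Ready.
    func machinePoolsReady(ctx context.Context) bool {
        // ... list the AzureManagedMachinePools and inspect their status ...
        return false
    }

    func waitForSystemMachinePools(ctx context.Context) {
        // When the 20-minute budget expires, Gomega fails with:
        //   Timed out after 1200.000s.
        //   System machine pools not ready
        //   Expected <bool>: false to equal <bool>: true
        Eventually(func() bool {
            return machinePoolsReady(ctx)
        }, 20*time.Minute, 10*time.Second).Should(Equal(true), "System machine pools not ready")
    }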



8 passed tests (not shown)
13 skipped tests (not shown)

Error lines from build-log.txt

... skipping 433 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:269

INFO: "With ipv6 worker node" started at Mon, 08 Nov 2021 18:36:43 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-3l4fbr" for hosting the cluster
Nov  8 18:36:43.641: INFO: starting to create namespace for hosting the "capz-e2e-3l4fbr" test spec
2021/11/08 18:36:43 failed trying to get namespace (capz-e2e-3l4fbr):namespaces "capz-e2e-3l4fbr" not found
INFO: Creating namespace capz-e2e-3l4fbr
INFO: Creating event watcher for namespace "capz-e2e-3l4fbr"
Nov  8 18:36:43.715: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-3l4fbr-ipv6
INFO: Creating the workload cluster with name "capz-e2e-3l4fbr-ipv6" using the "ipv6" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
STEP: Fetching activity logs took 528.705107ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-3l4fbr" namespace
STEP: Deleting all clusters in the capz-e2e-3l4fbr namespace
STEP: Deleting cluster capz-e2e-3l4fbr-ipv6
INFO: Waiting for the Cluster capz-e2e-3l4fbr/capz-e2e-3l4fbr-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-3l4fbr-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-qcwzp, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-3l4fbr-ipv6-control-plane-8s99f, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-4q8cp, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-3l4fbr-ipv6-control-plane-m98n8, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-3l4fbr-ipv6-control-plane-m98n8, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-3l4fbr-ipv6-control-plane-m98n8, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-3l4fbr-ipv6-control-plane-8s99f, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-3l4fbr-ipv6-control-plane-lwbdv, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-tcjkm, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-3l4fbr-ipv6-control-plane-m98n8, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-bc2mp, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-3l4fbr-ipv6-control-plane-lwbdv, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-dp487, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-3l4fbr-ipv6-control-plane-8s99f, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-95smk, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-txzvt, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-ssspb, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-vwq7n, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-72c64, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-3l4fbr-ipv6-control-plane-lwbdv, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-3l4fbr-ipv6-control-plane-8s99f, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-p48hj, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-3l4fbr-ipv6-control-plane-lwbdv, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-3l4fbr
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 16m27s on Ginkgo node 3 of 3

... skipping 10 lines ...
  with a single control plane node and an AzureMachinePool with 2 nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:315

INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" started at Mon, 08 Nov 2021 18:53:10 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-bf8fn3" for hosting the cluster
Nov  8 18:53:10.383: INFO: starting to create namespace for hosting the "capz-e2e-bf8fn3" test spec
2021/11/08 18:53:10 failed trying to get namespace (capz-e2e-bf8fn3):namespaces "capz-e2e-bf8fn3" not found
INFO: Creating namespace capz-e2e-bf8fn3
INFO: Creating event watcher for namespace "capz-e2e-bf8fn3"
Nov  8 18:53:10.433: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-bf8fn3-vmss
INFO: Creating the workload cluster with name "capz-e2e-bf8fn3-vmss" using the "machine-pool" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 52 lines ...
STEP: waiting for job default/curl-to-elb-jobghtsz8y8lyz to be complete
Nov  8 19:00:41.111: INFO: waiting for job default/curl-to-elb-jobghtsz8y8lyz to be complete
Nov  8 19:00:51.191: INFO: job default/curl-to-elb-jobghtsz8y8lyz is complete, took 10.079995356s
STEP: connecting directly to the external LB service
Nov  8 19:00:51.191: INFO: starting attempts to connect directly to the external LB service
2021/11/08 19:00:51 [DEBUG] GET http://20.80.222.56
2021/11/08 19:01:21 [ERR] GET http://20.80.222.56 request failed: Get "http://20.80.222.56": dial tcp 20.80.222.56:80: i/o timeout
2021/11/08 19:01:21 [DEBUG] GET http://20.80.222.56: retrying in 1s (4 left)
Nov  8 19:01:22.258: INFO: successfully connected to the external LB service
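The [DEBUG]/[ERR] lines with "retrying in 1s (4 left)" match the default logging of hashicorp/go-retryablehttp; the first GET typically times out while the Azure load balancer rules finish propagating, and a later attempt succeeds. A sketch assuming that library is what drives the loop (the connectToELB name is illustrative):

    package e2e

    import (
        "fmt"

        retryablehttp "github.com/hashicorp/go-retryablehttp"
    )

    // connectToELB retries the GET until the load balancer answers; the
    // first attempt often fails with "dial tcp ...:80: i/o timeout" while
    // the Azure LB rules propagate, producing the [ERR] line above.
    func connectToELB(url string) error {
        client := retryablehttp.NewClient()
        client.RetryMax = 4 // matches the "(4 left)" countdown in the log
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        fmt.Println("connected:", resp.Status)
        return nil
    }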
STEP: deleting the test resources
Nov  8 19:01:22.258: INFO: starting to delete external LB service webkqe8bi-elb
Nov  8 19:01:22.331: INFO: starting to delete deployment webkqe8bi
Nov  8 19:01:22.369: INFO: starting to delete job curl-to-elb-jobghtsz8y8lyz
... skipping 43 lines ...
STEP: Fetching activity logs took 649.086261ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-bf8fn3" namespace
STEP: Deleting all clusters in the capz-e2e-bf8fn3 namespace
STEP: Deleting cluster capz-e2e-bf8fn3-vmss
INFO: Waiting for the Cluster capz-e2e-bf8fn3/capz-e2e-bf8fn3-vmss to be deleted
STEP: Waiting for cluster capz-e2e-bf8fn3-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-wv4n2, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-qscw4, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-dtxxq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-mmn98, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-bf8fn3
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" ran for 23m38s on Ginkgo node 3 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:203

INFO: "With 3 control-plane nodes and 2 worker nodes" started at Mon, 08 Nov 2021 18:36:43 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-pj3obr" for hosting the cluster
Nov  8 18:36:43.638: INFO: starting to create namespace for hosting the "capz-e2e-pj3obr" test spec
2021/11/08 18:36:43 failed trying to get namespace (capz-e2e-pj3obr):namespaces "capz-e2e-pj3obr" not found
INFO: Creating namespace capz-e2e-pj3obr
INFO: Creating event watcher for namespace "capz-e2e-pj3obr"
Nov  8 18:36:43.715: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-pj3obr-ha
INFO: Creating the workload cluster with name "capz-e2e-pj3obr-ha" using the "(default)" template (Kubernetes v1.22.1, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 67 lines ...
Nov  8 18:47:50.994: INFO: starting to delete external LB service webfztubk-elb
Nov  8 18:47:51.074: INFO: starting to delete deployment webfztubk
Nov  8 18:47:51.116: INFO: starting to delete job curl-to-elb-jobvpa388m5q80
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Nov  8 18:47:51.215: INFO: starting to create dev deployment namespace
2021/11/08 18:47:51 failed trying to get namespace (development):namespaces "development" not found
2021/11/08 18:47:51 namespace development does not exist, creating...
STEP: Creating production namespace
Nov  8 18:47:51.296: INFO: starting to create prod deployment namespace
2021/11/08 18:47:51 failed trying to get namespace (production):namespaces "production" not found
2021/11/08 18:47:51 namespace production does not exist, creating...
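The "failed trying to get namespace ... not found" line followed by "does not exist, creating..." is the usual get-then-create idiom. A minimal client-go sketch of it (ensureNamespace is an illustrative name, not the suite's actual helper):

    package e2e

    import (
        "context"
        "log"

        corev1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // ensureNamespace creates the namespace if the initial Get reports
    // NotFound, mirroring the "failed trying to get namespace" and
    // "does not exist, creating..." pair in the log above.
    func ensureNamespace(ctx context.Context, cs kubernetes.Interface, name string) error {
        _, err := cs.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
        if err == nil {
            return nil // already exists
        }
        if !apierrors.IsNotFound(err) {
            return err
        }
        log.Printf("namespace %s does not exist, creating...", name)
        ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: name}}
        _, err = cs.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{})
        return err
    }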
STEP: Creating frontendProd, backend and network-policy pod deployments
Nov  8 18:47:51.383: INFO: starting to create frontend-prod deployments
Nov  8 18:47:51.424: INFO: starting to create frontend-dev deployments
Nov  8 18:47:51.471: INFO: starting to create backend deployments
Nov  8 18:47:51.519: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Nov  8 18:48:14.476: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.111.195 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov  8 18:50:24.537: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
STEP: Applying a network policy to deny egress access in development namespace
Nov  8 18:50:24.733: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.111.195 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.111.195 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov  8 18:54:46.903: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Nov  8 18:54:47.082: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.111.196 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov  8 18:56:57.753: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
Nov  8 18:56:57.941: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.111.194 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.111.196 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov  8 19:01:19.895: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Nov  8 19:01:20.241: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.111.195 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov  8 19:03:30.973: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
Nov  8 19:03:31.147: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.111.195 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
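Each policy step above pairs an apply with a positive or negative curl probe, and the "Connection timed out" lines are the negative assertions passing: traffic the policy should block really is dropped. The manifests themselves are not in the log; a sketch of what a deny-ingress policy like development/backend-deny-ingress plausibly looks like, expressed with the Kubernetes Go types (selecting the backend pods and listing no ingress rules denies them all inbound traffic):

    package e2e

    import (
        networkingv1 "k8s.io/api/networking/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // backendDenyIngress reconstructs the semantics the step names: with
    // PolicyTypes = [Ingress] and no Ingress rules, all inbound traffic
    // to the selected pods is denied, so the probe's curl times out.
    func backendDenyIngress() *networkingv1.NetworkPolicy {
        return &networkingv1.NetworkPolicy{
            ObjectMeta: metav1.ObjectMeta{
                Name:      "backend-deny-ingress",
                Namespace: "development",
            },
            Spec: networkingv1.NetworkPolicySpec{
                PodSelector: metav1.LabelSelector{
                    MatchLabels: map[string]string{"app": "webapp", "role": "backend"},
                },
                PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
                // No Ingress rules: deny all ingress to the selected pods.
            },
        }
    }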
STEP: Dumping logs from the "capz-e2e-pj3obr-ha" workload cluster
STEP: Dumping workload cluster capz-e2e-pj3obr/capz-e2e-pj3obr-ha logs
Nov  8 19:05:42.466: INFO: INFO: Collecting logs for node capz-e2e-pj3obr-ha-control-plane-6t4nb in cluster capz-e2e-pj3obr-ha in namespace capz-e2e-pj3obr

Nov  8 19:05:55.294: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-pj3obr-ha-control-plane-6t4nb
... skipping 39 lines ...
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-pj3obr-ha-control-plane-9vvzw, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-pj3obr-ha-control-plane-6t4nb, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-pj3obr-ha-control-plane-9vvzw, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-pj3obr-ha-control-plane-xv7b9, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-pj3obr-ha-control-plane-xv7b9, container kube-scheduler
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-pj3obr-ha-control-plane-6t4nb, container etcd
STEP: Got error while iterating over activity logs for resource group capz-e2e-pj3obr-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001035751s
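The listNextResults failure and the suspiciously round 30.0s timing (here and in the later occurrence below) suggest the activity-log fetch pages under a roughly 30-second context budget, and the deadline fires mid-pagination. A self-contained sketch of that failure mode; the pager interface only mirrors the NotDone/NextWithContext shape of the Azure SDK's result pages, and all names are illustrative:

    package e2e

    import (
        "context"
        "time"
    )

    // pager abstracts a paged-list result such as the one returned by
    // insights.ActivityLogsClient (names illustrative).
    type pager interface {
        NotDone() bool
        NextWithContext(ctx context.Context) error
    }

    // fetchActivityLogs walks pages under a fixed budget; when the budget
    // expires mid-iteration, the next page request fails with
    // "context deadline exceeded", matching the log lines above.
    func fetchActivityLogs(page pager) error {
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()
        for page.NotDone() {
            if err := page.NextWithContext(ctx); err != nil {
                return err // e.g. StatusCode=500 ... context deadline exceeded
            }
        }
        return nil
    }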
STEP: Dumping all the Cluster API resources in the "capz-e2e-pj3obr" namespace
STEP: Deleting all clusters in the capz-e2e-pj3obr namespace
STEP: Deleting cluster capz-e2e-pj3obr-ha
INFO: Waiting for the Cluster capz-e2e-pj3obr/capz-e2e-pj3obr-ha to be deleted
STEP: Waiting for cluster capz-e2e-pj3obr-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-69jxh, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-87frj, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-ztn8k, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-hhxtf, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-n2czm, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-pj3obr-ha-control-plane-9vvzw, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-8x44p, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-n7qqw, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-pj3obr-ha-control-plane-9vvzw, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-pj3obr-ha-control-plane-9vvzw, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-pj3obr-ha-control-plane-9vvzw, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-pj3obr
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 worker nodes" ran for 46m47s on Ginkgo node 2 of 3

... skipping 8 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:141

INFO: "Creates a public management cluster in the same vnet" started at Mon, 08 Nov 2021 18:36:43 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-w6k6k1" for hosting the cluster
Nov  8 18:36:43.604: INFO: starting to create namespace for hosting the "capz-e2e-w6k6k1" test spec
2021/11/08 18:36:43 failed trying to get namespace (capz-e2e-w6k6k1):namespaces "capz-e2e-w6k6k1" not found
INFO: Creating namespace capz-e2e-w6k6k1
INFO: Creating event watcher for namespace "capz-e2e-w6k6k1"
Nov  8 18:36:43.643: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-w6k6k1-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-w6k6k1-public-custom-vnet-control-plane-q9sxn, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-97c76, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-phjj6, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-x2nhk, container calico-node
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-w6k6k1-public-custom-vnet-control-plane-q9sxn, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-lw8pl, container calico-node
STEP: Got error while iterating over activity logs for resource group capz-e2e-w6k6k1-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000897683s
STEP: Dumping all the Cluster API resources in the "capz-e2e-w6k6k1" namespace
STEP: Deleting all clusters in the capz-e2e-w6k6k1 namespace
STEP: Deleting cluster capz-e2e-w6k6k1-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-w6k6k1/capz-e2e-w6k6k1-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-w6k6k1-public-custom-vnet to be deleted
W1108 19:20:33.115644   24251 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I1108 19:21:04.157844   24251 trace.go:205] Trace[1713036267]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (08-Nov-2021 19:20:34.157) (total time: 30000ms):
Trace[1713036267]: [30.000588395s] [30.000588395s] END
E1108 19:21:04.157904   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp 20.36.234.94:6443: i/o timeout
I1108 19:21:36.165901   24251 trace.go:205] Trace[39181932]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (08-Nov-2021 19:21:06.164) (total time: 30001ms):
Trace[39181932]: [30.001022562s] [30.001022562s] END
E1108 19:21:36.165969   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp 20.36.234.94:6443: i/o timeout
I1108 19:22:09.770434   24251 trace.go:205] Trace[703798704]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (08-Nov-2021 19:21:39.768) (total time: 30001ms):
Trace[703798704]: [30.001482107s] [30.001482107s] END
E1108 19:22:09.770492   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp 20.36.234.94:6443: i/o timeout
I1108 19:22:47.784390   24251 trace.go:205] Trace[1343619074]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (08-Nov-2021 19:22:17.783) (total time: 30000ms):
Trace[1343619074]: [30.000909107s] [30.000909107s] END
E1108 19:22:47.784455   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp 20.36.234.94:6443: i/o timeout
I1108 19:23:35.059118   24251 trace.go:205] Trace[227291941]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (08-Nov-2021 19:23:05.058) (total time: 30000ms):
Trace[227291941]: [30.000757483s] [30.000757483s] END
E1108 19:23:35.059176   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp 20.36.234.94:6443: i/o timeout
I1108 19:24:44.575734   24251 trace.go:205] Trace[151480741]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (08-Nov-2021 19:24:14.574) (total time: 30001ms):
Trace[151480741]: [30.001121812s] [30.001121812s] END
E1108 19:24:44.575789   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp 20.36.234.94:6443: i/o timeout
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-w6k6k1
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Nov  8 19:25:55.047: INFO: deleting an existing virtual network "custom-vnet"
I1108 19:25:56.593965   24251 trace.go:205] Trace[14964883]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (08-Nov-2021 19:25:26.592) (total time: 30001ms):
Trace[14964883]: [30.001255564s] [30.001255564s] END
E1108 19:25:56.594021   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp 20.36.234.94:6443: i/o timeout
Nov  8 19:26:05.526: INFO: deleting an existing route table "node-routetable"
Nov  8 19:26:15.843: INFO: deleting an existing network security group "node-nsg"
Nov  8 19:26:26.120: INFO: deleting an existing network security group "control-plane-nsg"
Nov  8 19:26:36.415: INFO: verifying the existing resource group "capz-e2e-w6k6k1-public-custom-vnet" is empty
Nov  8 19:26:36.786: INFO: deleting the existing resource group "capz-e2e-w6k6k1-public-custom-vnet"
E1108 19:26:37.287182   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp: lookup capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1108 19:27:15.385124   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp: lookup capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1108 19:28:00.352652   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp: lookup capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1108 19:28:57.054448   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp: lookup capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 52m27s on Ginkgo node 1 of 3


• [SLOW TEST:3146.828 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:377

INFO: "with a single control plane node and 1 node" started at Mon, 08 Nov 2021 19:16:48 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-w50z7v" for hosting the cluster
Nov  8 19:16:48.615: INFO: starting to create namespace for hosting the "capz-e2e-w50z7v" test spec
2021/11/08 19:16:48 failed trying to get namespace (capz-e2e-w50z7v):namespaces "capz-e2e-w50z7v" not found
INFO: Creating namespace capz-e2e-w50z7v
INFO: Creating event watcher for namespace "capz-e2e-w50z7v"
Nov  8 19:16:48.656: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-w50z7v-gpu
INFO: Creating the workload cluster with name "capz-e2e-w50z7v-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 58 lines ...
STEP: Fetching activity logs took 567.107086ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-w50z7v" namespace
STEP: Deleting all clusters in the capz-e2e-w50z7v namespace
STEP: Deleting cluster capz-e2e-w50z7v-gpu
INFO: Waiting for the Cluster capz-e2e-w50z7v/capz-e2e-w50z7v-gpu to be deleted
STEP: Waiting for cluster capz-e2e-w50z7v-gpu to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-w50z7v-gpu-control-plane-wvzlc, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-pxlfr, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-m5jsx, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-w50z7v-gpu-control-plane-wvzlc, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-w50z7v-gpu-control-plane-wvzlc, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-w50z7v-gpu-control-plane-wvzlc, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-4dmjz, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-hn2t5, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-kl4d4, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-w50z7v
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and 1 node" ran for 19m54s on Ginkgo node 3 of 3

... skipping 10 lines ...
  with a 1 control plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:419

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Mon, 08 Nov 2021 19:23:30 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-0j3h19" for hosting the cluster
Nov  8 19:23:30.811: INFO: starting to create namespace for hosting the "capz-e2e-0j3h19" test spec
2021/11/08 19:23:30 failed trying to get namespace (capz-e2e-0j3h19):namespaces "capz-e2e-0j3h19" not found
INFO: Creating namespace capz-e2e-0j3h19
INFO: Creating event watcher for namespace "capz-e2e-0j3h19"
Nov  8 19:23:30.855: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-0j3h19-oot
INFO: Creating the workload cluster with name "capz-e2e-0j3h19-oot" using the "external-cloud-provider" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 53 lines ...
STEP: waiting for job default/curl-to-elb-jobgk8rg0nw7c4 to be complete
Nov  8 19:33:54.280: INFO: waiting for job default/curl-to-elb-jobgk8rg0nw7c4 to be complete
Nov  8 19:34:04.361: INFO: job default/curl-to-elb-jobgk8rg0nw7c4 is complete, took 10.080165818s
STEP: connecting directly to the external LB service
Nov  8 19:34:04.361: INFO: starting attempts to connect directly to the external LB service
2021/11/08 19:34:04 [DEBUG] GET http://40.65.233.54
2021/11/08 19:34:34 [ERR] GET http://40.65.233.54 request failed: Get "http://40.65.233.54": dial tcp 40.65.233.54:80: i/o timeout
2021/11/08 19:34:34 [DEBUG] GET http://40.65.233.54: retrying in 1s (4 left)
Nov  8 19:34:35.429: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov  8 19:34:35.429: INFO: starting to delete external LB service webfxbn9i-elb
Nov  8 19:34:35.493: INFO: starting to delete deployment webfxbn9i
Nov  8 19:34:35.531: INFO: starting to delete job curl-to-elb-jobgk8rg0nw7c4
... skipping 56 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Mon, 08 Nov 2021 19:36:42 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-l24umo" for hosting the cluster
Nov  8 19:36:42.271: INFO: starting to create namespace for hosting the "capz-e2e-l24umo" test spec
2021/11/08 19:36:42 failed trying to get namespace (capz-e2e-l24umo):namespaces "capz-e2e-l24umo" not found
INFO: Creating namespace capz-e2e-l24umo
INFO: Creating event watcher for namespace "capz-e2e-l24umo"
Nov  8 19:36:42.310: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-l24umo-win-ha
INFO: Creating the workload cluster with name "capz-e2e-l24umo-win-ha" using the "windows" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 151 lines ...
STEP: Fetching activity logs took 1.086749607s
STEP: Dumping all the Cluster API resources in the "capz-e2e-l24umo" namespace
STEP: Deleting all clusters in the capz-e2e-l24umo namespace
STEP: Deleting cluster capz-e2e-l24umo-win-ha
INFO: Waiting for the Cluster capz-e2e-l24umo/capz-e2e-l24umo-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-l24umo-win-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-l24umo-win-ha-control-plane-s28bt, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-fztm2, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-l24umo-win-ha-control-plane-s28bt, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-l24umo-win-ha-control-plane-6w2pk, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-f7fwp, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-q6jjs, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-7mmkj, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-l24umo-win-ha-control-plane-6w2pk, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-wjnmc, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-76fgz, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-l24umo-win-ha-control-plane-6w2pk, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-l24umo-win-ha-control-plane-6w2pk, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-j9b4h, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-l24umo-win-ha-control-plane-s28bt, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-l24umo-win-ha-control-plane-s28bt, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-8m8ld, container kube-flannel: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-l24umo
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 30m0s on Ginkgo node 3 of 3

... skipping 12 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454

INFO: "with a single control plane node and 1 node" started at Mon, 08 Nov 2021 19:29:10 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-vktz1g" for hosting the cluster
Nov  8 19:29:10.437: INFO: starting to create namespace for hosting the "capz-e2e-vktz1g" test spec
2021/11/08 19:29:10 failed trying to get namespace (capz-e2e-vktz1g):namespaces "capz-e2e-vktz1g" not found
INFO: Creating namespace capz-e2e-vktz1g
INFO: Creating event watcher for namespace "capz-e2e-vktz1g"
Nov  8 19:29:10.480: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-vktz1g-aks
INFO: Creating the workload cluster with name "capz-e2e-vktz1g-aks" using the "aks-multi-tenancy" template (Kubernetes v1.19.13, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 7 lines ...
machinepool.cluster.x-k8s.io/agentpool1 created
azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1108 19:29:44.908355   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp: lookup capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1108 19:30:37.847109   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp: lookup capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1108 19:31:26.874269   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp: lookup capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1108 19:32:06.098937   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp: lookup capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1108 19:33:00.199131   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp: lookup capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1108 19:33:42.565158   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp: lookup capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
Nov  8 19:33:42.902: INFO: Waiting for the first control plane machine managed by capz-e2e-vktz1g/capz-e2e-vktz1g-aks to be provisioned
STEP: Waiting for atleast one control plane node to exist
E1108 19:34:40.812218   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp: lookup capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1108 19:35:27.523438   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp: lookup capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1108 19:36:04.604845   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp: lookup capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1108 19:36:52.649435   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp: lookup capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1108 19:37:52.275628   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp: lookup capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1108 19:38:25.497511   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp: lookup capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1108 19:39:22.454908   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp: lookup capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1108 19:40:18.181292   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp: lookup capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1108 19:41:17.363165   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp: lookup capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1108 19:41:55.520543   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp: lookup capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1108 19:42:53.546662   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp: lookup capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1108 19:43:38.364642   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp: lookup capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1108 19:44:37.662796   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp: lookup capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1108 19:45:16.876024   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp: lookup capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1108 19:46:10.923710   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp: lookup capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1108 19:46:47.756056   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp: lookup capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1108 19:47:45.742351   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp: lookup capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1108 19:48:25.961738   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp: lookup capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1108 19:49:24.278256   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp: lookup capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1108 19:50:12.562466   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp: lookup capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1108 19:51:03.722532   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp: lookup capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1108 19:52:02.481527   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp: lookup capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1108 19:53:01.113224   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp: lookup capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1108 19:53:34.792430   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp: lookup capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Dumping logs from the "capz-e2e-vktz1g-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-vktz1g/capz-e2e-vktz1g-aks logs
STEP: Dumping workload cluster capz-e2e-vktz1g/capz-e2e-vktz1g-aks kube-system pod logs
STEP: Fetching kube-system pod logs took 442.633172ms
STEP: Creating log watcher for controller kube-system/calico-node-58jd6, container calico-node
STEP: Creating log watcher for controller kube-system/metrics-server-569f6547dd-f9797, container metrics-server
... skipping 10 lines ...
STEP: Fetching activity logs took 823.073772ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-vktz1g" namespace
STEP: Deleting all clusters in the capz-e2e-vktz1g namespace
STEP: Deleting cluster capz-e2e-vktz1g-aks
INFO: Waiting for the Cluster capz-e2e-vktz1g/capz-e2e-vktz1g-aks to be deleted
STEP: Waiting for cluster capz-e2e-vktz1g-aks to be deleted
E1108 19:54:34.639103   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp: lookup capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 17 lines ...
E1108 20:08:28.869303   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp: lookup capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-vktz1g
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1108 20:09:28.518855   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp: lookup capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "with a single control plane node and 1 node" ran for 41m8s on Ginkgo node 1 of 3


• Failure [2467.644 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 57 lines ...
  with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543

INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Mon, 08 Nov 2021 19:48:05 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-0qptnm" for hosting the cluster
Nov  8 19:48:05.171: INFO: starting to create namespace for hosting the "capz-e2e-0qptnm" test spec
2021/11/08 19:48:05 failed trying to get namespace (capz-e2e-0qptnm):namespaces "capz-e2e-0qptnm" not found
INFO: Creating namespace capz-e2e-0qptnm
INFO: Creating event watcher for namespace "capz-e2e-0qptnm"
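The "failed trying to get namespace ... not found" line just above is an expected miss in a get-or-create flow: the lookup failure triggers creation. A sketch of that pattern with client-go (names illustrative, not the suite's actual code):

```go
package e2e

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureNamespace returns the namespace, creating it when the initial Get
// reports NotFound; any other error is surfaced to the caller.
func ensureNamespace(ctx context.Context, cs kubernetes.Interface, name string) (*corev1.Namespace, error) {
	ns, err := cs.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
	if err == nil {
		return ns, nil
	}
	if !apierrors.IsNotFound(err) {
		return nil, err
	}
	log.Printf("failed trying to get namespace (%s): %v", name, err)
	return cs.CoreV1().Namespaces().Create(ctx,
		&corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: name}},
		metav1.CreateOptions{})
}
```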
Nov  8 19:48:05.203: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-0qptnm-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-0qptnm-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 53 lines ...
STEP: waiting for job default/curl-to-elb-jobvrf212961wo to be complete
Nov  8 19:58:28.082: INFO: waiting for job default/curl-to-elb-jobvrf212961wo to be complete
Nov  8 19:58:38.154: INFO: job default/curl-to-elb-jobvrf212961wo is complete, took 10.072108092s
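The "waiting for job ... to be complete" lines above correspond to polling the Job's status conditions until JobComplete turns true. A minimal sketch with client-go (the suite's real helper may differ):

```go
package e2e

import (
	"context"
	"time"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForJobComplete polls until the Job reports a JobComplete condition.
func waitForJobComplete(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(10*time.Second, 5*time.Minute, func() (bool, error) {
		job, err := cs.BatchV1().Jobs(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // tolerate transient errors and keep polling
		}
		for _, cond := range job.Status.Conditions {
			if cond.Type == batchv1.JobComplete && cond.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
}
```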
STEP: connecting directly to the external LB service
Nov  8 19:58:38.154: INFO: starting attempts to connect directly to the external LB service
2021/11/08 19:58:38 [DEBUG] GET http://20.62.30.94
2021/11/08 19:59:08 [ERR] GET http://20.62.30.94 request failed: Get "http://20.62.30.94": dial tcp 20.62.30.94:80: i/o timeout
2021/11/08 19:59:08 [DEBUG] GET http://20.62.30.94: retrying in 1s (4 left)
Nov  8 19:59:09.227: INFO: successfully connected to the external LB service
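The [DEBUG]/[ERR]/"retrying in 1s (4 left)" lines match the default logging of a retrying HTTP client such as hashicorp/go-retryablehttp, which tolerates the i/o timeouts seen while the load balancer rule propagates. A minimal sketch of the pattern, assuming that library (not necessarily the suite's exact helper):

```go
package e2e

import (
	"time"

	retryablehttp "github.com/hashicorp/go-retryablehttp"
)

// getWithRetry keeps retrying the external LB address until it answers or
// the retry budget is exhausted.
func getWithRetry(url string) error {
	client := retryablehttp.NewClient()
	client.RetryMax = 5                 // produces the "(4 left)" countdown style
	client.RetryWaitMin = 1 * time.Second
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	return resp.Body.Close()
}
```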
STEP: deleting the test resources
Nov  8 19:59:09.227: INFO: starting to delete external LB service webi6clkd-elb
Nov  8 19:59:09.296: INFO: starting to delete deployment webi6clkd
Nov  8 19:59:09.330: INFO: starting to delete job curl-to-elb-jobvrf212961wo
... skipping 25 lines ...
STEP: waiting for job default/curl-to-elb-jobr8h3kw1gzvb to be complete
Nov  8 20:03:00.836: INFO: waiting for job default/curl-to-elb-jobr8h3kw1gzvb to be complete
Nov  8 20:03:10.908: INFO: job default/curl-to-elb-jobr8h3kw1gzvb is complete, took 10.072123246s
STEP: connecting directly to the external LB service
Nov  8 20:03:10.908: INFO: starting attempts to connect directly to the external LB service
2021/11/08 20:03:10 [DEBUG] GET http://20.62.31.7
2021/11/08 20:03:40 [ERR] GET http://20.62.31.7 request failed: Get "http://20.62.31.7": dial tcp 20.62.31.7:80: i/o timeout
2021/11/08 20:03:40 [DEBUG] GET http://20.62.31.7: retrying in 1s (4 left)
Nov  8 20:03:41.977: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov  8 20:03:41.977: INFO: starting to delete external LB service web-windowsp4f3yf-elb
Nov  8 20:03:42.052: INFO: starting to delete deployment web-windowsp4f3yf
Nov  8 20:03:42.087: INFO: starting to delete job curl-to-elb-jobr8h3kw1gzvb
... skipping 29 lines ...
STEP: Fetching activity logs took 554.742369ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-0qptnm" namespace
STEP: Deleting all clusters in the capz-e2e-0qptnm namespace
STEP: Deleting cluster capz-e2e-0qptnm-win-vmss
INFO: Waiting for the Cluster capz-e2e-0qptnm/capz-e2e-0qptnm-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-0qptnm-win-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-jnwvt, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-0qptnm-win-vmss-control-plane-9cghj, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-2m25b, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-wjn2l, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-5v6k2, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-q6sm5, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-8rr4w, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-0qptnm-win-vmss-control-plane-9cghj, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-0qptnm-win-vmss-control-plane-9cghj, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-0qptnm-win-vmss-control-plane-9cghj, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-gttzw, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-hgd5b, container coredns: http2: client connection lost
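The "http2: client connection lost" messages above are the expected end state of follow-mode log streams: each stream dies when its node (or the whole cluster) is deleted underneath it. A sketch of the streaming side with client-go (names illustrative):

```go
package e2e

import (
	"context"
	"fmt"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// streamPodLogs copies one container's logs until the stream ends or the
// connection to the (soon-to-be-deleted) cluster drops.
func streamPodLogs(ctx context.Context, cs kubernetes.Interface, ns, pod, container string) {
	req := cs.CoreV1().Pods(ns).GetLogs(pod, &corev1.PodLogOptions{Container: container, Follow: true})
	rc, err := req.Stream(ctx)
	if err != nil {
		fmt.Fprintf(os.Stderr, "Got error while streaming logs for pod %s/%s: %v\n", ns, pod, err)
		return
	}
	defer rc.Close()
	_, _ = io.Copy(os.Stdout, rc) // returns once the connection is lost
}
```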
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-0qptnm
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 30m43s on Ginkgo node 2 of 3

... skipping 3 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating a Windows enabled VMSS cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:542
    with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543
------------------------------
E1108 20:10:21.103015   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp: lookup capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 10 lines ...
E1108 20:18:34.759707   24251 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w6k6k1/events?resourceVersion=8170": dial tcp: lookup capz-e2e-w6k6k1-public-custom-vnet-75581b23.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Workload cluster creation Creating an AKS cluster [It] with a single control plane node and 1 node 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/aks.go:216
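The summarized failure points at aks.go:216, a timed-out readiness assertion. A hedged sketch of the general shape of such a Gomega check (all names and durations here are illustrative, not the file's actual code):

```go
package e2e

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
)

// waitForMachinePoolsReady fails the spec if the readiness probe never
// returns true within the timeout, which Ginkgo reports as a timed-out
// Eventually at the assertion's source line.
func waitForMachinePoolsReady(ctx context.Context, ready func(context.Context) bool, timeout time.Duration) {
	Eventually(func() bool {
		return ready(ctx)
	}, timeout, 30*time.Second).Should(Equal(true))
}
```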

Ran 9 of 22 Specs in 6240.961 seconds
FAIL! -- 8 Passed | 1 Failed | 0 Pending | 13 Skipped


Ginkgo ran 1 suite in 1h45m30.204219149s
Test Suite Failed
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...