Result: FAILURE
Tests: 1 failed / 8 succeeded
Started: 2021-11-07 18:29
Elapsed: 1h54m
Revision: release-0.5

Test Failures


capz-e2e Workload cluster creation Creating an AKS cluster with a single control plane node and 1 node (46m30s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\san\sAKS\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454
Timed out after 1200.000s.
System machine pools not ready
Expected
    <bool>: false
to equal
    <bool>: true
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/aks.go:216
				
stdout/stderr captured in junit.e2e_suite.1.xml



8 passed tests and 13 skipped tests (details not shown).

Error lines from build-log.txt

... skipping 433 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:269

INFO: "With ipv6 worker node" started at Sun, 07 Nov 2021 18:36:10 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-0cglwf" for hosting the cluster
Nov  7 18:36:10.608: INFO: starting to create namespace for hosting the "capz-e2e-0cglwf" test spec
2021/11/07 18:36:10 failed trying to get namespace (capz-e2e-0cglwf):namespaces "capz-e2e-0cglwf" not found
INFO: Creating namespace capz-e2e-0cglwf
INFO: Creating event watcher for namespace "capz-e2e-0cglwf"
Nov  7 18:36:10.671: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-0cglwf-ipv6
INFO: Creating the workload cluster with name "capz-e2e-0cglwf-ipv6" using the "ipv6" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
STEP: Fetching activity logs took 586.046547ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-0cglwf" namespace
STEP: Deleting all clusters in the capz-e2e-0cglwf namespace
STEP: Deleting cluster capz-e2e-0cglwf-ipv6
INFO: Waiting for the Cluster capz-e2e-0cglwf/capz-e2e-0cglwf-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-0cglwf-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-6jd4z, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-0cglwf-ipv6-control-plane-p2x57, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-292nq, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-6rm22, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-2fkdv, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-0cglwf-ipv6-control-plane-4rlrr, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-2tvj2, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-0cglwf-ipv6-control-plane-4rlrr, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-c48b8, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-0cglwf-ipv6-control-plane-4rlrr, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-b5l2f, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-0cglwf-ipv6-control-plane-zw2g4, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-0cglwf-ipv6-control-plane-p2x57, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-0cglwf-ipv6-control-plane-zw2g4, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-fq9cb, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-0cglwf-ipv6-control-plane-zw2g4, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-0cglwf-ipv6-control-plane-4rlrr, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-ll4qs, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-0cglwf-ipv6-control-plane-p2x57, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-0cglwf-ipv6-control-plane-p2x57, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-g8tjl, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-xljv9, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-0cglwf-ipv6-control-plane-zw2g4, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-0cglwf
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 17m17s on Ginkgo node 3 of 3

... skipping 10 lines ...
  with a single control plane node and an AzureMachinePool with 2 nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:315

INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" started at Sun, 07 Nov 2021 18:53:27 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-ncxfg4" for hosting the cluster
Nov  7 18:53:27.512: INFO: starting to create namespace for hosting the "capz-e2e-ncxfg4" test spec
2021/11/07 18:53:27 failed trying to get namespace (capz-e2e-ncxfg4):namespaces "capz-e2e-ncxfg4" not found
INFO: Creating namespace capz-e2e-ncxfg4
INFO: Creating event watcher for namespace "capz-e2e-ncxfg4"
Nov  7 18:53:27.543: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-ncxfg4-vmss
INFO: Creating the workload cluster with name "capz-e2e-ncxfg4-vmss" using the "machine-pool" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 106 lines ...
STEP: Fetching activity logs took 608.497549ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-ncxfg4" namespace
STEP: Deleting all clusters in the capz-e2e-ncxfg4 namespace
STEP: Deleting cluster capz-e2e-ncxfg4-vmss
INFO: Waiting for the Cluster capz-e2e-ncxfg4/capz-e2e-ncxfg4-vmss to be deleted
STEP: Waiting for cluster capz-e2e-ncxfg4-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-ncxfg4-vmss-control-plane-xdjzr, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-ncxfg4-vmss-control-plane-xdjzr, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-ncxfg4-vmss-control-plane-xdjzr, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-ncxfg4-vmss-control-plane-xdjzr, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-7whfb, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-ncxfg4
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" ran for 18m8s on Ginkgo node 3 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:203

INFO: "With 3 control-plane nodes and 2 worker nodes" started at Sun, 07 Nov 2021 18:36:10 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-q8sslm" for hosting the cluster
Nov  7 18:36:10.605: INFO: starting to create namespace for hosting the "capz-e2e-q8sslm" test spec
2021/11/07 18:36:10 failed trying to get namespace (capz-e2e-q8sslm):namespaces "capz-e2e-q8sslm" not found
INFO: Creating namespace capz-e2e-q8sslm
INFO: Creating event watcher for namespace "capz-e2e-q8sslm"
Nov  7 18:36:10.653: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-q8sslm-ha
INFO: Creating the workload cluster with name "capz-e2e-q8sslm-ha" using the "(default)" template (Kubernetes v1.22.1, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 59 lines ...
STEP: waiting for job default/curl-to-elb-jobspi9k5soumb to be complete
Nov  7 18:46:28.331: INFO: waiting for job default/curl-to-elb-jobspi9k5soumb to be complete
Nov  7 18:46:38.538: INFO: job default/curl-to-elb-jobspi9k5soumb is complete, took 10.206681659s
STEP: connecting directly to the external LB service
Nov  7 18:46:38.538: INFO: starting attempts to connect directly to the external LB service
2021/11/07 18:46:38 [DEBUG] GET http://20.67.183.204
2021/11/07 18:47:08 [ERR] GET http://20.67.183.204 request failed: Get "http://20.67.183.204": dial tcp 20.67.183.204:80: i/o timeout
2021/11/07 18:47:08 [DEBUG] GET http://20.67.183.204: retrying in 1s (4 left)
Nov  7 18:47:09.742: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov  7 18:47:09.742: INFO: starting to delete external LB service web0f2e56-elb
Nov  7 18:47:09.887: INFO: starting to delete deployment web0f2e56
Nov  7 18:47:09.994: INFO: starting to delete job curl-to-elb-jobspi9k5soumb
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Nov  7 18:47:10.134: INFO: starting to create dev deployment namespace
2021/11/07 18:47:10 failed trying to get namespace (development):namespaces "development" not found
2021/11/07 18:47:10 namespace development does not exist, creating...
STEP: Creating production namespace
Nov  7 18:47:10.347: INFO: starting to create prod deployment namespace
2021/11/07 18:47:10 failed trying to get namespace (production):namespaces "production" not found
2021/11/07 18:47:10 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
Nov  7 18:47:10.557: INFO: starting to create frontend-prod deployments
Nov  7 18:47:10.663: INFO: starting to create frontend-dev deployments
Nov  7 18:47:10.784: INFO: starting to create backend deployments
Nov  7 18:47:10.890: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Nov  7 18:47:37.260: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.129.131 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov  7 18:49:48.052: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
STEP: Applying a network policy to deny egress access in development namespace
Nov  7 18:49:48.419: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.129.131 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.129.131 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov  7 18:54:10.721: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Nov  7 18:54:11.095: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.64.196 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov  7 18:56:23.841: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
Nov  7 18:56:24.215: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.129.129 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.64.196 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov  7 19:00:48.032: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Nov  7 19:00:48.422: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.129.131 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov  7 19:03:00.615: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
Nov  7 19:03:00.996: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.129.131 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: Dumping logs from the "capz-e2e-q8sslm-ha" workload cluster
STEP: Dumping workload cluster capz-e2e-q8sslm/capz-e2e-q8sslm-ha logs
Nov  7 19:05:13.059: INFO: INFO: Collecting logs for node capz-e2e-q8sslm-ha-control-plane-tgv4l in cluster capz-e2e-q8sslm-ha in namespace capz-e2e-q8sslm

Nov  7 19:05:25.894: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-q8sslm-ha-control-plane-tgv4l
... skipping 39 lines ...
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-q8sslm-ha-control-plane-vrpzp, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-q8sslm-ha-control-plane-8g452, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-hxdpg, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-4hbf4, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-q8sslm-ha-control-plane-tgv4l, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-mvfkn, container kube-proxy
STEP: Got error while iterating over activity logs for resource group capz-e2e-q8sslm-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000343914s
STEP: Dumping all the Cluster API resources in the "capz-e2e-q8sslm" namespace
STEP: Deleting all clusters in the capz-e2e-q8sslm namespace
STEP: Deleting cluster capz-e2e-q8sslm-ha
INFO: Waiting for the Cluster capz-e2e-q8sslm/capz-e2e-q8sslm-ha to be deleted
STEP: Waiting for cluster capz-e2e-q8sslm-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-pk7gz, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-mccx5, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-q8sslm-ha-control-plane-vrpzp, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-ckbqm, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-q8sslm-ha-control-plane-vrpzp, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-q8sslm-ha-control-plane-vrpzp, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-jgnml, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-mgg8f, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-hxdpg, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-mdb2q, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-q8sslm-ha-control-plane-vrpzp, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-q8sslm
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 worker nodes" ran for 47m29s on Ginkgo node 2 of 3

... skipping 8 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:141

INFO: "Creates a public management cluster in the same vnet" started at Sun, 07 Nov 2021 18:36:10 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-jil946" for hosting the cluster
Nov  7 18:36:10.563: INFO: starting to create namespace for hosting the "capz-e2e-jil946" test spec
2021/11/07 18:36:10 failed trying to get namespace (capz-e2e-jil946):namespaces "capz-e2e-jil946" not found
INFO: Creating namespace capz-e2e-jil946
INFO: Creating event watcher for namespace "capz-e2e-jil946"
Nov  7 18:36:10.595: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-jil946-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-jgt4q, container calico-node
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-jil946-public-custom-vnet-control-plane-lp72j, container kube-controller-manager
STEP: Fetching kube-system pod logs took 575.905804ms
STEP: Dumping workload cluster capz-e2e-jil946/capz-e2e-jil946-public-custom-vnet Azure activity log
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-jil946-public-custom-vnet-control-plane-lp72j, container etcd
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-jil946-public-custom-vnet-control-plane-lp72j, container kube-scheduler
STEP: Got error while iterating over activity logs for resource group capz-e2e-jil946-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000675958s
STEP: Dumping all the Cluster API resources in the "capz-e2e-jil946" namespace
STEP: Deleting all clusters in the capz-e2e-jil946 namespace
STEP: Deleting cluster capz-e2e-jil946-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-jil946/capz-e2e-jil946-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-jil946-public-custom-vnet to be deleted
W1107 19:20:48.376325   24263 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I1107 19:21:19.485509   24263 trace.go:205] Trace[954062253]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (07-Nov-2021 19:20:49.484) (total time: 30001ms):
Trace[954062253]: [30.001125827s] [30.001125827s] END
E1107 19:21:19.485557   24263 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jil946-public-custom-vnet-14bd3e5f.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jil946/events?resourceVersion=8326": dial tcp 20.67.183.223:6443: i/o timeout
I1107 19:21:52.565038   24263 trace.go:205] Trace[2114945480]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (07-Nov-2021 19:21:22.563) (total time: 30001ms):
Trace[2114945480]: [30.001180638s] [30.001180638s] END
E1107 19:21:52.565083   24263 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jil946-public-custom-vnet-14bd3e5f.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jil946/events?resourceVersion=8326": dial tcp 20.67.183.223:6443: i/o timeout
I1107 19:22:28.112449   24263 trace.go:205] Trace[319192756]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (07-Nov-2021 19:21:58.111) (total time: 30001ms):
Trace[319192756]: [30.001064753s] [30.001064753s] END
E1107 19:22:28.112496   24263 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jil946-public-custom-vnet-14bd3e5f.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jil946/events?resourceVersion=8326": dial tcp 20.67.183.223:6443: i/o timeout
I1107 19:23:07.499707   24263 trace.go:205] Trace[1897975855]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (07-Nov-2021 19:22:37.498) (total time: 30000ms):
Trace[1897975855]: [30.000990086s] [30.000990086s] END
E1107 19:23:07.499769   24263 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jil946-public-custom-vnet-14bd3e5f.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jil946/events?resourceVersion=8326": dial tcp 20.67.183.223:6443: i/o timeout
I1107 19:23:58.208396   24263 trace.go:205] Trace[1244494347]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (07-Nov-2021 19:23:28.207) (total time: 30001ms):
Trace[1244494347]: [30.001261769s] [30.001261769s] END
E1107 19:23:58.208447   24263 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jil946-public-custom-vnet-14bd3e5f.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jil946/events?resourceVersion=8326": dial tcp 20.67.183.223:6443: i/o timeout
I1107 19:25:12.682847   24263 trace.go:205] Trace[1273061027]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (07-Nov-2021 19:24:42.681) (total time: 30001ms):
Trace[1273061027]: [30.00119423s] [30.00119423s] END
E1107 19:25:12.682902   24263 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jil946-public-custom-vnet-14bd3e5f.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jil946/events?resourceVersion=8326": dial tcp 20.67.183.223:6443: i/o timeout
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-jil946
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Nov  7 19:26:01.779: INFO: deleting an existing virtual network "custom-vnet"
Nov  7 19:26:12.767: INFO: deleting an existing route table "node-routetable"
I1107 19:26:21.769749   24263 trace.go:205] Trace[522708511]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (07-Nov-2021 19:25:51.768) (total time: 30001ms):
Trace[522708511]: [30.001218095s] [30.001218095s] END
E1107 19:26:21.769797   24263 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jil946-public-custom-vnet-14bd3e5f.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jil946/events?resourceVersion=8326": dial tcp 20.67.183.223:6443: i/o timeout
Nov  7 19:26:23.308: INFO: deleting an existing network security group "node-nsg"
Nov  7 19:26:33.858: INFO: deleting an existing network security group "control-plane-nsg"
Nov  7 19:26:44.388: INFO: verifying the existing resource group "capz-e2e-jil946-public-custom-vnet" is empty
Nov  7 19:26:45.023: INFO: deleting the existing resource group "capz-e2e-jil946-public-custom-vnet"
E1107 19:27:14.613926   24263 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jil946-public-custom-vnet-14bd3e5f.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jil946/events?resourceVersion=8326": dial tcp: lookup capz-e2e-jil946-public-custom-vnet-14bd3e5f.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1107 19:28:12.938009   24263 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jil946-public-custom-vnet-14bd3e5f.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jil946/events?resourceVersion=8326": dial tcp: lookup capz-e2e-jil946-public-custom-vnet-14bd3e5f.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1107 19:28:53.236828   24263 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jil946-public-custom-vnet-14bd3e5f.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jil946/events?resourceVersion=8326": dial tcp: lookup capz-e2e-jil946-public-custom-vnet-14bd3e5f.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 53m5s on Ginkgo node 1 of 3


• [SLOW TEST:3185.028 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:377

INFO: "with a single control plane node and 1 node" started at Sun, 07 Nov 2021 19:11:35 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-jmpzg0" for hosting the cluster
Nov  7 19:11:35.503: INFO: starting to create namespace for hosting the "capz-e2e-jmpzg0" test spec
2021/11/07 19:11:35 failed trying to get namespace (capz-e2e-jmpzg0):namespaces "capz-e2e-jmpzg0" not found
INFO: Creating namespace capz-e2e-jmpzg0
INFO: Creating event watcher for namespace "capz-e2e-jmpzg0"
Nov  7 19:11:35.528: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-jmpzg0-gpu
INFO: Creating the workload cluster with name "capz-e2e-jmpzg0-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 58 lines ...
STEP: Fetching activity logs took 522.823337ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-jmpzg0" namespace
STEP: Deleting all clusters in the capz-e2e-jmpzg0 namespace
STEP: Deleting cluster capz-e2e-jmpzg0-gpu
INFO: Waiting for the Cluster capz-e2e-jmpzg0/capz-e2e-jmpzg0-gpu to be deleted
STEP: Waiting for cluster capz-e2e-jmpzg0-gpu to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-jmpzg0-gpu-control-plane-xts2t, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-jmpzg0-gpu-control-plane-xts2t, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-5wt9j, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-jmpzg0-gpu-control-plane-xts2t, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-m59pr, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-kxgfx, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-2g4ct, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-jmpzg0-gpu-control-plane-xts2t, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-gxs8c, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-jmpzg0
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and 1 node" ran for 19m34s on Ginkgo node 3 of 3

... skipping 10 lines ...
  with a 1 control plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:419

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Sun, 07 Nov 2021 19:23:39 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-5c8679" for hosting the cluster
Nov  7 19:23:39.795: INFO: starting to create namespace for hosting the "capz-e2e-5c8679" test spec
2021/11/07 19:23:39 failed trying to get namespace (capz-e2e-5c8679):namespaces "capz-e2e-5c8679" not found
INFO: Creating namespace capz-e2e-5c8679
INFO: Creating event watcher for namespace "capz-e2e-5c8679"
Nov  7 19:23:39.836: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-5c8679-oot
INFO: Creating the workload cluster with name "capz-e2e-5c8679-oot" using the "external-cloud-provider" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 98 lines ...
STEP: Fetching activity logs took 710.269598ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-5c8679" namespace
STEP: Deleting all clusters in the capz-e2e-5c8679 namespace
STEP: Deleting cluster capz-e2e-5c8679-oot
INFO: Waiting for the Cluster capz-e2e-5c8679/capz-e2e-5c8679-oot to be deleted
STEP: Waiting for cluster capz-e2e-5c8679-oot to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-5c8679-oot-control-plane-tfgxz, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-74snl, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-c48qr, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-k8tdg, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-9xt6v, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-s8xpx, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-dgszt, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-controller-manager, container cloud-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-5c8679-oot-control-plane-tfgxz, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-5c8679-oot-control-plane-tfgxz, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-5c8679-oot-control-plane-tfgxz, container etcd: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-5c8679
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 21m13s on Ginkgo node 2 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Sun, 07 Nov 2021 19:31:09 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-y4c6hs" for hosting the cluster
Nov  7 19:31:09.846: INFO: starting to create namespace for hosting the "capz-e2e-y4c6hs" test spec
2021/11/07 19:31:09 failed trying to get namespace (capz-e2e-y4c6hs):namespaces "capz-e2e-y4c6hs" not found
INFO: Creating namespace capz-e2e-y4c6hs
INFO: Creating event watcher for namespace "capz-e2e-y4c6hs"
Nov  7 19:31:09.873: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-y4c6hs-win-ha
INFO: Creating the workload cluster with name "capz-e2e-y4c6hs-win-ha" using the "windows" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 55 lines ...
STEP: waiting for job default/curl-to-elb-jobyep9wjjblna to be complete
Nov  7 19:42:24.250: INFO: waiting for job default/curl-to-elb-jobyep9wjjblna to be complete
Nov  7 19:42:34.460: INFO: job default/curl-to-elb-jobyep9wjjblna is complete, took 10.210457423s
STEP: connecting directly to the external LB service
Nov  7 19:42:34.460: INFO: starting attempts to connect directly to the external LB service
2021/11/07 19:42:34 [DEBUG] GET http://20.93.15.138
2021/11/07 19:43:04 [ERR] GET http://20.93.15.138 request failed: Get "http://20.93.15.138": dial tcp 20.93.15.138:80: i/o timeout
2021/11/07 19:43:04 [DEBUG] GET http://20.93.15.138: retrying in 1s (4 left)
Nov  7 19:43:08.692: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov  7 19:43:08.692: INFO: starting to delete external LB service web7k7ioy-elb
Nov  7 19:43:08.841: INFO: starting to delete deployment web7k7ioy
Nov  7 19:43:08.948: INFO: starting to delete job curl-to-elb-jobyep9wjjblna
... skipping 79 lines ...
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-y4c6hs-win-ha-control-plane-csc6z, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-9vtbh, container kube-flannel
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-y4c6hs-win-ha-control-plane-mk8km, container etcd
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-545wh, container kube-flannel
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-y4c6hs-win-ha-control-plane-csc6z, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-q9vdj, container kube-proxy
STEP: Got error while iterating over activity logs for resource group capz-e2e-y4c6hs-win-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000646103s
STEP: Dumping all the Cluster API resources in the "capz-e2e-y4c6hs" namespace
STEP: Deleting all clusters in the capz-e2e-y4c6hs namespace
STEP: Deleting cluster capz-e2e-y4c6hs-win-ha
INFO: Waiting for the Cluster capz-e2e-y4c6hs/capz-e2e-y4c6hs-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-y4c6hs-win-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-pc8lx, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-7gtwx, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-q9vdj, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-ftthg, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-y4c6hs-win-ha-control-plane-mk8km, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-y4c6hs-win-ha-control-plane-mk8km, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-y4c6hs-win-ha-control-plane-mk8km, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-y4c6hs-win-ha-control-plane-mk8km, container etcd: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-y4c6hs
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 39m14s on Ginkgo node 3 of 3

... skipping 12 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454

INFO: "with a single control plane node and 1 node" started at Sun, 07 Nov 2021 19:29:15 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-awy91e" for hosting the cluster
Nov  7 19:29:15.593: INFO: starting to create namespace for hosting the "capz-e2e-awy91e" test spec
2021/11/07 19:29:15 failed trying to get namespace (capz-e2e-awy91e):namespaces "capz-e2e-awy91e" not found
INFO: Creating namespace capz-e2e-awy91e
INFO: Creating event watcher for namespace "capz-e2e-awy91e"
Nov  7 19:29:15.625: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-awy91e-aks
INFO: Creating the workload cluster with name "capz-e2e-awy91e-aks" using the "aks-multi-tenancy" template (Kubernetes v1.19.13, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 7 lines ...
machinepool.cluster.x-k8s.io/agentpool1 created
azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1107 19:29:51.747508   24263 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jil946-public-custom-vnet-14bd3e5f.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jil946/events?resourceVersion=8326": dial tcp: lookup capz-e2e-jil946-public-custom-vnet-14bd3e5f.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 5 lines (identical reflector errors repeated, 19:30:42 to 19:33:54) ...
INFO: Waiting for control plane to be initialized
Nov  7 19:33:57.145: INFO: Waiting for the first control plane machine managed by capz-e2e-awy91e/capz-e2e-awy91e-aks to be provisioned
STEP: Waiting for at least one control plane node to exist
E1107 19:34:35.175083   24263 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jil946-public-custom-vnet-14bd3e5f.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jil946/events?resourceVersion=8326": dial tcp: lookup capz-e2e-jil946-public-custom-vnet-14bd3e5f.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 25 lines (identical reflector errors repeated, 19:35:10 to 19:53:21) ...
STEP: Dumping logs from the "capz-e2e-awy91e-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-awy91e/capz-e2e-awy91e-aks logs
Nov  7 19:53:57.211: INFO: INFO: Collecting logs for node aks-agentpool1-93749862-vmss000000 in cluster capz-e2e-awy91e-aks in namespace capz-e2e-awy91e

E1107 19:54:01.364792   24263 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jil946-public-custom-vnet-14bd3e5f.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jil946/events?resourceVersion=8326": dial tcp: lookup capz-e2e-jil946-public-custom-vnet-14bd3e5f.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 2 lines (identical reflector errors repeated, 19:54:45 and 19:55:31) ...
Nov  7 19:56:07.407: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool1, cluster capz-e2e-awy91e/capz-e2e-awy91e-aks: [dialing public load balancer at capz-e2e-awy91e-aks-053da306.hcp.northeurope.azmk8s.io: dial tcp 52.155.232.97:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
STEP: Dumping workload cluster capz-e2e-awy91e/capz-e2e-awy91e-aks kube-system pod logs
STEP: Fetching kube-system pod logs took 951.726879ms
STEP: Creating log watcher for controller kube-system/calico-node-hzqbs, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-f792h, container kube-proxy
STEP: Creating log watcher for controller kube-system/metrics-server-569f6547dd-pdq67, container metrics-server
STEP: Creating log watcher for controller kube-system/kube-proxy-nfd4v, container kube-proxy
... skipping 8 lines ...
STEP: Fetching activity logs took 544.300973ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-awy91e" namespace
STEP: Deleting all clusters in the capz-e2e-awy91e namespace
STEP: Deleting cluster capz-e2e-awy91e-aks
INFO: Waiting for the Cluster capz-e2e-awy91e/capz-e2e-awy91e-aks to be deleted
STEP: Waiting for cluster capz-e2e-awy91e-aks to be deleted
E1107 19:56:21.351637   24263 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jil946-public-custom-vnet-14bd3e5f.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jil946/events?resourceVersion=8326": dial tcp: lookup capz-e2e-jil946-public-custom-vnet-14bd3e5f.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1107 19:57:17.237144   24263 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jil946-public-custom-vnet-14bd3e5f.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jil946/events?resourceVersion=8326": dial tcp: lookup capz-e2e-jil946-public-custom-vnet-14bd3e5f.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1107 19:57:58.612021   24263 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jil946-public-custom-vnet-14bd3e5f.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jil946/events?resourceVersion=8326": dial tcp: lookup capz-e2e-jil946-public-custom-vnet-14bd3e5f.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1107 19:58:55.168051   24263 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jil946-public-custom-vnet-14bd3e5f.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jil946/events?resourceVersion=8326": dial tcp: lookup capz-e2e-jil946-public-custom-vnet-14bd3e5f.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1107 19:59:30.102599   24263 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jil946-public-custom-vnet-14bd3e5f.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jil946/events?resourceVersion=8326": dial tcp: lookup capz-e2e-jil946-public-custom-vnet-14bd3e5f.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 19 lines ...
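The repeated reflector errors above are plain DNS resolution failures: the event watcher keeps retrying its watch against the workload cluster's API server after the cluster's public DNS record has been torn down, so every lookup ends in "no such host" before a TCP dial can even start. A minimal sketch of the same failure mode (the hostname is a hypothetical stand-in using the reserved `.invalid` TLD, so resolution is guaranteed to fail):

```python
import socket

# Resolving a name with no DNS record fails the same way the reflector's
# watch requests do: the lookup errors out before any TCP connection starts.
try:
    socket.getaddrinfo("capz-e2e-example.invalid", 6443)
except socket.gaierror:
    print("no such host")
```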
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-awy91e
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1107 20:14:29.874633   24263 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jil946-public-custom-vnet-14bd3e5f.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jil946/events?resourceVersion=8326": dial tcp: lookup capz-e2e-jil946-public-custom-vnet-14bd3e5f.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1107 20:15:28.902708   24263 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jil946-public-custom-vnet-14bd3e5f.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jil946/events?resourceVersion=8326": dial tcp: lookup capz-e2e-jil946-public-custom-vnet-14bd3e5f.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "with a single control plane node and 1 node" ran for 46m30s on Ginkgo node 1 of 3


• Failure [2790.187 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 57 lines ...
  with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543

INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Sun, 07 Nov 2021 19:44:52 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-y4nqtw" for hosting the cluster
Nov  7 19:44:52.429: INFO: starting to create namespace for hosting the "capz-e2e-y4nqtw" test spec
2021/11/07 19:44:52 failed trying to get namespace (capz-e2e-y4nqtw):namespaces "capz-e2e-y4nqtw" not found
INFO: Creating namespace capz-e2e-y4nqtw
INFO: Creating event watcher for namespace "capz-e2e-y4nqtw"
Nov  7 19:44:52.458: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-y4nqtw-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-y4nqtw-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 89 lines ...
STEP: waiting for job default/curl-to-elb-job8hv4bbzwtxv to be complete
Nov  7 20:03:05.066: INFO: waiting for job default/curl-to-elb-job8hv4bbzwtxv to be complete
Nov  7 20:03:15.270: INFO: job default/curl-to-elb-job8hv4bbzwtxv is complete, took 10.204549902s
STEP: connecting directly to the external LB service
Nov  7 20:03:15.270: INFO: starting attempts to connect directly to the external LB service
2021/11/07 20:03:15 [DEBUG] GET http://20.67.213.63
2021/11/07 20:03:45 [ERR] GET http://20.67.213.63 request failed: Get "http://20.67.213.63": dial tcp 20.67.213.63:80: i/o timeout
2021/11/07 20:03:45 [DEBUG] GET http://20.67.213.63: retrying in 1s (4 left)
Nov  7 20:04:01.723: INFO: successfully connected to the external LB service
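The `[DEBUG] GET ... retrying in 1s (4 left)` lines above come from a retrying HTTP client: the first dial to the freshly provisioned external LB times out, and the client retries with a bounded attempt budget until the LB starts answering. A language-agnostic sketch of that pattern (function and variable names are illustrative, not from the test code):

```python
import time

def get_with_retry(fetch, attempts=5, delay=1.0):
    """Call fetch() until it succeeds or the attempt budget runs out."""
    for remaining in range(attempts - 1, -1, -1):
        try:
            return fetch()
        except OSError as exc:
            if remaining == 0:
                raise
            print(f"request failed: {exc}; retrying in {delay:g}s ({remaining} left)")
            time.sleep(delay)

# Simulate an LB that only starts answering on the third attempt.
calls = []
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise OSError("i/o timeout")
    return "ok"

print(get_with_retry(flaky, attempts=5, delay=0))
```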
STEP: deleting the test resources
Nov  7 20:04:01.723: INFO: starting to delete external LB service web-windowsec7uwf-elb
Nov  7 20:04:01.854: INFO: starting to delete deployment web-windowsec7uwf
Nov  7 20:04:01.959: INFO: starting to delete job curl-to-elb-job8hv4bbzwtxv
... skipping 23 lines ...
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-c62dc, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-y4nqtw-win-vmss-control-plane-28vvn, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-9jgrp, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-x7rx6, container coredns
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-y4nqtw-win-vmss-control-plane-28vvn, container etcd
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-y4nqtw-win-vmss-control-plane-28vvn, container kube-controller-manager
STEP: Got error while iterating over activity logs for resource group capz-e2e-y4nqtw-win-vmss: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000746939s
STEP: Dumping all the Cluster API resources in the "capz-e2e-y4nqtw" namespace
STEP: Deleting all clusters in the capz-e2e-y4nqtw namespace
STEP: Deleting cluster capz-e2e-y4nqtw-win-vmss
INFO: Waiting for the Cluster capz-e2e-y4nqtw/capz-e2e-y4nqtw-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-y4nqtw-win-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-y4nqtw-win-vmss-control-plane-28vvn, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-y4nqtw-win-vmss-control-plane-28vvn, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-mz47b, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-y4nqtw-win-vmss-control-plane-28vvn, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-c62dc, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-x7rx6, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-wgh84, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-y4nqtw-win-vmss-control-plane-28vvn, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-y4nqtw
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 37m6s on Ginkgo node 2 of 3

... skipping 3 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating a Windows enabled VMSS cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:542
    with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543
------------------------------
E1107 20:16:08.226481   24263 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jil946-public-custom-vnet-14bd3e5f.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jil946/events?resourceVersion=8326": dial tcp: lookup capz-e2e-jil946-public-custom-vnet-14bd3e5f.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 8 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Workload cluster creation Creating an AKS cluster [It] with a single control plane node and 1 node 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/aks.go:216

Ran 9 of 22 Specs in 6464.701 seconds
FAIL! -- 8 Passed | 1 Failed | 0 Pending | 13 Skipped
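The failing AKS spec can be re-run in isolation using the ginkgo focus regex from the job header. A sketch (the command is only echoed here, since actually executing it requires Azure credentials and the cluster-api-provider-azure repository checked out locally):

```shell
# Focus regex for the failed spec, taken verbatim from the job header.
FOCUS='capz\-e2e\sWorkload\scluster\screation\sCreating\san\sAKS\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'

# Echo the reproduction command rather than running it; running it needs
# Azure credentials and the repo under the expected GOPATH layout.
echo go run hack/e2e.go -v --test --test_args="--ginkgo.focus=${FOCUS}"
```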


Ginkgo ran 1 suite in 1h49m1.313776403s
Test Suite Failed
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...