Result: FAILURE
Tests: 1 failed / 8 succeeded
Started: 2021-11-10 18:31
Elapsed: 1h57m
Revision: release-0.5

Test Failures


capz-e2e Workload cluster creation Creating a GPU-enabled cluster with a single control plane node and 1 node (37m18s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\sGPU\-enabled\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:377
Timed out after 1200.000s.
Expected
    <bool>: false
to be true
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:76
				
stdout/stderr from junit.e2e_suite.2.xml
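For context, the assertion output above ("Timed out after 1200.000s. Expected <bool>: false to be true") is the standard Gomega Eventually timeout message: a boolean condition was polled for 20 minutes and never became true. A minimal sketch of that pattern, assuming a Ginkgo/Gomega-style test; the test name and condition function below are illustrative, not the actual code at azure_gpu.go:76:

package e2e

import (
	"testing"
	"time"

	"github.com/onsi/gomega"
)

// TestGPUWorkload is a hypothetical stand-in for the GPU-enabled cluster spec.
func TestGPUWorkload(t *testing.T) {
	g := gomega.NewWithT(t)

	// Placeholder condition: the real spec polls whether a GPU test workload succeeded.
	gpuJobSucceeded := func() bool { return false }

	// Polls every 10s for up to 20 minutes; if the condition never returns true,
	// the failure reads "Timed out after 1200.000s. Expected <bool>: false to be true".
	g.Eventually(gpuJobSucceeded, 20*time.Minute, 10*time.Second).Should(gomega.BeTrue())
}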



8 Passed Tests

13 Skipped Tests

Error lines from build-log.txt

... skipping 433 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:269

INFO: "With ipv6 worker node" started at Wed, 10 Nov 2021 18:37:37 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-441at2" for hosting the cluster
Nov 10 18:37:37.509: INFO: starting to create namespace for hosting the "capz-e2e-441at2" test spec
2021/11/10 18:37:37 failed trying to get namespace (capz-e2e-441at2):namespaces "capz-e2e-441at2" not found
INFO: Creating namespace capz-e2e-441at2
INFO: Creating event watcher for namespace "capz-e2e-441at2"
Nov 10 18:37:37.573: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-441at2-ipv6
INFO: Creating the workload cluster with name "capz-e2e-441at2-ipv6" using the "ipv6" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
STEP: Fetching activity logs took 587.368251ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-441at2" namespace
STEP: Deleting all clusters in the capz-e2e-441at2 namespace
STEP: Deleting cluster capz-e2e-441at2-ipv6
INFO: Waiting for the Cluster capz-e2e-441at2/capz-e2e-441at2-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-441at2-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-441at2-ipv6-control-plane-r2zsz, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-gpksk, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-441at2-ipv6-control-plane-z4qg5, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-nwcc2, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-441at2-ipv6-control-plane-scpkf, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-pwf8v, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-441at2-ipv6-control-plane-r2zsz, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-441at2-ipv6-control-plane-r2zsz, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-h4kf7, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5w4sm, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-441at2-ipv6-control-plane-scpkf, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-gjqmt, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-tcwvf, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-441at2-ipv6-control-plane-scpkf, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-441at2-ipv6-control-plane-r2zsz, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-9q9vz, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-441at2-ipv6-control-plane-z4qg5, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-78wbd, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-4c2sk, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-441at2-ipv6-control-plane-z4qg5, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-441at2-ipv6-control-plane-z4qg5, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-8gflm, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-441at2-ipv6-control-plane-scpkf, container etcd: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-441at2
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 19m50s on Ginkgo node 3 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:203

INFO: "With 3 control-plane nodes and 2 worker nodes" started at Wed, 10 Nov 2021 18:37:37 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-qxu6se" for hosting the cluster
Nov 10 18:37:37.507: INFO: starting to create namespace for hosting the "capz-e2e-qxu6se" test spec
2021/11/10 18:37:37 failed trying to get namespace (capz-e2e-qxu6se):namespaces "capz-e2e-qxu6se" not found
INFO: Creating namespace capz-e2e-qxu6se
INFO: Creating event watcher for namespace "capz-e2e-qxu6se"
Nov 10 18:37:37.568: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-qxu6se-ha
INFO: Creating the workload cluster with name "capz-e2e-qxu6se-ha" using the "(default)" template (Kubernetes v1.22.1, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 59 lines ...
STEP: waiting for job default/curl-to-elb-job0le5w65zc17 to be complete
Nov 10 18:47:30.446: INFO: waiting for job default/curl-to-elb-job0le5w65zc17 to be complete
Nov 10 18:47:40.660: INFO: job default/curl-to-elb-job0le5w65zc17 is complete, took 10.213124694s
STEP: connecting directly to the external LB service
Nov 10 18:47:40.660: INFO: starting attempts to connect directly to the external LB service
2021/11/10 18:47:40 [DEBUG] GET http://20.108.56.228
2021/11/10 18:48:10 [ERR] GET http://20.108.56.228 request failed: Get "http://20.108.56.228": dial tcp 20.108.56.228:80: i/o timeout
2021/11/10 18:48:10 [DEBUG] GET http://20.108.56.228: retrying in 1s (4 left)
Nov 10 18:48:11.869: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 10 18:48:11.870: INFO: starting to delete external LB service webgi37sm-elb
Nov 10 18:48:12.022: INFO: starting to delete deployment webgi37sm
Nov 10 18:48:12.134: INFO: starting to delete job curl-to-elb-job0le5w65zc17
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Nov 10 18:48:12.311: INFO: starting to create dev deployment namespace
2021/11/10 18:48:12 failed trying to get namespace (development):namespaces "development" not found
2021/11/10 18:48:12 namespace development does not exist, creating...
STEP: Creating production namespace
Nov 10 18:48:12.532: INFO: starting to create prod deployment namespace
2021/11/10 18:48:12 failed trying to get namespace (production):namespaces "production" not found
2021/11/10 18:48:12 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
Nov 10 18:48:12.752: INFO: starting to create frontend-prod deployments
Nov 10 18:48:12.863: INFO: starting to create frontend-dev deployments
Nov 10 18:48:12.981: INFO: starting to create backend deployments
Nov 10 18:48:13.090: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Nov 10 18:48:40.415: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.7.3 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 10 18:50:52.260: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
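For reference, the development/backend-deny-ingress policy exercised above corresponds to a deny-all-ingress NetworkPolicy: it selects the app: webapp, role: backend pods and lists Ingress as a policy type with no allow rules. A minimal sketch using the upstream Kubernetes API types; the helper function name is illustrative, not the test suite's own code:

package e2e

import (
	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// backendDenyIngressPolicy builds a policy equivalent to development/backend-deny-ingress.
func backendDenyIngressPolicy() *networkingv1.NetworkPolicy {
	return &networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "backend-deny-ingress",
			Namespace: "development",
		},
		Spec: networkingv1.NetworkPolicySpec{
			// Select the backend pods of the webapp.
			PodSelector: metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "webapp", "role": "backend"},
			},
			// Ingress is governed by this policy and no ingress rules are listed,
			// so all ingress traffic to the selected pods is denied.
			PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
		},
	}
}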
STEP: Applying a network policy to deny egress access in development namespace
Nov 10 18:50:52.646: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.7.3 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.7.3 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 10 18:55:13.429: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Nov 10 18:55:13.863: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.12.68 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 10 18:57:26.552: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
Nov 10 18:57:26.990: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.7.1 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.12.68 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 10 19:01:50.737: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Nov 10 19:01:51.131: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.7.3 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 10 19:04:02.788: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
Nov 10 19:04:03.172: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.7.3 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: Dumping logs from the "capz-e2e-qxu6se-ha" workload cluster
STEP: Dumping workload cluster capz-e2e-qxu6se/capz-e2e-qxu6se-ha logs
Nov 10 19:06:15.791: INFO: INFO: Collecting logs for node capz-e2e-qxu6se-ha-control-plane-p9p7b in cluster capz-e2e-qxu6se-ha in namespace capz-e2e-qxu6se

Nov 10 19:06:26.717: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-qxu6se-ha-control-plane-p9p7b
... skipping 39 lines ...
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-qxu6se-ha-control-plane-gjhgv, container etcd
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-qxu6se-ha-control-plane-p9p7b, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-g7lzq, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-qxu6se-ha-control-plane-gjhgv, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-qxu6se-ha-control-plane-p9p7b, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-qxu6se-ha-control-plane-ckhwb, container kube-apiserver
STEP: Got error while iterating over activity logs for resource group capz-e2e-qxu6se-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000266109s
STEP: Dumping all the Cluster API resources in the "capz-e2e-qxu6se" namespace
STEP: Deleting all clusters in the capz-e2e-qxu6se namespace
STEP: Deleting cluster capz-e2e-qxu6se-ha
INFO: Waiting for the Cluster capz-e2e-qxu6se/capz-e2e-qxu6se-ha to be deleted
STEP: Waiting for cluster capz-e2e-qxu6se-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-qxu6se-ha-control-plane-p9p7b, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-qxu6se-ha-control-plane-p9p7b, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-nhd5k, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-qxu6se-ha-control-plane-p9p7b, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-r28dp, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-xwvg9, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-qxu6se-ha-control-plane-gjhgv, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-g7lzq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-kvl52, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-92hzz, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-qxu6se-ha-control-plane-ckhwb, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-lfhgx, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-hgl7d, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-qxu6se-ha-control-plane-p9p7b, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-v246q, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-vnd4z, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-qxu6se-ha-control-plane-gjhgv, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-xf7px, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-qxu6se-ha-control-plane-gjhgv, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-qxu6se-ha-control-plane-gjhgv, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-qxu6se-ha-control-plane-ckhwb, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-5vgw8, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-lz725, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-qxu6se-ha-control-plane-ckhwb, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-qxu6se-ha-control-plane-ckhwb, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-qxu6se
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 worker nodes" ran for 42m46s on Ginkgo node 2 of 3

... skipping 8 lines ...
  with a single control plane node and an AzureMachinePool with 2 nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:315

INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" started at Wed, 10 Nov 2021 18:57:27 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-05a9z6" for hosting the cluster
Nov 10 18:57:27.088: INFO: starting to create namespace for hosting the "capz-e2e-05a9z6" test spec
2021/11/10 18:57:27 failed trying to get namespace (capz-e2e-05a9z6):namespaces "capz-e2e-05a9z6" not found
INFO: Creating namespace capz-e2e-05a9z6
INFO: Creating event watcher for namespace "capz-e2e-05a9z6"
Nov 10 18:57:27.118: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-05a9z6-vmss
INFO: Creating the workload cluster with name "capz-e2e-05a9z6-vmss" using the "machine-pool" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 106 lines ...
STEP: Fetching activity logs took 601.233103ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-05a9z6" namespace
STEP: Deleting all clusters in the capz-e2e-05a9z6 namespace
STEP: Deleting cluster capz-e2e-05a9z6-vmss
INFO: Waiting for the Cluster capz-e2e-05a9z6/capz-e2e-05a9z6-vmss to be deleted
STEP: Waiting for cluster capz-e2e-05a9z6-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-05a9z6-vmss-control-plane-mwc56, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-05a9z6-vmss-control-plane-mwc56, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-2dzzw, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-dv9hw, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-05a9z6-vmss-control-plane-mwc56, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-z9vsb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-jm546, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-bnrmx, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-05a9z6-vmss-control-plane-mwc56, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-htqw2, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-hr4df, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-sbtkz, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-pg79r, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-05a9z6
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" ran for 23m20s on Ginkgo node 3 of 3

... skipping 10 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:141

INFO: "Creates a public management cluster in the same vnet" started at Wed, 10 Nov 2021 18:37:37 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-xofyv0" for hosting the cluster
Nov 10 18:37:37.485: INFO: starting to create namespace for hosting the "capz-e2e-xofyv0" test spec
2021/11/10 18:37:37 failed trying to get namespace (capz-e2e-xofyv0):namespaces "capz-e2e-xofyv0" not found
INFO: Creating namespace capz-e2e-xofyv0
INFO: Creating event watcher for namespace "capz-e2e-xofyv0"
Nov 10 18:37:37.520: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-xofyv0-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-gxhdd, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-pf85j, container coredns
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-xofyv0-public-custom-vnet-control-plane-rqtvw, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-xofyv0-public-custom-vnet-control-plane-rqtvw, container etcd
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-bp5fm, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/calico-node-v9xq4, container calico-node
STEP: Got error while iterating over activity logs for resource group capz-e2e-xofyv0-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000803831s
STEP: Dumping all the Cluster API resources in the "capz-e2e-xofyv0" namespace
STEP: Deleting all clusters in the capz-e2e-xofyv0 namespace
STEP: Deleting cluster capz-e2e-xofyv0-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-xofyv0/capz-e2e-xofyv0-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-xofyv0-public-custom-vnet to be deleted
W1110 19:24:52.017948   24207 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I1110 19:25:23.591739   24207 trace.go:205] Trace[770227388]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (10-Nov-2021 19:24:53.590) (total time: 30001ms):
Trace[770227388]: [30.001497992s] [30.001497992s] END
E1110 19:25:23.591856   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp 20.90.138.20:6443: i/o timeout
I1110 19:25:55.827954   24207 trace.go:205] Trace[199038122]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (10-Nov-2021 19:25:25.827) (total time: 30000ms):
Trace[199038122]: [30.000582016s] [30.000582016s] END
E1110 19:25:55.828017   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp 20.90.138.20:6443: i/o timeout
I1110 19:26:29.789404   24207 trace.go:205] Trace[1682509780]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (10-Nov-2021 19:25:59.788) (total time: 30000ms):
Trace[1682509780]: [30.000636467s] [30.000636467s] END
E1110 19:26:29.789469   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp 20.90.138.20:6443: i/o timeout
I1110 19:27:10.673259   24207 trace.go:205] Trace[1872538591]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (10-Nov-2021 19:26:40.670) (total time: 30002ms):
Trace[1872538591]: [30.002427731s] [30.002427731s] END
E1110 19:27:10.673325   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp 20.90.138.20:6443: i/o timeout
I1110 19:27:55.696156   24207 trace.go:205] Trace[2122256220]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (10-Nov-2021 19:27:25.693) (total time: 30002ms):
Trace[2122256220]: [30.002125885s] [30.002125885s] END
E1110 19:27:55.696216   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp 20.90.138.20:6443: i/o timeout
I1110 19:29:02.992427   24207 trace.go:205] Trace[2009141836]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (10-Nov-2021 19:28:32.991) (total time: 30000ms):
Trace[2009141836]: [30.000585191s] [30.000585191s] END
E1110 19:29:02.992493   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp 20.90.138.20:6443: i/o timeout
I1110 19:30:04.564500   24207 trace.go:205] Trace[1546360590]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (10-Nov-2021 19:29:34.563) (total time: 30001ms):
Trace[1546360590]: [30.001364602s] [30.001364602s] END
E1110 19:30:04.564560   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp 20.90.138.20:6443: i/o timeout
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-xofyv0
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Nov 10 19:30:07.334: INFO: deleting an existing virtual network "custom-vnet"
Nov 10 19:30:18.503: INFO: deleting an existing route table "node-routetable"
Nov 10 19:30:29.109: INFO: deleting an existing network security group "node-nsg"
Nov 10 19:30:39.924: INFO: deleting an existing network security group "control-plane-nsg"
E1110 19:30:47.645469   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 10 19:30:50.552: INFO: verifying the existing resource group "capz-e2e-xofyv0-public-custom-vnet" is empty
Nov 10 19:30:51.188: INFO: deleting the existing resource group "capz-e2e-xofyv0-public-custom-vnet"
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1110 19:31:34.008416   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 54m34s on Ginkgo node 1 of 3


• [SLOW TEST:3273.974 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a 1 control plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:419

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Wed, 10 Nov 2021 19:20:47 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-r3d38r" for hosting the cluster
Nov 10 19:20:47.185: INFO: starting to create namespace for hosting the "capz-e2e-r3d38r" test spec
2021/11/10 19:20:47 failed trying to get namespace (capz-e2e-r3d38r):namespaces "capz-e2e-r3d38r" not found
INFO: Creating namespace capz-e2e-r3d38r
INFO: Creating event watcher for namespace "capz-e2e-r3d38r"
Nov 10 19:20:47.218: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-r3d38r-oot
INFO: Creating the workload cluster with name "capz-e2e-r3d38r-oot" using the "external-cloud-provider" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 53 lines ...
STEP: waiting for job default/curl-to-elb-jobbm1tzfsqugm to be complete
Nov 10 19:29:12.371: INFO: waiting for job default/curl-to-elb-jobbm1tzfsqugm to be complete
Nov 10 19:29:22.582: INFO: job default/curl-to-elb-jobbm1tzfsqugm is complete, took 10.210577449s
STEP: connecting directly to the external LB service
Nov 10 19:29:22.582: INFO: starting attempts to connect directly to the external LB service
2021/11/10 19:29:22 [DEBUG] GET http://20.108.58.147
2021/11/10 19:29:52 [ERR] GET http://20.108.58.147 request failed: Get "http://20.108.58.147": dial tcp 20.108.58.147:80: i/o timeout
2021/11/10 19:29:52 [DEBUG] GET http://20.108.58.147: retrying in 1s (4 left)
Nov 10 19:29:53.790: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 10 19:29:53.790: INFO: starting to delete external LB service webc5wv6c-elb
Nov 10 19:29:53.922: INFO: starting to delete deployment webc5wv6c
Nov 10 19:29:54.028: INFO: starting to delete job curl-to-elb-jobbm1tzfsqugm
... skipping 34 lines ...
STEP: Fetching activity logs took 579.80903ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-r3d38r" namespace
STEP: Deleting all clusters in the capz-e2e-r3d38r namespace
STEP: Deleting cluster capz-e2e-r3d38r-oot
INFO: Waiting for the Cluster capz-e2e-r3d38r/capz-e2e-r3d38r-oot to be deleted
STEP: Waiting for cluster capz-e2e-r3d38r-oot to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-4jbrx, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-vvsd2, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-jljn9, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-whtrr, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-mfk6k, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-c4j8f, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-r3d38r
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 21m17s on Ginkgo node 3 of 3

... skipping 10 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454

INFO: "with a single control plane node and 1 node" started at Wed, 10 Nov 2021 19:32:11 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-sus98y" for hosting the cluster
Nov 10 19:32:11.462: INFO: starting to create namespace for hosting the "capz-e2e-sus98y" test spec
2021/11/10 19:32:11 failed trying to get namespace (capz-e2e-sus98y):namespaces "capz-e2e-sus98y" not found
INFO: Creating namespace capz-e2e-sus98y
INFO: Creating event watcher for namespace "capz-e2e-sus98y"
Nov 10 19:32:11.506: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-sus98y-aks
INFO: Creating the workload cluster with name "capz-e2e-sus98y-aks" using the "aks-multi-tenancy" template (Kubernetes v1.19.13, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 7 lines ...
machinepool.cluster.x-k8s.io/agentpool1 created
azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1110 19:32:27.217308   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 19:33:05.429890   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 19:33:39.335407   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 19:34:26.599978   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 19:35:00.118086   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 19:35:48.097160   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 19:36:34.119223   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
Nov 10 19:36:53.005: INFO: Waiting for the first control plane machine managed by capz-e2e-sus98y/capz-e2e-sus98y-aks to be provisioned
STEP: Waiting for atleast one control plane node to exist
INFO: Waiting for control plane to be ready
Nov 10 19:37:03.041: INFO: Waiting for the first control plane machine managed by capz-e2e-sus98y/capz-e2e-sus98y-aks to be provisioned
STEP: Waiting for all control plane nodes to exist
... skipping 13 lines ...
STEP: time sync OK for host aks-agentpool1-16742328-vmss000000
STEP: time sync OK for host aks-agentpool1-16742328-vmss000000
STEP: Dumping logs from the "capz-e2e-sus98y-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-sus98y/capz-e2e-sus98y-aks logs
Nov 10 19:37:11.270: INFO: INFO: Collecting logs for node aks-agentpool1-16742328-vmss000000 in cluster capz-e2e-sus98y-aks in namespace capz-e2e-sus98y

E1110 19:37:32.414231   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 19:38:31.006699   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 19:39:01.318926   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 10 19:39:20.697: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool0, cluster capz-e2e-sus98y/capz-e2e-sus98y-aks: [dialing public load balancer at capz-e2e-sus98y-aks-bd118bd7.hcp.uksouth.azmk8s.io: dial tcp 20.50.111.2:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
Nov 10 19:39:21.879: INFO: INFO: Collecting logs for node aks-agentpool1-16742328-vmss000000 in cluster capz-e2e-sus98y-aks in namespace capz-e2e-sus98y

E1110 19:39:36.474386   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 19:40:14.543950   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 19:40:53.848014   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 10 19:41:31.765: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool1, cluster capz-e2e-sus98y/capz-e2e-sus98y-aks: [dialing public load balancer at capz-e2e-sus98y-aks-bd118bd7.hcp.uksouth.azmk8s.io: dial tcp 20.50.111.2:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
STEP: Dumping workload cluster capz-e2e-sus98y/capz-e2e-sus98y-aks kube-system pod logs
STEP: Fetching kube-system pod logs took 962.758936ms
STEP: Dumping workload cluster capz-e2e-sus98y/capz-e2e-sus98y-aks Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-8mtzf, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-bc548, container calico-node
STEP: Creating log watcher for controller kube-system/calico-typha-horizontal-autoscaler-599c7bb664-zm6wp, container autoscaler
... skipping 8 lines ...
STEP: Fetching activity logs took 618.280838ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-sus98y" namespace
STEP: Deleting all clusters in the capz-e2e-sus98y namespace
STEP: Deleting cluster capz-e2e-sus98y-aks
INFO: Waiting for the Cluster capz-e2e-sus98y/capz-e2e-sus98y-aks to be deleted
STEP: Waiting for cluster capz-e2e-sus98y-aks to be deleted
E1110 19:41:38.662005   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 19:42:10.241139   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 19:42:40.568826   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 19:43:13.683318   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 19:44:11.747888   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 19:44:46.321777   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 19:45:45.522427   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 19:46:36.418768   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 19:47:11.984283   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 19:47:58.291692   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-sus98y
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1110 19:48:35.113709   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 19:49:15.398277   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 19:50:04.797725   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "with a single control plane node and 1 node" ran for 18m3s on Ginkgo node 1 of 3


• [SLOW TEST:1082.865 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:377

INFO: "with a single control plane node and 1 node" started at Wed, 10 Nov 2021 19:20:23 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-dr9v42" for hosting the cluster
Nov 10 19:20:23.373: INFO: starting to create namespace for hosting the "capz-e2e-dr9v42" test spec
2021/11/10 19:20:23 failed trying to get namespace (capz-e2e-dr9v42):namespaces "capz-e2e-dr9v42" not found
INFO: Creating namespace capz-e2e-dr9v42
INFO: Creating event watcher for namespace "capz-e2e-dr9v42"
Nov 10 19:20:23.408: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-dr9v42-gpu
INFO: Creating the workload cluster with name "capz-e2e-dr9v42-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 53 lines ...
STEP: Fetching activity logs took 1.139270566s
STEP: Dumping all the Cluster API resources in the "capz-e2e-dr9v42" namespace
STEP: Deleting all clusters in the capz-e2e-dr9v42 namespace
STEP: Deleting cluster capz-e2e-dr9v42-gpu
INFO: Waiting for the Cluster capz-e2e-dr9v42/capz-e2e-dr9v42-gpu to be deleted
STEP: Waiting for cluster capz-e2e-dr9v42-gpu to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-dr9v42-gpu-control-plane-jlfg7, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-96w5j, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-lzwcl, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-84vgr, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-p4vds, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-djtfq, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-dr9v42-gpu-control-plane-jlfg7, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-dr9v42-gpu-control-plane-jlfg7, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-dr9v42-gpu-control-plane-jlfg7, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-dr9v42
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and 1 node" ran for 37m18s on Ginkgo node 2 of 3

... skipping 59 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Wed, 10 Nov 2021 19:42:03 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-pi34jd" for hosting the cluster
Nov 10 19:42:03.729: INFO: starting to create namespace for hosting the "capz-e2e-pi34jd" test spec
2021/11/10 19:42:03 failed trying to get namespace (capz-e2e-pi34jd):namespaces "capz-e2e-pi34jd" not found
INFO: Creating namespace capz-e2e-pi34jd
INFO: Creating event watcher for namespace "capz-e2e-pi34jd"
Nov 10 19:42:03.763: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-pi34jd-win-ha
INFO: Creating the workload cluster with name "capz-e2e-pi34jd-win-ha" using the "windows" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 55 lines ...
STEP: waiting for job default/curl-to-elb-jobmhymzwczr21 to be complete
Nov 10 19:51:19.038: INFO: waiting for job default/curl-to-elb-jobmhymzwczr21 to be complete
Nov 10 19:51:29.249: INFO: job default/curl-to-elb-jobmhymzwczr21 is complete, took 10.210926559s
STEP: connecting directly to the external LB service
Nov 10 19:51:29.249: INFO: starting attempts to connect directly to the external LB service
2021/11/10 19:51:29 [DEBUG] GET http://20.108.73.97
2021/11/10 19:51:59 [ERR] GET http://20.108.73.97 request failed: Get "http://20.108.73.97": dial tcp 20.108.73.97:80: i/o timeout
2021/11/10 19:51:59 [DEBUG] GET http://20.108.73.97: retrying in 1s (4 left)
Nov 10 19:52:00.461: INFO: successfully connected to the external LB service
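The [DEBUG]/[ERR] lines above, with their "retrying in 1s (4 left)" countdown, match the default logging of a retrying HTTP client such as hashicorp/go-retryablehttp. A minimal Go sketch of probing the external LB that way (the URL is the Service's public IP from the log, hard-coded here purely for illustration):

// Minimal sketch of probing an external LB with a retrying HTTP client.
// The URL is illustrative; the e2e suite derives it from the Service's ingress IP.
package main

import (
    "fmt"
    "log"

    "github.com/hashicorp/go-retryablehttp"
)

func main() {
    client := retryablehttp.NewClient()
    client.RetryMax = 4 // retry budget after the initial attempt; the "(4 left)" in the log suggests a similar budget
    resp, err := client.Get("http://20.108.73.97")
    if err != nil {
        log.Fatalf("external LB never became reachable: %v", err)
    }
    defer resp.Body.Close()
    fmt.Println("successfully connected to the external LB service:", resp.Status)
}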
STEP: deleting the test resources
Nov 10 19:52:00.461: INFO: starting to delete external LB service web1xvvms-elb
Nov 10 19:52:00.616: INFO: starting to delete deployment web1xvvms
Nov 10 19:52:00.727: INFO: starting to delete job curl-to-elb-jobmhymzwczr21
... skipping 25 lines ...
STEP: waiting for job default/curl-to-elb-jobd15l2weglvo to be complete
Nov 10 19:54:13.908: INFO: waiting for job default/curl-to-elb-jobd15l2weglvo to be complete
Nov 10 19:54:24.118: INFO: job default/curl-to-elb-jobd15l2weglvo is complete, took 10.210259017s
STEP: connecting directly to the external LB service
Nov 10 19:54:24.118: INFO: starting attempts to connect directly to the external LB service
2021/11/10 19:54:24 [DEBUG] GET http://20.108.76.42
2021/11/10 19:54:54 [ERR] GET http://20.108.76.42 request failed: Get "http://20.108.76.42": dial tcp 20.108.76.42:80: i/o timeout
2021/11/10 19:54:54 [DEBUG] GET http://20.108.76.42: retrying in 1s (4 left)
Nov 10 19:55:10.664: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 10 19:55:10.664: INFO: starting to delete external LB service web-windowsn3ww8v-elb
Nov 10 19:55:10.805: INFO: starting to delete deployment web-windowsn3ww8v
Nov 10 19:55:10.925: INFO: starting to delete job curl-to-elb-jobd15l2weglvo
... skipping 43 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-cz6m4, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-windows-amd64-gkgjp, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-9grwq, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-proxy-p9ch2, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-rb5c9, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-frcpr, container coredns
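Each "Creating log watcher" step above corresponds to a goroutine that follows one container's logs through the workload cluster's API server; when that API server later becomes unreachable during cluster deletion, those streams break, which is what the "Got error while streaming logs ... http2: client connection lost" lines further down report. A hypothetical Go sketch of such a watcher (not the suite's actual helper):

// Hypothetical sketch of a per-container log watcher: open a follow stream for one
// container and copy it until the stream breaks (e.g. when the cluster goes away).
package logwatch

import (
    "context"
    "fmt"
    "io"
    "os"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/client-go/kubernetes"
)

func watchContainerLogs(ctx context.Context, c kubernetes.Interface, ns, pod, container string) {
    req := c.CoreV1().Pods(ns).GetLogs(pod, &corev1.PodLogOptions{Container: container, Follow: true})
    stream, err := req.Stream(ctx)
    if err != nil {
        fmt.Printf("Got error while streaming logs for pod %s/%s, container %s: %v\n", ns, pod, container, err)
        return
    }
    defer stream.Close()
    if _, err := io.Copy(os.Stdout, stream); err != nil {
        fmt.Printf("Got error while streaming logs for pod %s/%s, container %s: %v\n", ns, pod, container, err)
    }
}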
STEP: Got error while iterating over activity logs for resource group capz-e2e-pi34jd-win-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.00115222s
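The 30-second fetch time, together with the "context deadline exceeded" error above it, suggests the activity-log dump pages results under a context with roughly a 30s deadline and stops when that deadline fires. A generic Go sketch of that shape (the pager interface is hypothetical, standing in for whatever Azure client the suite actually uses):

// Generic sketch of paging results under a 30s context deadline.
// activityLogPager is a hypothetical stand-in, not the Azure SDK's actual type;
// the point is only the context.WithTimeout plus per-page error handling shape.
package logcollector

import (
    "context"
    "fmt"
    "time"
)

type activityLogPager interface {
    NextPage(ctx context.Context) (entries []string, more bool, err error)
}

func dumpActivityLogs(pager activityLogPager) {
    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    defer cancel()

    for {
        entries, more, err := pager.NextPage(ctx)
        if err != nil {
            // Matches the behavior seen in the log: report the error and stop iterating.
            fmt.Println("Got error while iterating over activity logs:", err)
            return
        }
        for _, e := range entries {
            fmt.Println(e)
        }
        if !more {
            return
        }
    }
}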
STEP: Dumping all the Cluster API resources in the "capz-e2e-pi34jd" namespace
STEP: Deleting all clusters in the capz-e2e-pi34jd namespace
STEP: Deleting cluster capz-e2e-pi34jd-win-ha
INFO: Waiting for the Cluster capz-e2e-pi34jd/capz-e2e-pi34jd-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-pi34jd-win-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-frcpr, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-pi34jd-win-ha-control-plane-2l79g, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-tt2jw, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-pi34jd-win-ha-control-plane-2l79g, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-pvs7d, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-cz6m4, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-pi34jd-win-ha-control-plane-9g42m, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-pi34jd-win-ha-control-plane-9g42m, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-mkmcx, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-gxnvs, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-pi34jd-win-ha-control-plane-2l79g, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-9grwq, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-p9ch2, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-pi34jd-win-ha-control-plane-9g42m, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-sqx68, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-pi34jd-win-ha-control-plane-9g42m, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-pi34jd-win-ha-control-plane-2l79g, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-pi34jd
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 36m7s on Ginkgo node 3 of 3

... skipping 10 lines ...
  with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543

INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Wed, 10 Nov 2021 19:50:14 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-vdb4mm" for hosting the cluster
Nov 10 19:50:14.331: INFO: starting to create namespace for hosting the "capz-e2e-vdb4mm" test spec
2021/11/10 19:50:14 failed trying to get namespace (capz-e2e-vdb4mm):namespaces "capz-e2e-vdb4mm" not found
INFO: Creating namespace capz-e2e-vdb4mm
INFO: Creating event watcher for namespace "capz-e2e-vdb4mm"
Nov 10 19:50:14.375: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-vdb4mm-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-vdb4mm-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 12 lines ...
kubeadmconfig.bootstrap.cluster.x-k8s.io/capz-e2e-vdb4mm-win-vmss-mp-win created
clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-vdb4mm-win-vmss-flannel created
configmap/cni-capz-e2e-vdb4mm-win-vmss-flannel created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1110 19:50:41.437200   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 19:51:27.855044   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by capz-e2e-vdb4mm/capz-e2e-vdb4mm-win-vmss-control-plane to be provisioned
STEP: Waiting for one control plane node to exist
E1110 19:52:25.945786   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 19:53:06.270810   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane capz-e2e-vdb4mm/capz-e2e-vdb4mm-win-vmss-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
E1110 19:53:45.479130   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for the machine deployments to be provisioned
INFO: Waiting for the machine pools to be provisioned
STEP: Waiting for the machine pool workload nodes to exist
E1110 19:54:30.233980   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 19:55:16.243754   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Waiting for the machine pool workload nodes to exist
E1110 19:55:46.613900   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 19:56:39.096622   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 19:57:22.017861   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 19:57:52.226735   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 19:58:22.777704   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
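The repeated reflector errors interleaved above appear to come from an event watcher still pointed at the API server of an earlier test cluster (the capz-e2e-xofyv0 custom-vnet one), whose DNS name no longer resolves ("no such host"); they are noise relative to the capz-e2e-vdb4mm spec running here. Such a watcher is ordinarily built on client-go's informer machinery, whose internal reflector retries the list/watch and logs exactly this kind of failure; a hypothetical Go sketch:

// Hypothetical sketch of the kind of event watcher behind those reflector messages:
// the informer's reflector re-lists and re-watches Events and logs a failure each time
// the list call fails, for example when the target host no longer resolves.
package eventwatch

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/fields"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/cache"
)

func watchEvents(clientset kubernetes.Interface, namespace string, stop <-chan struct{}) {
    lw := cache.NewListWatchFromClient(
        clientset.CoreV1().RESTClient(), "events", namespace, fields.Everything())

    _, informer := cache.NewInformer(lw, &corev1.Event{}, 0, cache.ResourceEventHandlerFuncs{
        AddFunc: func(obj interface{}) {
            if ev, ok := obj.(*corev1.Event); ok {
                fmt.Printf("%s/%s: %s\n", ev.Namespace, ev.Name, ev.Message)
            }
        },
    })
    informer.Run(stop) // the reflector inside retries list/watch and logs failures like the E1110 lines
}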
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web8ml9wf to be available
Nov 10 19:58:37.259: INFO: starting to wait for deployment to become available
Nov 10 19:58:57.596: INFO: Deployment default/web8ml9wf is now available, took 20.337735749s
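Waiting for a deployment "to be available", as above, usually means polling the Deployment until its Available condition is true. A hypothetical Go sketch of that wait (interval, timeout, and helper name are assumptions):

// Hypothetical sketch of waiting for a Deployment to report an Available condition;
// names and durations are illustrative, not the suite's actual helper.
package deploywait

import (
    "context"
    "time"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
)

func waitForDeploymentAvailable(ctx context.Context, c kubernetes.Interface, ns, name string) error {
    return wait.PollImmediate(5*time.Second, 5*time.Minute, func() (bool, error) {
        d, err := c.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, nil // keep polling on transient errors
        }
        for _, cond := range d.Status.Conditions {
            if cond.Type == appsv1.DeploymentAvailable && cond.Status == corev1.ConditionTrue {
                return true, nil
            }
        }
        return false, nil
    })
}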
STEP: creating an internal Load Balancer service
Nov 10 19:58:57.596: INFO: starting to create an internal Load Balancer service
STEP: waiting for service default/web8ml9wf-ilb to be available
Nov 10 19:58:57.713: INFO: waiting for service default/web8ml9wf-ilb to be available
E1110 19:58:58.115852   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 19:59:42.322117   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 10 19:59:58.454: INFO: service default/web8ml9wf-ilb is available, took 1m0.740448979s
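An "internal Load Balancer service" on an Azure cloud provider cluster is an ordinary Service of type LoadBalancer carrying the azure-load-balancer-internal annotation, which tells the cloud provider to create an internal rather than public load balancer. A hypothetical Go sketch of creating one with client-go (selector and port are illustrative):

// Hypothetical sketch of the "internal Load Balancer service" step: a Service of type
// LoadBalancer annotated for the Azure cloud provider to provision an internal LB.
// The name mirrors the log; the selector and port are illustrative.
package ilbservice

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
    "k8s.io/client-go/kubernetes"
)

func createInternalLBService(ctx context.Context, c kubernetes.Interface) (*corev1.Service, error) {
    svc := &corev1.Service{
        ObjectMeta: metav1.ObjectMeta{
            Name:      "web8ml9wf-ilb",
            Namespace: "default",
            Annotations: map[string]string{
                "service.beta.kubernetes.io/azure-load-balancer-internal": "true",
            },
        },
        Spec: corev1.ServiceSpec{
            Type:     corev1.ServiceTypeLoadBalancer,
            Selector: map[string]string{"app": "web8ml9wf"},
            Ports: []corev1.ServicePort{{
                Port:       80,
                TargetPort: intstr.FromInt(80),
            }},
        },
    }
    return c.CoreV1().Services("default").Create(ctx, svc, metav1.CreateOptions{})
}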
STEP: connecting to the internal LB service from a curl pod
Nov 10 19:59:58.558: INFO: starting to create a curl to ilb job
STEP: waiting for job default/curl-to-ilb-jobbd5oo to be complete
Nov 10 19:59:58.676: INFO: waiting for job default/curl-to-ilb-jobbd5oo to be complete
Nov 10 20:00:08.886: INFO: job default/curl-to-ilb-jobbd5oo is complete, took 10.209917117s
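The curl-to-ilb job is a one-shot batch Job that curls the internal LB address from inside the cluster, so the Job completing is what proves reachability. A hypothetical Go sketch (image, backoff, and names are illustrative):

// Hypothetical sketch of a curl-to-ilb style Job: run curl once against the internal
// LB IP and let the Job's completion signal reachability. Image and names are illustrative.
package curljob

import (
    "context"

    batchv1 "k8s.io/api/batch/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

func createCurlToILBJob(ctx context.Context, c kubernetes.Interface, ilbIP string) (*batchv1.Job, error) {
    backoff := int32(3)
    job := &batchv1.Job{
        ObjectMeta: metav1.ObjectMeta{Name: "curl-to-ilb-job", Namespace: "default"},
        Spec: batchv1.JobSpec{
            BackoffLimit: &backoff,
            Template: corev1.PodTemplateSpec{
                Spec: corev1.PodSpec{
                    RestartPolicy: corev1.RestartPolicyNever,
                    Containers: []corev1.Container{{
                        Name:    "curl",
                        Image:   "curlimages/curl",
                        Command: []string{"curl", "-sSf", "http://" + ilbIP},
                    }},
                },
            },
        },
    }
    return c.BatchV1().Jobs("default").Create(ctx, job, metav1.CreateOptions{})
}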
STEP: deleting the ilb test resources
Nov 10 20:00:08.886: INFO: deleting the ilb service: web8ml9wf-ilb
Nov 10 20:00:09.023: INFO: deleting the ilb job: curl-to-ilb-jobbd5oo
STEP: creating an external Load Balancer service
Nov 10 20:00:09.130: INFO: starting to create an external Load Balancer service
STEP: waiting for service default/web8ml9wf-elb to be available
Nov 10 20:00:09.248: INFO: waiting for service default/web8ml9wf-elb to be available
E1110 20:00:23.518302   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 20:01:07.146796   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 20:01:37.813078   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 10 20:01:50.408: INFO: service default/web8ml9wf-elb is available, took 1m41.159639592s
STEP: connecting to the external LB service from a curl pod
Nov 10 20:01:50.513: INFO: starting to create curl-to-elb job
STEP: waiting for job default/curl-to-elb-job6nb0cb9733w to be complete
Nov 10 20:01:50.621: INFO: waiting for job default/curl-to-elb-job6nb0cb9733w to be complete
Nov 10 20:02:00.830: INFO: job default/curl-to-elb-job6nb0cb9733w is complete, took 10.209404197s
STEP: connecting directly to the external LB service
Nov 10 20:02:00.830: INFO: starting attempts to connect directly to the external LB service
2021/11/10 20:02:00 [DEBUG] GET http://20.108.60.166
E1110 20:02:10.735270   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
2021/11/10 20:02:30 [ERR] GET http://20.108.60.166 request failed: Get "http://20.108.60.166": dial tcp 20.108.60.166:80: i/o timeout
2021/11/10 20:02:30 [DEBUG] GET http://20.108.60.166: retrying in 1s (4 left)
E1110 20:02:53.740468   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
2021/11/10 20:03:01 [ERR] GET http://20.108.60.166 request failed: Get "http://20.108.60.166": dial tcp 20.108.60.166:80: i/o timeout
2021/11/10 20:03:01 [DEBUG] GET http://20.108.60.166: retrying in 2s (3 left)
Nov 10 20:03:04.038: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 10 20:03:04.038: INFO: starting to delete external LB service web8ml9wf-elb
Nov 10 20:03:04.181: INFO: starting to delete deployment web8ml9wf
Nov 10 20:03:04.286: INFO: starting to delete job curl-to-elb-job6nb0cb9733w
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web-windowshuk30j to be available
Nov 10 20:03:04.639: INFO: starting to wait for deployment to become available
E1110 20:03:45.717977   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 20:04:28.101582   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 10 20:04:55.997: INFO: Deployment default/web-windowshuk30j is now available, took 1m51.358017462s
STEP: creating an internal Load Balancer service
Nov 10 20:04:55.997: INFO: starting to create an internal Load Balancer service
STEP: waiting for service default/web-windowshuk30j-ilb to be available
Nov 10 20:04:56.119: INFO: waiting for service default/web-windowshuk30j-ilb to be available
E1110 20:05:07.493801   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 20:05:43.428578   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 10 20:05:56.857: INFO: service default/web-windowshuk30j-ilb is available, took 1m0.737903346s
STEP: connecting to the internal LB service from a curl pod
Nov 10 20:05:56.962: INFO: starting to create a curl to ilb job
STEP: waiting for job default/curl-to-ilb-jobajuzi to be complete
Nov 10 20:05:57.070: INFO: waiting for job default/curl-to-ilb-jobajuzi to be complete
Nov 10 20:06:07.280: INFO: job default/curl-to-ilb-jobajuzi is complete, took 10.209963717s
STEP: deleting the ilb test resources
Nov 10 20:06:07.280: INFO: deleting the ilb service: web-windowshuk30j-ilb
Nov 10 20:06:07.516: INFO: deleting the ilb job: curl-to-ilb-jobajuzi
STEP: creating an external Load Balancer service
Nov 10 20:06:07.622: INFO: starting to create an external Load Balancer service
STEP: waiting for service default/web-windowshuk30j-elb to be available
Nov 10 20:06:07.746: INFO: waiting for service default/web-windowshuk30j-elb to be available
E1110 20:06:31.519557   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 20:07:24.107942   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 10 20:07:48.904: INFO: service default/web-windowshuk30j-elb is available, took 1m41.157413837s
STEP: connecting to the external LB service from a curl pod
Nov 10 20:07:49.009: INFO: starting to create curl-to-elb job
STEP: waiting for job default/curl-to-elb-job272764ve706 to be complete
Nov 10 20:07:49.118: INFO: waiting for job default/curl-to-elb-job272764ve706 to be complete
Nov 10 20:07:59.330: INFO: job default/curl-to-elb-job272764ve706 is complete, took 10.211436336s
STEP: connecting directly to the external LB service
Nov 10 20:07:59.330: INFO: starting attempts to connect directly to the external LB service
2021/11/10 20:07:59 [DEBUG] GET http://20.108.59.240
E1110 20:08:19.484906   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
2021/11/10 20:08:29 [ERR] GET http://20.108.59.240 request failed: Get "http://20.108.59.240": dial tcp 20.108.59.240:80: i/o timeout
2021/11/10 20:08:29 [DEBUG] GET http://20.108.59.240: retrying in 1s (4 left)
Nov 10 20:08:30.549: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 10 20:08:30.549: INFO: starting to delete external LB service web-windowshuk30j-elb
Nov 10 20:08:30.679: INFO: starting to delete deployment web-windowshuk30j
Nov 10 20:08:30.787: INFO: starting to delete job curl-to-elb-job272764ve706
... skipping 6 lines ...
Nov 10 20:08:44.088: INFO: INFO: Collecting logs for node capz-e2e-vdb4mm-win-vmss-mp-0000000 in cluster capz-e2e-vdb4mm-win-vmss in namespace capz-e2e-vdb4mm

Nov 10 20:08:56.213: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-e2e-vdb4mm-win-vmss-mp-0

Nov 10 20:08:56.701: INFO: INFO: Collecting logs for node win-p-win000000 in cluster capz-e2e-vdb4mm-win-vmss in namespace capz-e2e-vdb4mm

E1110 20:09:19.513920   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 20:10:16.963874   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 10 20:10:24.270: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set win-p-win

STEP: Dumping workload cluster capz-e2e-vdb4mm/capz-e2e-vdb4mm-win-vmss kube-system pod logs
STEP: Fetching kube-system pod logs took 1.006576852s
STEP: Dumping workload cluster capz-e2e-vdb4mm/capz-e2e-vdb4mm-win-vmss Azure activity log
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-windows-amd64-92z6p, container kube-flannel
... skipping 5 lines ...
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-vdb4mm-win-vmss-control-plane-pmstg, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-lrdck, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-7c7qn, container coredns
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-f6rn4, container coredns
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-vdb4mm-win-vmss-control-plane-pmstg, container etcd
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-rk9g5, container kube-flannel
STEP: Got error while iterating over activity logs for resource group capz-e2e-vdb4mm-win-vmss: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000761489s
STEP: Dumping all the Cluster API resources in the "capz-e2e-vdb4mm" namespace
STEP: Deleting all clusters in the capz-e2e-vdb4mm namespace
STEP: Deleting cluster capz-e2e-vdb4mm-win-vmss
INFO: Waiting for the Cluster capz-e2e-vdb4mm/capz-e2e-vdb4mm-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-vdb4mm-win-vmss to be deleted
E1110 20:11:11.016803   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 20:11:52.365755   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-9kmjq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-kdbq5, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-cgcj4, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-92z6p, container kube-flannel: http2: client connection lost
E1110 20:12:49.595751   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 20:13:43.367461   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 20:14:18.878797   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 20:14:58.392587   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 20:15:55.143249   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 20:16:54.629959   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 20:17:48.342172   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 20:18:24.222415   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 20:19:10.077437   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 20:19:40.513119   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 20:20:11.332909   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 20:21:10.429379   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 20:21:40.965909   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 20:22:30.269889   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 20:23:01.630621   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 20:23:58.406812   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 20:24:41.154424   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-vdb4mm
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1110 20:25:22.889235   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 20:25:56.352273   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1110 20:26:27.775671   24207 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-xofyv0/events?resourceVersion=8608": dial tcp: lookup capz-e2e-xofyv0-public-custom-vnet-b6011293.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 36m25s on Ginkgo node 1 of 3


• [SLOW TEST:2184.630 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 5 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Workload cluster creation Creating a GPU-enabled cluster [It] with a single control plane node and 1 node 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:76

Ran 9 of 22 Specs in 6653.058 seconds
FAIL! -- 8 Passed | 1 Failed | 0 Pending | 13 Skipped


Ginkgo ran 1 suite in 1h52m11.686371043s
Test Suite Failed
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...