Result: FAILURE
Tests: 1 failed / 8 succeeded
Started: 2021-11-15 18:32
Elapsed: 1h55m
Revision: release-0.5

Test Failures


capz-e2e Workload cluster creation Creating a Windows enabled VMSS cluster with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node 34m39s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\sWindows\senabled\sVMSS\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\san\sLinux\sAzureMachinePool\swith\s1\snodes\sand\sWindows\sAzureMachinePool\swith\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543
Timed out after 900.002s.
Expected
    <int>: 0
to equal
    <int>: 1
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.3/framework/machinepool_helpers.go:85
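The failure has the shape of a Gomega Eventually timeout raised from the cluster-api machine pool helpers: the framework polled a replica count, expected it to reach 1, and it stayed at 0 for the full ~900s window. Below is a minimal, hypothetical sketch of that assertion pattern; countReadyReplicas and the test name are placeholders, not the actual code in machinepool_helpers.go.

package e2e_test

import (
	"testing"
	"time"

	. "github.com/onsi/gomega"
)

// countReadyReplicas stands in for the framework's lookup of ready
// MachinePool replicas; in this sketch it never progresses past 0.
func countReadyReplicas() int {
	return 0
}

func TestMachinePoolNodesBecomeReady(t *testing.T) {
	g := NewWithT(t)
	// Poll every 10s for up to 15 minutes. If the count never reaches 1,
	// Gomega fails with "Timed out after ...s. Expected <int>: 0 to equal <int>: 1",
	// which matches the output shown above.
	g.Eventually(countReadyReplicas, 15*time.Minute, 10*time.Second).Should(Equal(1))
}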
				



8 Passed Tests

13 Skipped Tests

Error lines from build-log.txt

... skipping 438 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:269

INFO: "With ipv6 worker node" started at Mon, 15 Nov 2021 18:38:52 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-i1rrwt" for hosting the cluster
Nov 15 18:38:52.452: INFO: starting to create namespace for hosting the "capz-e2e-i1rrwt" test spec
2021/11/15 18:38:52 failed trying to get namespace (capz-e2e-i1rrwt):namespaces "capz-e2e-i1rrwt" not found
INFO: Creating namespace capz-e2e-i1rrwt
INFO: Creating event watcher for namespace "capz-e2e-i1rrwt"
Nov 15 18:38:52.508: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-i1rrwt-ipv6
INFO: Creating the workload cluster with name "capz-e2e-i1rrwt-ipv6" using the "ipv6" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
STEP: Fetching activity logs took 579.663392ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-i1rrwt" namespace
STEP: Deleting all clusters in the capz-e2e-i1rrwt namespace
STEP: Deleting cluster capz-e2e-i1rrwt-ipv6
INFO: Waiting for the Cluster capz-e2e-i1rrwt/capz-e2e-i1rrwt-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-i1rrwt-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-i1rrwt-ipv6-control-plane-5dcg4, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-i1rrwt-ipv6-control-plane-5dcg4, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-i1rrwt-ipv6-control-plane-5dcg4, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-nbjpx, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-7cz5t, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-i1rrwt-ipv6-control-plane-q55bs, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-i1rrwt-ipv6-control-plane-ngsx6, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-rg4r5, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-qztb5, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-i1rrwt-ipv6-control-plane-q55bs, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-8h5z6, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-i1rrwt-ipv6-control-plane-ngsx6, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-bc56c, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-7jm48, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-i1rrwt-ipv6-control-plane-q55bs, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-i1rrwt-ipv6-control-plane-5dcg4, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-rjwwk, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-58ljb, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-i1rrwt-ipv6-control-plane-ngsx6, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-i1rrwt-ipv6-control-plane-ngsx6, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-gt8mc, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vp8t9, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-i1rrwt-ipv6-control-plane-q55bs, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-i1rrwt
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 15m20s on Ginkgo node 3 of 3

... skipping 10 lines ...
  with a single control plane node and an AzureMachinePool with 2 nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:315

INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" started at Mon, 15 Nov 2021 18:54:12 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-3nbxjs" for hosting the cluster
Nov 15 18:54:12.532: INFO: starting to create namespace for hosting the "capz-e2e-3nbxjs" test spec
2021/11/15 18:54:12 failed trying to get namespace (capz-e2e-3nbxjs):namespaces "capz-e2e-3nbxjs" not found
INFO: Creating namespace capz-e2e-3nbxjs
INFO: Creating event watcher for namespace "capz-e2e-3nbxjs"
Nov 15 18:54:12.561: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-3nbxjs-vmss
INFO: Creating the workload cluster with name "capz-e2e-3nbxjs-vmss" using the "machine-pool" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 52 lines ...
STEP: waiting for job default/curl-to-elb-job4bmuisuho7x to be complete
Nov 15 19:04:22.863: INFO: waiting for job default/curl-to-elb-job4bmuisuho7x to be complete
Nov 15 19:04:32.906: INFO: job default/curl-to-elb-job4bmuisuho7x is complete, took 10.042906046s
STEP: connecting directly to the external LB service
Nov 15 19:04:32.906: INFO: starting attempts to connect directly to the external LB service
2021/11/15 19:04:32 [DEBUG] GET http://65.52.11.89
2021/11/15 19:05:02 [ERR] GET http://65.52.11.89 request failed: Get "http://65.52.11.89": dial tcp 65.52.11.89:80: i/o timeout
2021/11/15 19:05:02 [DEBUG] GET http://65.52.11.89: retrying in 1s (4 left)
Nov 15 19:05:03.937: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 15 19:05:03.937: INFO: starting to delete external LB service webqflm4s-elb
Nov 15 19:05:03.992: INFO: starting to delete deployment webqflm4s
Nov 15 19:05:04.013: INFO: starting to delete job curl-to-elb-job4bmuisuho7x
... skipping 43 lines ...
STEP: Fetching activity logs took 1.012296195s
STEP: Dumping all the Cluster API resources in the "capz-e2e-3nbxjs" namespace
STEP: Deleting all clusters in the capz-e2e-3nbxjs namespace
STEP: Deleting cluster capz-e2e-3nbxjs-vmss
INFO: Waiting for the Cluster capz-e2e-3nbxjs/capz-e2e-3nbxjs-vmss to be deleted
STEP: Waiting for cluster capz-e2e-3nbxjs-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-3nbxjs-vmss-control-plane-kknnh, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-dc6zj, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-mms9l, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-3nbxjs-vmss-control-plane-kknnh, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-q5mjf, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-3nbxjs-vmss-control-plane-kknnh, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-pvhjz, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-mb5jk, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-3nbxjs-vmss-control-plane-kknnh, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-3nbxjs
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" ran for 19m35s on Ginkgo node 3 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:203

INFO: "With 3 control-plane nodes and 2 worker nodes" started at Mon, 15 Nov 2021 18:38:52 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-oh5z1p" for hosting the cluster
Nov 15 18:38:52.451: INFO: starting to create namespace for hosting the "capz-e2e-oh5z1p" test spec
2021/11/15 18:38:52 failed trying to get namespace (capz-e2e-oh5z1p):namespaces "capz-e2e-oh5z1p" not found
INFO: Creating namespace capz-e2e-oh5z1p
INFO: Creating event watcher for namespace "capz-e2e-oh5z1p"
Nov 15 18:38:52.503: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-oh5z1p-ha
INFO: Creating the workload cluster with name "capz-e2e-oh5z1p-ha" using the "(default)" template (Kubernetes v1.22.1, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 57 lines ...
STEP: waiting for job default/curl-to-elb-jobd1hh9ups3jk to be complete
Nov 15 18:49:26.076: INFO: waiting for job default/curl-to-elb-jobd1hh9ups3jk to be complete
Nov 15 18:49:36.109: INFO: job default/curl-to-elb-jobd1hh9ups3jk is complete, took 10.033019769s
STEP: connecting directly to the external LB service
Nov 15 18:49:36.109: INFO: starting attempts to connect directly to the external LB service
2021/11/15 18:49:36 [DEBUG] GET http://52.162.153.64
2021/11/15 18:50:06 [ERR] GET http://52.162.153.64 request failed: Get "http://52.162.153.64": dial tcp 52.162.153.64:80: i/o timeout
2021/11/15 18:50:06 [DEBUG] GET http://52.162.153.64: retrying in 1s (4 left)
Nov 15 18:50:22.388: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 15 18:50:22.388: INFO: starting to delete external LB service web6rpjmr-elb
Nov 15 18:50:22.502: INFO: starting to delete deployment web6rpjmr
Nov 15 18:50:22.525: INFO: starting to delete job curl-to-elb-jobd1hh9ups3jk
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Nov 15 18:50:22.611: INFO: starting to create dev deployment namespace
2021/11/15 18:50:22 failed trying to get namespace (development):namespaces "development" not found
2021/11/15 18:50:22 namespace development does not exist, creating...
STEP: Creating production namespace
Nov 15 18:50:22.681: INFO: starting to create prod deployment namespace
2021/11/15 18:50:22 failed trying to get namespace (production):namespaces "production" not found
2021/11/15 18:50:22 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
Nov 15 18:50:22.732: INFO: starting to create frontend-prod deployments
Nov 15 18:50:22.756: INFO: starting to create frontend-dev deployments
Nov 15 18:50:22.787: INFO: starting to create backend deployments
Nov 15 18:50:22.819: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Nov 15 18:50:45.788: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.93.196 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 15 18:52:56.409: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
STEP: Applying a network policy to deny egress access in development namespace
Nov 15 18:52:56.558: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.93.196 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.93.196 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 15 18:57:18.561: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Nov 15 18:57:18.679: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.93.198 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 15 18:59:29.628: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
Nov 15 18:59:29.746: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.93.195 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.93.198 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 15 19:03:51.770: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Nov 15 19:03:51.928: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.93.196 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 15 19:06:02.841: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
Nov 15 19:06:02.981: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.93.196 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: Dumping logs from the "capz-e2e-oh5z1p-ha" workload cluster
STEP: Dumping workload cluster capz-e2e-oh5z1p/capz-e2e-oh5z1p-ha logs
Nov 15 19:08:14.199: INFO: INFO: Collecting logs for node capz-e2e-oh5z1p-ha-control-plane-pcmdw in cluster capz-e2e-oh5z1p-ha in namespace capz-e2e-oh5z1p

Nov 15 19:08:25.799: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-oh5z1p-ha-control-plane-pcmdw
... skipping 39 lines ...
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-oh5z1p-ha-control-plane-46hzx, container etcd
STEP: Creating log watcher for controller kube-system/calico-node-bphcg, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-qj45m, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-lxsk9, container kube-proxy
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-oh5z1p-ha-control-plane-vh8h5, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-bd96j, container kube-proxy
STEP: Got error while iterating over activity logs for resource group capz-e2e-oh5z1p-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000461498s
STEP: Dumping all the Cluster API resources in the "capz-e2e-oh5z1p" namespace
STEP: Deleting all clusters in the capz-e2e-oh5z1p namespace
STEP: Deleting cluster capz-e2e-oh5z1p-ha
INFO: Waiting for the Cluster capz-e2e-oh5z1p/capz-e2e-oh5z1p-ha to be deleted
STEP: Waiting for cluster capz-e2e-oh5z1p-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-oh5z1p-ha-control-plane-pcmdw, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-oh5z1p-ha-control-plane-pcmdw, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-oh5z1p-ha-control-plane-46hzx, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-qnxpw, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-rf7lc, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-whz7t, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-oh5z1p-ha-control-plane-46hzx, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-oh5z1p-ha-control-plane-pcmdw, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-oh5z1p-ha-control-plane-46hzx, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-lbhjd, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-oh5z1p-ha-control-plane-46hzx, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-oh5z1p-ha-control-plane-pcmdw, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-oh5z1p
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 worker nodes" ran for 51m34s on Ginkgo node 2 of 3

... skipping 8 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:141

INFO: "Creates a public management cluster in the same vnet" started at Mon, 15 Nov 2021 18:38:52 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-o053zh" for hosting the cluster
Nov 15 18:38:52.403: INFO: starting to create namespace for hosting the "capz-e2e-o053zh" test spec
2021/11/15 18:38:52 failed trying to get namespace (capz-e2e-o053zh):namespaces "capz-e2e-o053zh" not found
INFO: Creating namespace capz-e2e-o053zh
INFO: Creating event watcher for namespace "capz-e2e-o053zh"
Nov 15 18:38:52.436: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-o053zh-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-s587m, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-o053zh-public-custom-vnet-control-plane-xwgz7, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-z8nng, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-h5kwv, container coredns
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-o053zh-public-custom-vnet-control-plane-xwgz7, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-95dk4, container kube-proxy
STEP: Got error while iterating over activity logs for resource group capz-e2e-o053zh-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000734023s
STEP: Dumping all the Cluster API resources in the "capz-e2e-o053zh" namespace
STEP: Deleting all clusters in the capz-e2e-o053zh namespace
STEP: Deleting cluster capz-e2e-o053zh-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-o053zh/capz-e2e-o053zh-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-o053zh-public-custom-vnet to be deleted
W1115 19:28:47.779627   24234 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I1115 19:29:19.301325   24234 trace.go:205] Trace[1605139653]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (15-Nov-2021 19:28:49.299) (total time: 30001ms):
Trace[1605139653]: [30.001573741s] [30.001573741s] END
E1115 19:29:19.301377   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o053zh/events?resourceVersion=9411": dial tcp 23.96.222.157:6443: i/o timeout
I1115 19:29:52.413323   24234 trace.go:205] Trace[1715691899]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (15-Nov-2021 19:29:22.412) (total time: 30001ms):
Trace[1715691899]: [30.001133043s] [30.001133043s] END
E1115 19:29:52.413370   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o053zh/events?resourceVersion=9411": dial tcp 23.96.222.157:6443: i/o timeout
I1115 19:30:28.610350   24234 trace.go:205] Trace[1977647779]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (15-Nov-2021 19:29:58.609) (total time: 30000ms):
Trace[1977647779]: [30.000716108s] [30.000716108s] END
E1115 19:30:28.610396   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o053zh/events?resourceVersion=9411": dial tcp 23.96.222.157:6443: i/o timeout
I1115 19:31:08.665090   24234 trace.go:205] Trace[133659100]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (15-Nov-2021 19:30:38.664) (total time: 30000ms):
Trace[133659100]: [30.000873238s] [30.000873238s] END
E1115 19:31:08.665198   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o053zh/events?resourceVersion=9411": dial tcp 23.96.222.157:6443: i/o timeout
I1115 19:32:01.545765   24234 trace.go:205] Trace[1114532084]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (15-Nov-2021 19:31:31.544) (total time: 30001ms):
Trace[1114532084]: [30.001088896s] [30.001088896s] END
E1115 19:32:01.545838   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o053zh/events?resourceVersion=9411": dial tcp 23.96.222.157:6443: i/o timeout
I1115 19:33:06.734165   24234 trace.go:205] Trace[235793638]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (15-Nov-2021 19:32:36.732) (total time: 30001ms):
Trace[235793638]: [30.001541154s] [30.001541154s] END
E1115 19:33:06.734228   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o053zh/events?resourceVersion=9411": dial tcp 23.96.222.157:6443: i/o timeout
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-o053zh
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Nov 15 19:34:12.681: INFO: deleting an existing virtual network "custom-vnet"
I1115 19:34:15.520295   24234 trace.go:205] Trace[1382034055]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (15-Nov-2021 19:33:45.519) (total time: 30001ms):
Trace[1382034055]: [30.001222805s] [30.001222805s] END
E1115 19:34:15.520348   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o053zh/events?resourceVersion=9411": dial tcp 23.96.222.157:6443: i/o timeout
Nov 15 19:34:23.637: INFO: deleting an existing route table "node-routetable"
Nov 15 19:34:34.311: INFO: deleting an existing network security group "node-nsg"
Nov 15 19:34:45.081: INFO: deleting an existing network security group "control-plane-nsg"
E1115 19:34:53.586581   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o053zh/events?resourceVersion=9411": dial tcp: lookup capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 15 19:34:55.573: INFO: verifying the existing resource group "capz-e2e-o053zh-public-custom-vnet" is empty
Nov 15 19:34:55.857: INFO: deleting the existing resource group "capz-e2e-o053zh-public-custom-vnet"
E1115 19:35:45.859619   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o053zh/events?resourceVersion=9411": dial tcp: lookup capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 19:36:32.022389   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o053zh/events?resourceVersion=9411": dial tcp: lookup capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1115 19:37:13.559335   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o053zh/events?resourceVersion=9411": dial tcp: lookup capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 19:37:44.289310   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o053zh/events?resourceVersion=9411": dial tcp: lookup capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 19:38:36.034941   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o053zh/events?resourceVersion=9411": dial tcp: lookup capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 59m50s on Ginkgo node 1 of 3


• [SLOW TEST:3590.041 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:377

INFO: "with a single control plane node and 1 node" started at Mon, 15 Nov 2021 19:13:47 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-imacw3" for hosting the cluster
Nov 15 19:13:47.236: INFO: starting to create namespace for hosting the "capz-e2e-imacw3" test spec
2021/11/15 19:13:47 failed trying to get namespace (capz-e2e-imacw3):namespaces "capz-e2e-imacw3" not found
INFO: Creating namespace capz-e2e-imacw3
INFO: Creating event watcher for namespace "capz-e2e-imacw3"
Nov 15 19:13:47.265: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-imacw3-gpu
INFO: Creating the workload cluster with name "capz-e2e-imacw3-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 80 lines ...
  with a 1 control plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:419

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Mon, 15 Nov 2021 19:30:26 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-dj30ml" for hosting the cluster
Nov 15 19:30:26.229: INFO: starting to create namespace for hosting the "capz-e2e-dj30ml" test spec
2021/11/15 19:30:26 failed trying to get namespace (capz-e2e-dj30ml):namespaces "capz-e2e-dj30ml" not found
INFO: Creating namespace capz-e2e-dj30ml
INFO: Creating event watcher for namespace "capz-e2e-dj30ml"
Nov 15 19:30:26.261: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-dj30ml-oot
INFO: Creating the workload cluster with name "capz-e2e-dj30ml-oot" using the "external-cloud-provider" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 53 lines ...
STEP: waiting for job default/curl-to-elb-jobfw8p5ms9ibw to be complete
Nov 15 19:39:28.568: INFO: waiting for job default/curl-to-elb-jobfw8p5ms9ibw to be complete
Nov 15 19:39:38.608: INFO: job default/curl-to-elb-jobfw8p5ms9ibw is complete, took 10.03996087s
STEP: connecting directly to the external LB service
Nov 15 19:39:38.608: INFO: starting attempts to connect directly to the external LB service
2021/11/15 19:39:38 [DEBUG] GET http://52.159.73.250
2021/11/15 19:40:08 [ERR] GET http://52.159.73.250 request failed: Get "http://52.159.73.250": dial tcp 52.159.73.250:80: i/o timeout
2021/11/15 19:40:08 [DEBUG] GET http://52.159.73.250: retrying in 1s (4 left)
Nov 15 19:40:09.639: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 15 19:40:09.639: INFO: starting to delete external LB service webw5cm9u-elb
Nov 15 19:40:09.693: INFO: starting to delete deployment webw5cm9u
Nov 15 19:40:09.707: INFO: starting to delete job curl-to-elb-jobfw8p5ms9ibw
... skipping 34 lines ...
STEP: Fetching activity logs took 952.962329ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-dj30ml" namespace
STEP: Deleting all clusters in the capz-e2e-dj30ml namespace
STEP: Deleting cluster capz-e2e-dj30ml-oot
INFO: Waiting for the Cluster capz-e2e-dj30ml/capz-e2e-dj30ml-oot to be deleted
STEP: Waiting for cluster capz-e2e-dj30ml-oot to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-dj30ml-oot-control-plane-pt9ml, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-vhbfp, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-dj30ml-oot-control-plane-pt9ml, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-g4v7b, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-xv5tc, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-6hmbw, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-cjjdj, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-9rdk5, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-sb8pd, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-z8fsl, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-controller-manager, container cloud-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-dj30ml-oot-control-plane-pt9ml, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-8hwcm, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-dj30ml-oot-control-plane-pt9ml, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-dj30ml
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 20m50s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454

INFO: "with a single control plane node and 1 node" started at Mon, 15 Nov 2021 19:38:42 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-zuu8fn" for hosting the cluster
Nov 15 19:38:42.446: INFO: starting to create namespace for hosting the "capz-e2e-zuu8fn" test spec
2021/11/15 19:38:42 failed trying to get namespace (capz-e2e-zuu8fn):namespaces "capz-e2e-zuu8fn" not found
INFO: Creating namespace capz-e2e-zuu8fn
INFO: Creating event watcher for namespace "capz-e2e-zuu8fn"
Nov 15 19:38:42.479: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-zuu8fn-aks
INFO: Creating the workload cluster with name "capz-e2e-zuu8fn-aks" using the "aks-multi-tenancy" template (Kubernetes v1.19.13, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 7 lines ...
machinepool.cluster.x-k8s.io/agentpool1 created
azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1115 19:39:26.350381   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o053zh/events?resourceVersion=9411": dial tcp: lookup capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 19:40:03.205865   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o053zh/events?resourceVersion=9411": dial tcp: lookup capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 19:40:40.304237   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o053zh/events?resourceVersion=9411": dial tcp: lookup capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 19:41:35.872314   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o053zh/events?resourceVersion=9411": dial tcp: lookup capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
Nov 15 19:42:15.769: INFO: Waiting for the first control plane machine managed by capz-e2e-zuu8fn/capz-e2e-zuu8fn-aks to be provisioned
STEP: Waiting for atleast one control plane node to exist
INFO: Waiting for control plane to be ready
Nov 15 19:42:15.796: INFO: Waiting for the first control plane machine managed by capz-e2e-zuu8fn/capz-e2e-zuu8fn-aks to be provisioned
STEP: Waiting for all control plane nodes to exist
INFO: Waiting for the machine deployments to be provisioned
INFO: Waiting for the machine pools to be provisioned
STEP: Waiting for the machine pool workload nodes to exist
STEP: Waiting for the machine pool workload nodes to exist
E1115 19:42:22.264174   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o053zh/events?resourceVersion=9411": dial tcp: lookup capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 15 19:42:26.238: INFO: want 2 instances, found 0 ready and 0 available. generation: 1, observedGeneration: 0
Nov 15 19:42:31.270: INFO: want 2 instances, found 2 ready and 2 available. generation: 1, observedGeneration: 1
Nov 15 19:42:31.298: INFO: mapping nsenter pods to hostnames for host-by-host execution
Nov 15 19:42:31.298: INFO: found host aks-agentpool0-19856661-vmss000000 with pod nsenter-pdjnn
Nov 15 19:42:31.298: INFO: found host aks-agentpool1-19856661-vmss000000 with pod nsenter-rkf22
STEP: checking that time synchronization is healthy on aks-agentpool1-19856661-vmss000000
... skipping 3 lines ...
STEP: time sync OK for host aks-agentpool1-19856661-vmss000000
STEP: time sync OK for host aks-agentpool1-19856661-vmss000000
STEP: Dumping logs from the "capz-e2e-zuu8fn-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-zuu8fn/capz-e2e-zuu8fn-aks logs
Nov 15 19:42:31.979: INFO: INFO: Collecting logs for node aks-agentpool1-19856661-vmss000000 in cluster capz-e2e-zuu8fn-aks in namespace capz-e2e-zuu8fn

E1115 19:42:57.074700   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o053zh/events?resourceVersion=9411": dial tcp: lookup capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 19:43:47.603635   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o053zh/events?resourceVersion=9411": dial tcp: lookup capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 19:44:36.811966   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o053zh/events?resourceVersion=9411": dial tcp: lookup capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 15 19:44:42.528: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool0, cluster capz-e2e-zuu8fn/capz-e2e-zuu8fn-aks: [dialing public load balancer at capz-e2e-zuu8fn-aks-0a6c1684.hcp.northcentralus.azmk8s.io: dial tcp 168.62.242.23:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
Nov 15 19:44:43.045: INFO: INFO: Collecting logs for node aks-agentpool1-19856661-vmss000000 in cluster capz-e2e-zuu8fn-aks in namespace capz-e2e-zuu8fn

E1115 19:45:31.640089   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o053zh/events?resourceVersion=9411": dial tcp: lookup capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 19:46:06.164901   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o053zh/events?resourceVersion=9411": dial tcp: lookup capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 15 19:46:53.597: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool1, cluster capz-e2e-zuu8fn/capz-e2e-zuu8fn-aks: [dialing public load balancer at capz-e2e-zuu8fn-aks-0a6c1684.hcp.northcentralus.azmk8s.io: dial tcp 168.62.242.23:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
STEP: Dumping workload cluster capz-e2e-zuu8fn/capz-e2e-zuu8fn-aks kube-system pod logs
STEP: Fetching kube-system pod logs took 275.916951ms
STEP: Dumping workload cluster capz-e2e-zuu8fn/capz-e2e-zuu8fn-aks Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-qsqkm, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-gs7wt, container calico-node
STEP: Creating log watcher for controller kube-system/calico-typha-horizontal-autoscaler-599c7bb664-xlnh8, container autoscaler
... skipping 8 lines ...
STEP: Fetching activity logs took 530.076703ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-zuu8fn" namespace
STEP: Deleting all clusters in the capz-e2e-zuu8fn namespace
STEP: Deleting cluster capz-e2e-zuu8fn-aks
INFO: Waiting for the Cluster capz-e2e-zuu8fn/capz-e2e-zuu8fn-aks to be deleted
STEP: Waiting for cluster capz-e2e-zuu8fn-aks to be deleted
E1115 19:46:57.775540   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o053zh/events?resourceVersion=9411": dial tcp: lookup capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 19:47:38.925678   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o053zh/events?resourceVersion=9411": dial tcp: lookup capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 19:48:36.719432   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o053zh/events?resourceVersion=9411": dial tcp: lookup capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 19:49:07.048554   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o053zh/events?resourceVersion=9411": dial tcp: lookup capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 19:49:55.634021   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o053zh/events?resourceVersion=9411": dial tcp: lookup capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 19:50:28.494519   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o053zh/events?resourceVersion=9411": dial tcp: lookup capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 19:51:01.735807   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o053zh/events?resourceVersion=9411": dial tcp: lookup capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 19:51:51.020773   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o053zh/events?resourceVersion=9411": dial tcp: lookup capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 19:52:33.590277   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o053zh/events?resourceVersion=9411": dial tcp: lookup capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 19:53:16.464625   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o053zh/events?resourceVersion=9411": dial tcp: lookup capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 19:54:09.857519   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o053zh/events?resourceVersion=9411": dial tcp: lookup capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 19:54:52.421914   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o053zh/events?resourceVersion=9411": dial tcp: lookup capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-zuu8fn
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1115 19:55:23.845085   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o053zh/events?resourceVersion=9411": dial tcp: lookup capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1115 19:56:23.840707   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o053zh/events?resourceVersion=9411": dial tcp: lookup capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "with a single control plane node and 1 node" ran for 17m52s on Ginkgo node 1 of 3


• [SLOW TEST:1071.988 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 8 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Mon, 15 Nov 2021 19:39:20 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-vbshg5" for hosting the cluster
Nov 15 19:39:20.765: INFO: starting to create namespace for hosting the "capz-e2e-vbshg5" test spec
2021/11/15 19:39:20 failed trying to get namespace (capz-e2e-vbshg5):namespaces "capz-e2e-vbshg5" not found
INFO: Creating namespace capz-e2e-vbshg5
INFO: Creating event watcher for namespace "capz-e2e-vbshg5"
Nov 15 19:39:20.800: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-vbshg5-win-ha
INFO: Creating the workload cluster with name "capz-e2e-vbshg5-win-ha" using the "windows" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 55 lines ...
STEP: waiting for job default/curl-to-elb-jobhw8qiho10wh to be complete
Nov 15 19:49:12.733: INFO: waiting for job default/curl-to-elb-jobhw8qiho10wh to be complete
Nov 15 19:49:22.771: INFO: job default/curl-to-elb-jobhw8qiho10wh is complete, took 10.037428427s
STEP: connecting directly to the external LB service
Nov 15 19:49:22.771: INFO: starting attempts to connect directly to the external LB service
2021/11/15 19:49:22 [DEBUG] GET http://20.88.16.22
2021/11/15 19:49:52 [ERR] GET http://20.88.16.22 request failed: Get "http://20.88.16.22": dial tcp 20.88.16.22:80: i/o timeout
2021/11/15 19:49:52 [DEBUG] GET http://20.88.16.22: retrying in 1s (4 left)
Nov 15 19:50:09.206: INFO: successfully connected to the external LB service
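
The [DEBUG]/[ERR]/"retrying in 1s (4 left)" lines above are the signature output of a retrying HTTP probe against the load balancer's public IP; the format matches hashicorp/go-retryablehttp. A minimal sketch of that probe, assuming go-retryablehttp and reusing the IP and retry budget from this run purely as an illustration (not the suite's exact code):

package main

import (
	"log"
	"time"

	retryablehttp "github.com/hashicorp/go-retryablehttp"
)

func main() {
	// Retrying probe in the style of the log above; IP, retry count, and
	// timeouts are taken from this run and are illustrative only.
	client := retryablehttp.NewClient()
	client.RetryMax = 4                           // produces the "(4 left)" countdown
	client.RetryWaitMin = 1 * time.Second         // "retrying in 1s"
	client.HTTPClient.Timeout = 30 * time.Second  // each attempt can end in "i/o timeout"

	resp, err := client.Get("http://20.88.16.22")
	if err != nil {
		log.Fatalf("GET failed after retries: %v", err)
	}
	defer resp.Body.Close()
	log.Printf("successfully connected to the external LB service: %s", resp.Status)
}
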
STEP: deleting the test resources
Nov 15 19:50:09.206: INFO: starting to delete external LB service webk782pu-elb
Nov 15 19:50:09.279: INFO: starting to delete deployment webk782pu
Nov 15 19:50:09.297: INFO: starting to delete job curl-to-elb-jobhw8qiho10wh
... skipping 25 lines ...
STEP: waiting for job default/curl-to-elb-job3hbbtan28fh to be complete
Nov 15 19:55:00.263: INFO: waiting for job default/curl-to-elb-job3hbbtan28fh to be complete
Nov 15 19:55:10.300: INFO: job default/curl-to-elb-job3hbbtan28fh is complete, took 10.037440709s
STEP: connecting directly to the external LB service
Nov 15 19:55:10.300: INFO: starting attempts to connect directly to the external LB service
2021/11/15 19:55:10 [DEBUG] GET http://52.159.105.48
2021/11/15 19:55:40 [ERR] GET http://52.159.105.48 request failed: Get "http://52.159.105.48": dial tcp 52.159.105.48:80: i/o timeout
2021/11/15 19:55:40 [DEBUG] GET http://52.159.105.48: retrying in 1s (4 left)
Nov 15 19:55:42.360: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 15 19:55:42.360: INFO: starting to delete external LB service web-windowsabhhoi-elb
Nov 15 19:55:42.421: INFO: starting to delete deployment web-windowsabhhoi
Nov 15 19:55:42.440: INFO: starting to delete job curl-to-elb-job3hbbtan28fh
... skipping 43 lines ...
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-vbshg5-win-ha-control-plane-554dm, container kube-apiserver
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-vbshg5-win-ha-control-plane-nndqm, container etcd
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-vbshg5-win-ha-control-plane-xvf2g, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-skmnp, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-tsxj7, container kube-proxy
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-vbshg5-win-ha-control-plane-xvf2g, container etcd
STEP: Got error while iterating over activity logs for resource group capz-e2e-vbshg5-win-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000985468s
STEP: Dumping all the Cluster API resources in the "capz-e2e-vbshg5" namespace
STEP: Deleting all clusters in the capz-e2e-vbshg5 namespace
STEP: Deleting cluster capz-e2e-vbshg5-win-ha
INFO: Waiting for the Cluster capz-e2e-vbshg5/capz-e2e-vbshg5-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-vbshg5-win-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-k8qlw, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-vbshg5-win-ha-control-plane-xvf2g, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-vbshg5-win-ha-control-plane-xvf2g, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-7nf96, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-d5vtq, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-vbshg5-win-ha-control-plane-nndqm, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-2ccn2, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-hprxw, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-vbshg5-win-ha-control-plane-nndqm, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-vbshg5-win-ha-control-plane-nndqm, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-vbshg5-win-ha-control-plane-xvf2g, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-dw22t, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-vbshg5-win-ha-control-plane-xvf2g, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-vhmtf, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-vbshg5-win-ha-control-plane-554dm, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-vbshg5-win-ha-control-plane-nndqm, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-tsxj7, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-vbshg5-win-ha-control-plane-554dm, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-tw9xz, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-vbshg5-win-ha-control-plane-554dm, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-skmnp, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-rvwdq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-vbshg5-win-ha-control-plane-554dm, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-vbshg5
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 35m17s on Ginkgo node 3 of 3

... skipping 10 lines ...
  with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543

INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Mon, 15 Nov 2021 19:51:16 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-9qn81d" for hosting the cluster
Nov 15 19:51:16.673: INFO: starting to create namespace for hosting the "capz-e2e-9qn81d" test spec
2021/11/15 19:51:16 failed trying to get namespace (capz-e2e-9qn81d):namespaces "capz-e2e-9qn81d" not found
INFO: Creating namespace capz-e2e-9qn81d
INFO: Creating event watcher for namespace "capz-e2e-9qn81d"
Nov 15 19:51:16.703: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-9qn81d-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-9qn81d-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 48 lines ...
STEP: Fetching activity logs took 973.880187ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-9qn81d" namespace
STEP: Deleting all clusters in the capz-e2e-9qn81d namespace
STEP: Deleting cluster capz-e2e-9qn81d-win-vmss
INFO: Waiting for the Cluster capz-e2e-9qn81d/capz-e2e-9qn81d-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-9qn81d-win-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-hj8j5, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-24kjw, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-d9vjh, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-22m8j, container kube-flannel: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-9qn81d
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 34m39s on Ginkgo node 2 of 3

... skipping 49 lines ...
    	/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/e2e_suite_test.go:256 +0x1da
    testing.tRunner(0xc000602d80, 0x23174f8)
    	/usr/local/go/src/testing/testing.go:1193 +0xef
    created by testing.(*T).Run
    	/usr/local/go/src/testing/testing.go:1238 +0x2b3
------------------------------
E1115 19:57:13.652211   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o053zh/events?resourceVersion=9411": dial tcp: lookup capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 37 lines ...
E1115 20:25:51.306050   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o053zh/events?resourceVersion=9411": dial tcp: lookup capz-e2e-o053zh-public-custom-vnet-5dd5c305.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
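
The E1115 reflector.go:138 lines above and below come from the per-namespace event watcher (a client-go reflector) still retrying its Event list/watch after the workload cluster's public DNS record has been deleted; the retries only stop once the watcher's stop channel is closed at teardown. A rough sketch of such a namespace-scoped event watcher, assuming client-go informers (the kubeconfig path and namespace are placeholders, and this is illustrative rather than the suite's exact code):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client against the workload cluster; once its DNS name is gone,
	// every list/watch attempt fails with the "no such host" errors seen above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/workload-kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactoryWithOptions(
		cs, 30*time.Second, informers.WithNamespace("capz-e2e-o053zh"))
	eventInformer := factory.Core().V1().Events().Informer()
	eventInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			ev := obj.(*corev1.Event)
			fmt.Printf("%s %s/%s: %s\n", ev.Type, ev.Namespace, ev.Name, ev.Message)
		},
	})

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel() // closing the stop channel is what finally silences the reflector retries
	factory.Start(ctx.Done())
	cache.WaitForCacheSync(ctx.Done(), eventInformer.HasSynced)
	<-ctx.Done()
}
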
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Workload cluster creation Creating a Windows enabled VMSS cluster [It] with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.3/framework/machinepool_helpers.go:85

Ran 9 of 22 Specs in 6537.679 seconds
FAIL! -- 8 Passed | 1 Failed | 0 Pending | 13 Skipped
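
The single failure reduces to a Gomega Eventually that polled for 900s (15 minutes) while the machine pool kept reporting 0 ready replicas instead of the expected 1, producing "Expected <int>: 0 to equal <int>: 1". A compile-level sketch of that style of check, with illustrative helper, field, and import names (the real helper lives at cluster-api's machinepool_helpers.go:85, not reproduced here):

package e2e

import (
	"context"
	"time"

	. "github.com/onsi/gomega"

	expv1 "sigs.k8s.io/cluster-api/exp/api/v1alpha4"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// waitForMachinePoolReplicas is an illustrative stand-in for the framework
// helper that failed here: it polls the MachinePool until its ready replica
// count matches the expected value, or times out with the failure seen above.
func waitForMachinePoolReplicas(ctx context.Context, c client.Client, key client.ObjectKey, want int) {
	Eventually(func() int {
		mp := &expv1.MachinePool{}
		if err := c.Get(ctx, key, mp); err != nil {
			return 0 // treat transient API errors as "no replicas ready yet"
		}
		return int(mp.Status.ReadyReplicas)
	}, 15*time.Minute, 10*time.Second).Should(Equal(want),
		"MachinePool %s should have %d ready replicas", key.Name, want)
}
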


Ginkgo ran 1 suite in 1h50m16.568921733s
Test Suite Failed
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...