Result: FAILURE
Tests: 0 failed / 9 succeeded
Started: 2021-11-06 18:29
Elapsed: 2h0m
Revision: release-0.5

No Test Failures!


9 Passed Tests

13 Skipped Tests

Error lines from build-log.txt

... skipping 431 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:269

INFO: "With ipv6 worker node" started at Sat, 06 Nov 2021 18:36:19 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-27s8rp" for hosting the cluster
Nov  6 18:36:19.885: INFO: starting to create namespace for hosting the "capz-e2e-27s8rp" test spec
2021/11/06 18:36:19 failed trying to get namespace (capz-e2e-27s8rp):namespaces "capz-e2e-27s8rp" not found
INFO: Creating namespace capz-e2e-27s8rp
INFO: Creating event watcher for namespace "capz-e2e-27s8rp"
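
The "failed trying to get namespace … not found" line above is the expected first half of a get-then-create flow, not a real failure. A minimal client-go sketch of that pattern (a hypothetical helper, not the suite's actual code):

    package e2e

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // getOrCreateNamespace mirrors the log lines above: the initial Get
    // fails with NotFound (which is logged), so the namespace is created.
    func getOrCreateNamespace(ctx context.Context, c kubernetes.Interface, name string) (*corev1.Namespace, error) {
        ns, err := c.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            return c.CoreV1().Namespaces().Create(ctx, &corev1.Namespace{
                ObjectMeta: metav1.ObjectMeta{Name: name},
            }, metav1.CreateOptions{})
        }
        return ns, err
    }
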
Nov  6 18:36:19.966: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-27s8rp-ipv6
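
The "%!(EXTRA string=cluster-identity-secret)" fragment fused onto the next INFO line is Go's fmt package reporting a leftover argument: the format string has no verb to consume it, and the extra-argument marker is printed without a trailing newline. A minimal reproduction (illustrative only, not the suite's code):

    package main

    import "fmt"

    func main() {
        // One argument but no %s verb: fmt prints the message, then
        // appends "%!(EXTRA string=cluster-identity-secret)" with no
        // trailing newline, so the next log line fuses onto it.
        fmt.Printf("Creating cluster identity secret\n", "cluster-identity-secret")
    }
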
INFO: Creating the workload cluster with name "capz-e2e-27s8rp-ipv6" using the "ipv6" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
STEP: Fetching activity logs took 575.627167ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-27s8rp" namespace
STEP: Deleting all clusters in the capz-e2e-27s8rp namespace
STEP: Deleting cluster capz-e2e-27s8rp-ipv6
INFO: Waiting for the Cluster capz-e2e-27s8rp/capz-e2e-27s8rp-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-27s8rp-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-nb792, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-mc8dh, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-27s8rp-ipv6-control-plane-tkn84, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-vwt5g, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-kx744, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-qgl6c, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-27s8rp-ipv6-control-plane-5s6t8, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-27s8rp-ipv6-control-plane-5s6t8, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-gxtkc, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-27s8rp-ipv6-control-plane-86rrp, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-27s8rp-ipv6-control-plane-tkn84, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-4pnqw, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-27s8rp-ipv6-control-plane-5s6t8, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-27s8rp-ipv6-control-plane-tkn84, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-27s8rp-ipv6-control-plane-86rrp, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-rx24w, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-8bjld, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-plv77, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-27s8rp-ipv6-control-plane-86rrp, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-wn5fp, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-27s8rp-ipv6-control-plane-5s6t8, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-27s8rp-ipv6-control-plane-tkn84, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-27s8rp-ipv6-control-plane-86rrp, container kube-controller-manager: http2: client connection lost
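
The run of "http2: client connection lost" errors above is a teardown artifact rather than a test failure: per-container log streams are held open in follow mode while the workload cluster is deleted, so every stream breaks when its API server goes away. A sketch of such a log watcher, assuming client-go (names are illustrative):

    package e2e

    import (
        "context"
        "io"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
    )

    // streamPodLogs follows one container's logs until the stream breaks.
    // When the workload cluster is torn down mid-stream, io.Copy returns
    // an error such as "http2: client connection lost", which the suite
    // reports as a STEP and tolerates.
    func streamPodLogs(ctx context.Context, c kubernetes.Interface, ns, pod, container string, out io.Writer) error {
        req := c.CoreV1().Pods(ns).GetLogs(pod, &corev1.PodLogOptions{
            Container: container,
            Follow:    true,
        })
        stream, err := req.Stream(ctx)
        if err != nil {
            return err
        }
        defer stream.Close()
        _, err = io.Copy(out, stream)
        return err
    }
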
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-27s8rp
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 18m54s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a single control plane node and an AzureMachinePool with 2 nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:315

INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" started at Sat, 06 Nov 2021 18:55:13 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-70qcvu" for hosting the cluster
Nov  6 18:55:13.547: INFO: starting to create namespace for hosting the "capz-e2e-70qcvu" test spec
2021/11/06 18:55:13 failed trying to get namespace (capz-e2e-70qcvu):namespaces "capz-e2e-70qcvu" not found
INFO: Creating namespace capz-e2e-70qcvu
INFO: Creating event watcher for namespace "capz-e2e-70qcvu"
Nov  6 18:55:13.584: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-70qcvu-vmss
INFO: Creating the workload cluster with name "capz-e2e-70qcvu-vmss" using the "machine-pool" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 106 lines ...
STEP: Fetching activity logs took 707.440233ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-70qcvu" namespace
STEP: Deleting all clusters in the capz-e2e-70qcvu namespace
STEP: Deleting cluster capz-e2e-70qcvu-vmss
INFO: Waiting for the Cluster capz-e2e-70qcvu/capz-e2e-70qcvu-vmss to be deleted
STEP: Waiting for cluster capz-e2e-70qcvu-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-8pdvb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-z2dmp, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-fxm6f, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-wfgqf, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-tcqnt, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-h4hn8, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-70qcvu-vmss-control-plane-m7r89, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-70qcvu-vmss-control-plane-m7r89, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-70qcvu-vmss-control-plane-m7r89, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-rxvf2, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-npgg2, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-70qcvu-vmss-control-plane-m7r89, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-jgbvh, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-70qcvu
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" ran for 17m40s on Ginkgo node 2 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:203

INFO: "With 3 control-plane nodes and 2 worker nodes" started at Sat, 06 Nov 2021 18:36:19 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-n3z4vf" for hosting the cluster
Nov  6 18:36:19.883: INFO: starting to create namespace for hosting the "capz-e2e-n3z4vf" test spec
2021/11/06 18:36:19 failed trying to get namespace (capz-e2e-n3z4vf):namespaces "capz-e2e-n3z4vf" not found
INFO: Creating namespace capz-e2e-n3z4vf
INFO: Creating event watcher for namespace "capz-e2e-n3z4vf"
Nov  6 18:36:19.987: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-n3z4vf-ha
INFO: Creating the workload cluster with name "capz-e2e-n3z4vf-ha" using the "(default)" template (Kubernetes v1.22.1, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 57 lines ...
STEP: waiting for job default/curl-to-elb-jobcoeg8u171tx to be complete
Nov  6 18:47:34.241: INFO: waiting for job default/curl-to-elb-jobcoeg8u171tx to be complete
Nov  6 18:47:44.283: INFO: job default/curl-to-elb-jobcoeg8u171tx is complete, took 10.041747549s
STEP: connecting directly to the external LB service
Nov  6 18:47:44.283: INFO: starting attempts to connect directly to the external LB service
2021/11/06 18:47:44 [DEBUG] GET http://20.88.28.109
2021/11/06 18:48:14 [ERR] GET http://20.88.28.109 request failed: Get "http://20.88.28.109": dial tcp 20.88.28.109:80: i/o timeout
2021/11/06 18:48:14 [DEBUG] GET http://20.88.28.109: retrying in 1s (4 left)
2021/11/06 18:48:45 [ERR] GET http://20.88.28.109 request failed: Get "http://20.88.28.109": dial tcp 20.88.28.109:80: i/o timeout
2021/11/06 18:48:45 [DEBUG] GET http://20.88.28.109: retrying in 2s (3 left)
Nov  6 18:48:47.314: INFO: successfully connected to the external LB service
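
The [DEBUG]/[ERR] lines above, with "retrying in 1s (4 left)", appear to match the default logging of hashicorp/go-retryablehttp: the probe retries the GET with backoff until the ELB starts answering. A minimal sketch of that probe (assuming that library; the URL is taken from the log):

    package main

    import (
        "log"

        retryablehttp "github.com/hashicorp/go-retryablehttp"
    )

    func main() {
        // Each failed attempt logs "[ERR] GET <url> request failed ..."
        // followed by "[DEBUG] GET <url>: retrying in <d> (<n> left)".
        client := retryablehttp.NewClient()
        client.RetryMax = 5
        resp, err := client.Get("http://20.88.28.109")
        if err != nil {
            log.Fatalf("external LB never answered: %v", err)
        }
        defer resp.Body.Close()
        log.Printf("successfully connected: %s", resp.Status)
    }
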
STEP: deleting the test resources
Nov  6 18:48:47.314: INFO: starting to delete external LB service webkt7ecu-elb
Nov  6 18:48:47.408: INFO: starting to delete deployment webkt7ecu
Nov  6 18:48:47.432: INFO: starting to delete job curl-to-elb-jobcoeg8u171tx
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Nov  6 18:48:47.518: INFO: starting to create dev deployment namespace
2021/11/06 18:48:47 failed trying to get namespace (development):namespaces "development" not found
2021/11/06 18:48:47 namespace development does not exist, creating...
STEP: Creating production namespace
Nov  6 18:48:47.577: INFO: starting to create prod deployment namespace
2021/11/06 18:48:47 failed trying to get namespace (production):namespaces "production" not found
2021/11/06 18:48:47 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
Nov  6 18:48:47.642: INFO: starting to create frontend-prod deployments
Nov  6 18:48:47.674: INFO: starting to create frontend-dev deployments
Nov  6 18:48:47.770: INFO: starting to create backend deployments
Nov  6 18:48:47.817: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Nov  6 18:49:10.165: INFO: starting to apply a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.129.2 port 80: Connection timed out
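
The curl timeout above is the expected result of development/backend-deny-ingress: a policy that selects the backend pods and lists no ingress rules denies all ingress to them. A hedged sketch of an equivalent policy using the Kubernetes Go types (labels inferred from the log text):

    package e2e

    import (
        networkingv1 "k8s.io/api/networking/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // backendDenyIngress approximates the policy applied above: it
    // selects app=webapp,role=backend pods in "development" and, by
    // declaring Ingress as a policy type with no rules, blocks all
    // inbound traffic to them.
    func backendDenyIngress() *networkingv1.NetworkPolicy {
        return &networkingv1.NetworkPolicy{
            ObjectMeta: metav1.ObjectMeta{
                Name:      "backend-deny-ingress",
                Namespace: "development",
            },
            Spec: networkingv1.NetworkPolicySpec{
                PodSelector: metav1.LabelSelector{
                    MatchLabels: map[string]string{"app": "webapp", "role": "backend"},
                },
                PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
                // No Ingress rules listed: nothing is allowed in.
            },
        }
    }
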

STEP: Cleaning up after ourselves
Nov  6 18:51:20.257: INFO: starting to clean up network policy development/backend-deny-ingress after ourselves
STEP: Applying a network policy to deny egress access in development namespace
Nov  6 18:51:20.398: INFO: starting to apply a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.129.2 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.129.2 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov  6 18:55:42.668: INFO: starting to clean up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Nov  6 18:55:42.811: INFO: starting to apply a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.129.3 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov  6 18:57:53.473: INFO: starting to clean up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in the same development namespace
Nov  6 18:57:53.628: INFO: starting to apply a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in the same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.129.1 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.129.3 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov  6 19:02:15.619: INFO: starting to clean up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Nov  6 19:02:15.776: INFO: starting to apply a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.129.2 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov  6 19:04:26.688: INFO: starting to clean up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
Nov  6 19:04:26.793: INFO: starting to apply a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.129.2 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: Dumping logs from the "capz-e2e-n3z4vf-ha" workload cluster
STEP: Dumping workload cluster capz-e2e-n3z4vf/capz-e2e-n3z4vf-ha logs
Nov  6 19:06:38.098: INFO: Collecting logs for node capz-e2e-n3z4vf-ha-control-plane-ds7np in cluster capz-e2e-n3z4vf-ha in namespace capz-e2e-n3z4vf

Nov  6 19:06:49.886: INFO: Collecting boot logs for AzureMachine capz-e2e-n3z4vf-ha-control-plane-ds7np
... skipping 39 lines ...
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-n3z4vf-ha-control-plane-ds7np, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-9bq95, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-n9vqq, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-n3z4vf-ha-control-plane-z899b, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-n3z4vf-ha-control-plane-ds7np, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-node-sstq7, container calico-node
STEP: Got error while iterating over activity logs for resource group capz-e2e-n3z4vf-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001106206s
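
The 500 with "context deadline exceeded", together with a fetch time of almost exactly 30s, suggests the activity-log pagination runs under a 30-second context budget and the Azure API could not deliver the next page in time. A generic sketch of that pattern (the pager interface is a hypothetical stand-in for the Azure SDK iterator named in the log):

    package e2e

    import (
        "context"
        "time"
    )

    // activityLogPager is a hypothetical stand-in for the Azure SDK's
    // page iterator (insights.ActivityLogsClient in the log above).
    type activityLogPager interface {
        NotDone() bool
        NextWithContext(ctx context.Context) error
    }

    // fetchActivityLogs shares one 30s context across all pages, so a
    // slow page surfaces as "context deadline exceeded" right at the
    // 30-second mark, matching the timings above.
    func fetchActivityLogs(pager activityLogPager) error {
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()
        for pager.NotDone() {
            if err := pager.NextWithContext(ctx); err != nil {
                return err
            }
        }
        return nil
    }
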
STEP: Dumping all the Cluster API resources in the "capz-e2e-n3z4vf" namespace
STEP: Deleting all clusters in the capz-e2e-n3z4vf namespace
STEP: Deleting cluster capz-e2e-n3z4vf-ha
INFO: Waiting for the Cluster capz-e2e-n3z4vf/capz-e2e-n3z4vf-ha to be deleted
STEP: Waiting for cluster capz-e2e-n3z4vf-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5grzq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-n3z4vf-ha-control-plane-ds7np, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-n3z4vf-ha-control-plane-ds7np, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-n9vqq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-nrmkn, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-n3z4vf-ha-control-plane-ds7np, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-sg4p9, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-9gvph, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-n3z4vf-ha-control-plane-z899b, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-sstq7, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-n3z4vf-ha-control-plane-z899b, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-n3z4vf-ha-control-plane-z899b, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-n3z4vf-ha-control-plane-ds7np, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-n3z4vf-ha-control-plane-z899b, container etcd: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-n3z4vf
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 worker nodes" ran for 48m29s on Ginkgo node 3 of 3

... skipping 8 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:141

INFO: "Creates a public management cluster in the same vnet" started at Sat, 06 Nov 2021 18:36:19 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-zbyfux" for hosting the cluster
Nov  6 18:36:19.848: INFO: starting to create namespace for hosting the "capz-e2e-zbyfux" test spec
2021/11/06 18:36:19 failed trying to get namespace (capz-e2e-zbyfux):namespaces "capz-e2e-zbyfux" not found
INFO: Creating namespace capz-e2e-zbyfux
INFO: Creating event watcher for namespace "capz-e2e-zbyfux"
Nov  6 18:36:19.882: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-zbyfux-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-zbyfux-public-custom-vnet-control-plane-77kzb, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-gfg6x, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-wb8r2, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-zbyfux-public-custom-vnet-control-plane-77kzb, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-zbyfux-public-custom-vnet-control-plane-77kzb, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-zbyfux-public-custom-vnet-control-plane-77kzb, container kube-controller-manager
STEP: Got error while iterating over activity logs for resource group capz-e2e-zbyfux-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000769463s
STEP: Dumping all the Cluster API resources in the "capz-e2e-zbyfux" namespace
STEP: Deleting all clusters in the capz-e2e-zbyfux namespace
STEP: Deleting cluster capz-e2e-zbyfux-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-zbyfux/capz-e2e-zbyfux-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-zbyfux-public-custom-vnet to be deleted
W1106 19:23:28.295890   24216 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I1106 19:23:59.582791   24216 trace.go:205] Trace[932597119]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (06-Nov-2021 19:23:29.581) (total time: 30001ms):
Trace[932597119]: [30.001258203s] [30.001258203s] END
E1106 19:23:59.582863   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zbyfux/events?resourceVersion=8875": dial tcp 52.159.85.155:6443: i/o timeout
I1106 19:24:31.789866   24216 trace.go:205] Trace[454735433]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (06-Nov-2021 19:24:01.789) (total time: 30000ms):
Trace[454735433]: [30.000591156s] [30.000591156s] END
E1106 19:24:31.789924   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zbyfux/events?resourceVersion=8875": dial tcp 52.159.85.155:6443: i/o timeout
I1106 19:25:05.597716   24216 trace.go:205] Trace[152771501]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (06-Nov-2021 19:24:35.596) (total time: 30000ms):
Trace[152771501]: [30.000823982s] [30.000823982s] END
E1106 19:25:05.597772   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zbyfux/events?resourceVersion=8875": dial tcp 52.159.85.155:6443: i/o timeout
I1106 19:25:46.488119   24216 trace.go:205] Trace[1199082608]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (06-Nov-2021 19:25:16.487) (total time: 30000ms):
Trace[1199082608]: [30.000484982s] [30.000484982s] END
E1106 19:25:46.488180   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zbyfux/events?resourceVersion=8875": dial tcp 52.159.85.155:6443: i/o timeout
I1106 19:26:35.100539   24216 trace.go:205] Trace[1171895083]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (06-Nov-2021 19:26:05.099) (total time: 30000ms):
Trace[1171895083]: [30.000542958s] [30.000542958s] END
E1106 19:26:35.100605   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zbyfux/events?resourceVersion=8875": dial tcp 52.159.85.155:6443: i/o timeout
I1106 19:27:34.163941   24216 trace.go:205] Trace[413828033]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (06-Nov-2021 19:27:04.163) (total time: 30000ms):
Trace[413828033]: [30.000606786s] [30.000606786s] END
E1106 19:27:34.164012   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zbyfux/events?resourceVersion=8875": dial tcp 52.159.85.155:6443: i/o timeout
I1106 19:28:58.920305   24216 trace.go:205] Trace[217674350]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (06-Nov-2021 19:28:28.918) (total time: 30001ms):
Trace[217674350]: [30.001426499s] [30.001426499s] END
E1106 19:28:58.920384   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zbyfux/events?resourceVersion=8875": dial tcp 52.159.85.155:6443: i/o timeout
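
These reflector traces come from the client-go event watcher created for the namespace at the start of this spec. Once the workload cluster is deleted its endpoint stops answering, so each re-list fails (first i/o timeouts, later "no such host" once the DNS record is removed) and client-go keeps retrying until the watcher is stopped. A sketch of such a watcher (illustrative wiring, not the suite's helper):

    package e2e

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // watchEvents is a minimal event watcher for one namespace. The
    // suite's version runs behind a client-go reflector, which re-lists
    // on failure and emits the errors above; this sketch simply returns
    // once the stream breaks.
    func watchEvents(ctx context.Context, c kubernetes.Interface, ns string) error {
        w, err := c.CoreV1().Events(ns).Watch(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        defer w.Stop()
        for {
            select {
            case <-ctx.Done():
                return ctx.Err()
            case ev, ok := <-w.ResultChan():
                if !ok {
                    return nil // stream closed, e.g. connection lost
                }
                fmt.Println("event:", ev.Type)
            }
        }
    }
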
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-zbyfux
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Nov  6 19:29:06.029: INFO: deleting an existing virtual network "custom-vnet"
Nov  6 19:29:16.548: INFO: deleting an existing route table "node-routetable"
Nov  6 19:29:26.791: INFO: deleting an existing network security group "node-nsg"
Nov  6 19:29:37.070: INFO: deleting an existing network security group "control-plane-nsg"
E1106 19:29:47.158664   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zbyfux/events?resourceVersion=8875": dial tcp: lookup capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov  6 19:29:47.313: INFO: verifying the existing resource group "capz-e2e-zbyfux-public-custom-vnet" is empty
Nov  6 19:29:47.614: INFO: deleting the existing resource group "capz-e2e-zbyfux-public-custom-vnet"
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1106 19:30:32.436100   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zbyfux/events?resourceVersion=8875": dial tcp: lookup capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1106 19:31:09.425038   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zbyfux/events?resourceVersion=8875": dial tcp: lookup capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 54m58s on Ginkgo node 1 of 3


• [SLOW TEST:3297.995 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:377

INFO: "with a single control plane node and 1 node" started at Sat, 06 Nov 2021 19:12:53 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-47v10q" for hosting the cluster
Nov  6 19:12:53.229: INFO: starting to create namespace for hosting the "capz-e2e-47v10q" test spec
2021/11/06 19:12:53 failed trying to get namespace (capz-e2e-47v10q):namespaces "capz-e2e-47v10q" not found
INFO: Creating namespace capz-e2e-47v10q
INFO: Creating event watcher for namespace "capz-e2e-47v10q"
Nov  6 19:12:53.263: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-47v10q-gpu
INFO: Creating the workload cluster with name "capz-e2e-47v10q-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 58 lines ...
STEP: Fetching activity logs took 750.921129ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-47v10q" namespace
STEP: Deleting all clusters in the capz-e2e-47v10q namespace
STEP: Deleting cluster capz-e2e-47v10q-gpu
INFO: Waiting for the Cluster capz-e2e-47v10q/capz-e2e-47v10q-gpu to be deleted
STEP: Waiting for cluster capz-e2e-47v10q-gpu to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-l9q76, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5k98h, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-47v10q
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and 1 node" ran for 24m6s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with 1 control plane node and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:419

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Sat, 06 Nov 2021 19:24:49 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-4c771l" for hosting the cluster
Nov  6 19:24:49.278: INFO: starting to create namespace for hosting the "capz-e2e-4c771l" test spec
2021/11/06 19:24:49 failed trying to get namespace (capz-e2e-4c771l):namespaces "capz-e2e-4c771l" not found
INFO: Creating namespace capz-e2e-4c771l
INFO: Creating event watcher for namespace "capz-e2e-4c771l"
Nov  6 19:24:49.316: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-4c771l-oot
INFO: Creating the workload cluster with name "capz-e2e-4c771l-oot" using the "external-cloud-provider" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 98 lines ...
STEP: Fetching activity logs took 1.029011421s
STEP: Dumping all the Cluster API resources in the "capz-e2e-4c771l" namespace
STEP: Deleting all clusters in the capz-e2e-4c771l namespace
STEP: Deleting cluster capz-e2e-4c771l-oot
INFO: Waiting for the Cluster capz-e2e-4c771l/capz-e2e-4c771l-oot to be deleted
STEP: Waiting for cluster capz-e2e-4c771l-oot to be deleted
STEP: Got error while streaming logs for pod kube-system/cloud-controller-manager, container cloud-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-q8bb8, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-qmjvx, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-t7w2f, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-b2t98, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-tlb8z, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-znj8f, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-r282m, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-4c771l-oot-control-plane-mhpkz, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-4c771l-oot-control-plane-mhpkz, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-4fs9q, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-4c771l-oot-control-plane-mhpkz, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-4c771l-oot-control-plane-mhpkz, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-27m5z, container cloud-node-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-4c771l
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 20m55s on Ginkgo node 3 of 3

... skipping 10 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454

INFO: "with a single control plane node and 1 node" started at Sat, 06 Nov 2021 19:31:17 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-6ulqf5" for hosting the cluster
Nov  6 19:31:17.848: INFO: starting to create namespace for hosting the "capz-e2e-6ulqf5" test spec
2021/11/06 19:31:17 failed trying to get namespace (capz-e2e-6ulqf5):namespaces "capz-e2e-6ulqf5" not found
INFO: Creating namespace capz-e2e-6ulqf5
INFO: Creating event watcher for namespace "capz-e2e-6ulqf5"
Nov  6 19:31:17.879: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-6ulqf5-aks
INFO: Creating the workload cluster with name "capz-e2e-6ulqf5-aks" using the "aks-multi-tenancy" template (Kubernetes v1.19.13, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 7 lines ...
machinepool.cluster.x-k8s.io/agentpool1 created
azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1106 19:32:08.566846   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zbyfux/events?resourceVersion=8875": dial tcp: lookup capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1106 19:32:57.974767   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zbyfux/events?resourceVersion=8875": dial tcp: lookup capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1106 19:33:50.323236   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zbyfux/events?resourceVersion=8875": dial tcp: lookup capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1106 19:34:43.610603   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zbyfux/events?resourceVersion=8875": dial tcp: lookup capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
Nov  6 19:34:48.850: INFO: Waiting for the first control plane machine managed by capz-e2e-6ulqf5/capz-e2e-6ulqf5-aks to be provisioned
STEP: Waiting for at least one control plane node to exist
INFO: Waiting for control plane to be ready
Nov  6 19:34:48.880: INFO: Waiting for the first control plane machine managed by capz-e2e-6ulqf5/capz-e2e-6ulqf5-aks to be provisioned
STEP: Waiting for all control plane nodes to exist
... skipping 5 lines ...
Nov  6 19:35:04.819: INFO: want 2 instances, found 2 ready and 2 available. generation: 1, observedGeneration: 1
Nov  6 19:35:04.842: INFO: mapping nsenter pods to hostnames for host-by-host execution
Nov  6 19:35:04.842: INFO: found host aks-agentpool1-18539306-vmss000000 with pod nsenter-blgdg
Nov  6 19:35:04.842: INFO: found host aks-agentpool0-18539306-vmss000000 with pod nsenter-z7mkw
STEP: checking that time synchronization is healthy on aks-agentpool1-18539306-vmss000000
STEP: checking that time synchronization is healthy on aks-agentpool1-18539306-vmss000000
E1106 19:35:14.621897   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zbyfux/events?resourceVersion=8875": dial tcp: lookup capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: checking that time synchronization is healthy on aks-agentpool1-18539306-vmss000000
STEP: checking that time synchronization is healthy on aks-agentpool1-18539306-vmss000000
E1106 19:35:51.571767   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zbyfux/events?resourceVersion=8875": dial tcp: lookup capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: time sync OK for host aks-agentpool1-18539306-vmss000000
STEP: time sync OK for host aks-agentpool1-18539306-vmss000000
STEP: time sync OK for host aks-agentpool1-18539306-vmss000000
STEP: time sync OK for host aks-agentpool1-18539306-vmss000000
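
Two hosts were discovered above (aks-agentpool0-18539306-vmss000000 and aks-agentpool1-18539306-vmss000000), yet all eight time-sync lines name agentpool1. That pattern is characteristic of Go's pre-1.22 loop-variable capture bug when per-host checks run in goroutines; the following is a guess at the cause, not the suite's actual code:

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        hosts := []string{
            "aks-agentpool0-18539306-vmss000000",
            "aks-agentpool1-18539306-vmss000000",
        }
        var wg sync.WaitGroup
        for _, host := range hosts {
            wg.Add(1)
            go func() {
                defer wg.Done()
                // Bug (before Go 1.22): host is the shared loop variable,
                // so both goroutines tend to observe the last element.
                fmt.Println("checking that time synchronization is healthy on", host)
            }()
        }
        wg.Wait()
        // Fix: pass the value explicitly: go func(h string) { ... }(host)
    }
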
STEP: Dumping logs from the "capz-e2e-6ulqf5-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-6ulqf5/capz-e2e-6ulqf5-aks logs
Nov  6 19:35:56.977: INFO: Collecting logs for node aks-agentpool1-18539306-vmss000000 in cluster capz-e2e-6ulqf5-aks in namespace capz-e2e-6ulqf5

E1106 19:36:46.269562   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zbyfux/events?resourceVersion=8875": dial tcp: lookup capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1106 19:37:31.119235   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zbyfux/events?resourceVersion=8875": dial tcp: lookup capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov  6 19:38:07.201: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool0, cluster capz-e2e-6ulqf5/capz-e2e-6ulqf5-aks: [dialing public load balancer at capz-e2e-6ulqf5-aks-87d7876f.hcp.northcentralus.azmk8s.io: dial tcp 52.162.194.91:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
Nov  6 19:38:07.699: INFO: Collecting logs for node aks-agentpool1-18539306-vmss000000 in cluster capz-e2e-6ulqf5-aks in namespace capz-e2e-6ulqf5

E1106 19:38:18.072557   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zbyfux/events?resourceVersion=8875": dial tcp: lookup capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1106 19:38:59.853755   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zbyfux/events?resourceVersion=8875": dial tcp: lookup capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1106 19:39:58.590337   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zbyfux/events?resourceVersion=8875": dial tcp: lookup capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov  6 19:40:18.277: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool1, cluster capz-e2e-6ulqf5/capz-e2e-6ulqf5-aks: [dialing public load balancer at capz-e2e-6ulqf5-aks-87d7876f.hcp.northcentralus.azmk8s.io: dial tcp 52.162.194.91:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
STEP: Dumping workload cluster capz-e2e-6ulqf5/capz-e2e-6ulqf5-aks kube-system pod logs
STEP: Fetching kube-system pod logs took 467.267861ms
STEP: Dumping workload cluster capz-e2e-6ulqf5/capz-e2e-6ulqf5-aks Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-85p5x, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-84d976c568-w6gkm, container coredns
STEP: Creating log watcher for controller kube-system/calico-typha-deployment-76cb9744d8-jvgml, container calico-typha
... skipping 8 lines ...
STEP: Fetching activity logs took 470.411221ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-6ulqf5" namespace
STEP: Deleting all clusters in the capz-e2e-6ulqf5 namespace
STEP: Deleting cluster capz-e2e-6ulqf5-aks
INFO: Waiting for the Cluster capz-e2e-6ulqf5/capz-e2e-6ulqf5-aks to be deleted
STEP: Waiting for cluster capz-e2e-6ulqf5-aks to be deleted
E1106 19:40:46.983394   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zbyfux/events?resourceVersion=8875": dial tcp: lookup capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1106 19:41:19.376069   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zbyfux/events?resourceVersion=8875": dial tcp: lookup capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1106 19:42:18.200475   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zbyfux/events?resourceVersion=8875": dial tcp: lookup capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1106 19:42:58.543015   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zbyfux/events?resourceVersion=8875": dial tcp: lookup capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1106 19:43:57.443905   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zbyfux/events?resourceVersion=8875": dial tcp: lookup capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1106 19:44:40.752841   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zbyfux/events?resourceVersion=8875": dial tcp: lookup capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-6ulqf5
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1106 19:45:40.190261   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zbyfux/events?resourceVersion=8875": dial tcp: lookup capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1106 19:46:36.013088   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zbyfux/events?resourceVersion=8875": dial tcp: lookup capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "with a single control plane node and 1 node" ran for 15m37s on Ginkgo node 1 of 3


• [SLOW TEST:936.858 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 8 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Sat, 06 Nov 2021 19:36:58 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-izn1nu" for hosting the cluster
Nov  6 19:36:58.869: INFO: starting to create namespace for hosting the "capz-e2e-izn1nu" test spec
2021/11/06 19:36:58 failed trying to get namespace (capz-e2e-izn1nu):namespaces "capz-e2e-izn1nu" not found
INFO: Creating namespace capz-e2e-izn1nu
INFO: Creating event watcher for namespace "capz-e2e-izn1nu"
Nov  6 19:36:58.900: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-izn1nu-win-ha
INFO: Creating the workload cluster with name "capz-e2e-izn1nu-win-ha" using the "windows" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 91 lines ...
STEP: waiting for job default/curl-to-elb-jobff44sxxhfir to be complete
Nov  6 19:49:03.168: INFO: waiting for job default/curl-to-elb-jobff44sxxhfir to be complete
Nov  6 19:49:13.211: INFO: job default/curl-to-elb-jobff44sxxhfir is complete, took 10.042730238s
STEP: connecting directly to the external LB service
Nov  6 19:49:13.211: INFO: starting attempts to connect directly to the external LB service
2021/11/06 19:49:13 [DEBUG] GET http://52.159.78.42
2021/11/06 19:49:43 [ERR] GET http://52.159.78.42 request failed: Get "http://52.159.78.42": dial tcp 52.159.78.42:80: i/o timeout
2021/11/06 19:49:43 [DEBUG] GET http://52.159.78.42: retrying in 1s (4 left)
2021/11/06 19:50:14 [ERR] GET http://52.159.78.42 request failed: Get "http://52.159.78.42": dial tcp 52.159.78.42:80: i/o timeout
2021/11/06 19:50:14 [DEBUG] GET http://52.159.78.42: retrying in 2s (3 left)
Nov  6 19:50:16.247: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov  6 19:50:16.247: INFO: starting to delete external LB service web-windowso6weui-elb
Nov  6 19:50:16.305: INFO: starting to delete deployment web-windowso6weui
Nov  6 19:50:16.324: INFO: starting to delete job curl-to-elb-jobff44sxxhfir
... skipping 43 lines ...
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-n8mcf, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-izn1nu-win-ha-control-plane-5mdz6, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-izn1nu-win-ha-control-plane-dmckf, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-izn1nu-win-ha-control-plane-dmckf, container etcd
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-izn1nu-win-ha-control-plane-2mpwx, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-vskqx, container kube-proxy
STEP: Got error while iterating over activity logs for resource group capz-e2e-izn1nu-win-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000855237s
STEP: Dumping all the Cluster API resources in the "capz-e2e-izn1nu" namespace
STEP: Deleting all clusters in the capz-e2e-izn1nu namespace
STEP: Deleting cluster capz-e2e-izn1nu-win-ha
INFO: Waiting for the Cluster capz-e2e-izn1nu/capz-e2e-izn1nu-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-izn1nu-win-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-izn1nu-win-ha-control-plane-5mdz6, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-izn1nu-win-ha-control-plane-2mpwx, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-khtlm, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-izn1nu-win-ha-control-plane-2mpwx, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-izn1nu-win-ha-control-plane-5mdz6, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-2g6cq, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-izn1nu-win-ha-control-plane-dmckf, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-n8mcf, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-izn1nu-win-ha-control-plane-dmckf, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5fx25, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-4qsd2, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-izn1nu-win-ha-control-plane-2mpwx, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-tpfrw, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-j8m42, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-izn1nu-win-ha-control-plane-2mpwx, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-nb8lb, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-izn1nu-win-ha-control-plane-5mdz6, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-izn1nu-win-ha-control-plane-dmckf, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-izn1nu-win-ha-control-plane-5mdz6, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-izn1nu-win-ha-control-plane-dmckf, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-izn1nu
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 34m40s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a single control plane node and a Linux AzureMachinePool with 1 node and a Windows AzureMachinePool with 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543

INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Sat, 06 Nov 2021 19:45:44 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-eeiqwu" for hosting the cluster
Nov  6 19:45:44.038: INFO: starting to create namespace for hosting the "capz-e2e-eeiqwu" test spec
2021/11/06 19:45:44 failed trying to get namespace (capz-e2e-eeiqwu): namespaces "capz-e2e-eeiqwu" not found
INFO: Creating namespace capz-e2e-eeiqwu
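The "not found" line two steps up is expected, not a failure: the framework Gets the namespace first and only Creates it when the Get comes back NotFound. A sketch of that get-or-create pattern (clientset construction omitted):

```go
// Get-or-create, matching the "failed trying to get namespace ... not found"
// then "Creating namespace" lines above.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func ensureNamespace(ctx context.Context, cs kubernetes.Interface, name string) (*corev1.Namespace, error) {
	ns, err := cs.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
	if err == nil {
		return ns, nil // already exists, reuse it
	}
	if !apierrors.IsNotFound(err) {
		return nil, err // a real API error, not just "not found"
	}
	// This branch corresponds to the "Creating namespace" step in the log.
	return cs.CoreV1().Namespaces().Create(ctx,
		&corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: name}},
		metav1.CreateOptions{})
}
```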
INFO: Creating event watcher for namespace "capz-e2e-eeiqwu"
Nov  6 19:45:44.084: INFO: Creating cluster identity secret
INFO: Cluster name is capz-e2e-eeiqwu-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-eeiqwu-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.1, 1 control-plane machine, 1 worker machine)
INFO: Getting the cluster template yaml
... skipping 104 lines ...
Nov  6 20:07:54.462: INFO: Collecting boot logs for AzureMachine capz-e2e-eeiqwu-win-vmss-control-plane-p6p4h

Nov  6 20:08:21.894: INFO: Collecting logs for node win-p-win000000 in cluster capz-e2e-eeiqwu-win-vmss in namespace capz-e2e-eeiqwu

Nov  6 20:08:38.957: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-e2e-eeiqwu-win-vmss-mp-0

Failed to get logs for machine pool capz-e2e-eeiqwu-win-vmss-mp-0, cluster capz-e2e-eeiqwu/capz-e2e-eeiqwu-win-vmss: [running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1]
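All six commands in that error fail for the same reason: instance 0 of this machine pool is a Windows node, and the collector runs Linux-only commands (cat on /var/log, journalctl) over SSH, so each exits with status 1. A sketch of such a collection loop with golang.org/x/crypto/ssh (address and auth config are placeholders); note that x/crypto/ssh renders a non-zero exit as exactly the "Process exited with status 1" text seen above:

```go
// Sketch of the log collection that failed above: run a fixed set of Linux
// commands on a node over SSH. Against a Windows VMSS instance each command
// exits with status 1, producing the error list in the log.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

var logCommands = []string{
	"cat /var/log/cloud-init.log",
	"cat /var/log/cloud-init-output.log",
	"journalctl --no-pager --output=short-precise -u kubelet.service",
	"journalctl --no-pager --output=short-precise",
	"journalctl --no-pager --output=short-precise -u containerd.service",
	"journalctl --no-pager --output=short-precise -k",
}

func collectNodeLogs(addr string, cfg *ssh.ClientConfig) error {
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	for _, cmd := range logCommands {
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		sess.Stdout = os.Stdout
		if err := sess.Run(cmd); err != nil {
			// On a Windows node these Linux commands all fail like this.
			fmt.Fprintf(os.Stderr, "running command %q: %v\n", cmd, err)
		}
		sess.Close()
	}
	return nil
}
```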
Nov  6 20:09:05.572: INFO: Collecting logs for node win-p-win000000 in cluster capz-e2e-eeiqwu-win-vmss in namespace capz-e2e-eeiqwu

Nov  6 20:09:37.265: INFO: Collecting boot logs for VMSS instance 0 of scale set win-p-win

STEP: Dumping workload cluster capz-e2e-eeiqwu/capz-e2e-eeiqwu-win-vmss kube-system pod logs
STEP: Fetching kube-system pod logs took 196.743951ms
... skipping 7 lines ...
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-eeiqwu-win-vmss-control-plane-p6p4h, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-eeiqwu-win-vmss-control-plane-p6p4h, container kube-apiserver
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-eeiqwu-win-vmss-control-plane-p6p4h, container etcd
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-l5tvk, container coredns
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-eeiqwu-win-vmss-control-plane-p6p4h, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-2n569, container kube-flannel
STEP: Got error while iterating over activity logs for resource group capz-e2e-eeiqwu-win-vmss: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=0 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001538844s
STEP: Dumping all the Cluster API resources in the "capz-e2e-eeiqwu" namespace
STEP: Deleting all clusters in the capz-e2e-eeiqwu namespace
STEP: Deleting cluster capz-e2e-eeiqwu-win-vmss
INFO: Waiting for the Cluster capz-e2e-eeiqwu/capz-e2e-eeiqwu-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-eeiqwu-win-vmss to be deleted
... skipping 9 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating a Windows enabled VMSS cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:542
    with a single control plane node and a Linux AzureMachinePool with 1 node and a Windows AzureMachinePool with 1 node
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543
------------------------------
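The indented description and file:line pairs in the report above are Ginkgo's spec tree. Roughly, the suite declares them like this (shape only, not the actual contents of azure_test.go; the outer description is elided in this log):

```go
// Shape of the Ginkgo specs reported above (a sketch, not the real
// azure_test.go; the outer Describe's text is elided in this log).
package e2e_test

import (
	. "github.com/onsi/ginkgo"
)

var _ = Describe("...", func() { // azure_test.go:43
	Context("Creating a Windows enabled VMSS cluster", func() { // azure_test.go:542
		It("with a single control plane node and a Linux AzureMachinePool with 1 node and a Windows AzureMachinePool with 1 node", func() { // azure_test.go:543
			// create the workload cluster from the "machine-pool-windows"
			// template, exercise it, then delete it and verify cleanup
		})
	})
})
```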
E1106 19:47:26.572400   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zbyfux/events?resourceVersion=8875": dial tcp: lookup capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 52 lines ...
E1106 20:27:01.746197   24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zbyfux/events?resourceVersion=8875": dial tcp: lookup capz-e2e-zbyfux-public-custom-vnet-4bd2f3a7.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
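These reflector errors are the event watcher for the capz-e2e-zbyfux namespace outliving its cluster: once the workload cluster and its public DNS name are deleted, every periodic re-list of Events fails with "no such host", and client-go's reflector logs the failure and retries until the watcher is stopped. A simplified stand-in for such a watcher:

```go
// Sketch of the event watcher whose reflector logs the errors above: an
// informer re-lists Events from the workload cluster's API server; once the
// cluster and its DNS name are gone, every re-list fails and the reflector
// (reflector.go:138 in the log) keeps retrying.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

func watchEvents(ctx context.Context, cs kubernetes.Interface, namespace string) {
	factory := informers.NewSharedInformerFactoryWithOptions(
		cs, 30*time.Second, informers.WithNamespace(namespace))
	informer := factory.Core().V1().Events().Informer()
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			ev := obj.(*corev1.Event)
			fmt.Printf("%s/%s: %s\n", ev.Namespace, ev.Name, ev.Message)
		},
	})
	// Runs until ctx is cancelled; list/watch failures are retried
	// internally by the reflector, producing the repeated E1106 lines.
	factory.Start(ctx.Done())
	factory.WaitForCacheSync(ctx.Done())
	<-ctx.Done()
}
```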
STEP: Tearing down the management cluster


Ran 9 of 22 Specs in 6809.949 seconds
SUCCESS! -- 9 Passed | 0 Failed | 0 Pending | 13 Skipped


Ginkgo ran 1 suite in 1h54m51.001146662s
Test Suite Passed
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=0
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Stopping Docker: docker
{"component":"entrypoint","file":"prow/entrypoint/run.go:165","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","severity":"error","time":"2021-11-06T20:29:43Z"}
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
/usr/local/bin/runner.sh: line 38: kill: (163) - No such process
Cleaning up after docker
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.
{"component":"entrypoint","file":"prow/entrypoint/run.go:252","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process gracefully exited before 15m0s grace period","severity":"error","time":"2021-11-06T20:30:05Z"}