Result: FAILURE
Tests: 1 failed / 8 succeeded
Started: 2022-04-24 19:36
Elapsed: 1h59m
Revision: release-0.5

Test Failures


capz-e2e Workload cluster creation Creating an AKS cluster with a single control plane node and 1 node 58m55s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\san\sAKS\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
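The `--ginkgo.focus` value above is a regular expression matched against the full spec name (container descriptions joined with spaces). As a sketch, this standalone check confirms the pattern selects exactly this spec; `matchesFocus` is an illustrative helper, not part of the capz test suite:

```go
package main

import (
	"fmt"
	"regexp"
)

// focusPattern is the --ginkgo.focus value from the repro command, verbatim.
var focusPattern = regexp.MustCompile(`capz\-e2e\sWorkload\scluster\screation\sCreating\san\sAKS\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$`)

// matchesFocus reports whether Ginkgo would select a spec with this full name.
func matchesFocus(specName string) bool {
	return focusPattern.MatchString(specName)
}

func main() {
	fmt.Println(matchesFocus("capz-e2e Workload cluster creation Creating an AKS cluster with a single control plane node and 1 node")) // true
}
```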
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454
Timed out after 1200.002s.
System machine pools not ready
Expected
    <bool>: false
to equal
    <bool>: true
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/aks.go:216
stdout/stderr from junit.e2e_suite.2.xml



8 Passed Tests

13 Skipped Tests

Error lines from build-log.txt

... skipping 436 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:269

INFO: "With ipv6 worker node" started at Sun, 24 Apr 2022 19:42:45 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-wcyuds" for hosting the cluster
Apr 24 19:42:45.885: INFO: starting to create namespace for hosting the "capz-e2e-wcyuds" test spec
2022/04/24 19:42:45 failed trying to get namespace (capz-e2e-wcyuds):namespaces "capz-e2e-wcyuds" not found
INFO: Creating namespace capz-e2e-wcyuds
INFO: Creating event watcher for namespace "capz-e2e-wcyuds"
Apr 24 19:42:45.964: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-wcyuds-ipv6
INFO: Creating the workload cluster with name "capz-e2e-wcyuds-ipv6" using the "ipv6" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
STEP: Fetching activity logs took 650.03304ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-wcyuds" namespace
STEP: Deleting all clusters in the capz-e2e-wcyuds namespace
STEP: Deleting cluster capz-e2e-wcyuds-ipv6
INFO: Waiting for the Cluster capz-e2e-wcyuds/capz-e2e-wcyuds-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-wcyuds-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-wcyuds-ipv6-control-plane-bzrpk, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-wcyuds-ipv6-control-plane-bzrpk, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-4hpm4, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-rwhrm, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-wcyuds-ipv6-control-plane-bzrpk, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-v8mlt, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-rxwww, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-wcyuds-ipv6-control-plane-h7lkt, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-wcyuds-ipv6-control-plane-h7lkt, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-ccfcb, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-p7wz4, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-wz2nv, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-wcyuds-ipv6-control-plane-h7lkt, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-wcyuds-ipv6-control-plane-bzrpk, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-wcyuds-ipv6-control-plane-h7lkt, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-kvk2g, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-nrt92, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-wcyuds
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 18m46s on Ginkgo node 3 of 3

... skipping 10 lines ...
  with a single control plane node and an AzureMachinePool with 2 nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:315

INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" started at Sun, 24 Apr 2022 20:01:31 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-g34mhx" for hosting the cluster
Apr 24 20:01:31.772: INFO: starting to create namespace for hosting the "capz-e2e-g34mhx" test spec
2022/04/24 20:01:31 failed trying to get namespace (capz-e2e-g34mhx):namespaces "capz-e2e-g34mhx" not found
INFO: Creating namespace capz-e2e-g34mhx
INFO: Creating event watcher for namespace "capz-e2e-g34mhx"
Apr 24 20:01:31.804: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-g34mhx-vmss
INFO: Creating the workload cluster with name "capz-e2e-g34mhx-vmss" using the "machine-pool" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 106 lines ...
STEP: Fetching activity logs took 3.201257828s
STEP: Dumping all the Cluster API resources in the "capz-e2e-g34mhx" namespace
STEP: Deleting all clusters in the capz-e2e-g34mhx namespace
STEP: Deleting cluster capz-e2e-g34mhx-vmss
INFO: Waiting for the Cluster capz-e2e-g34mhx/capz-e2e-g34mhx-vmss to be deleted
STEP: Waiting for cluster capz-e2e-g34mhx-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-sdb4s, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-m5v54, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-2wtr6, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-2l7td, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-g34mhx
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" ran for 22m51s on Ginkgo node 3 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:203

INFO: "With 3 control-plane nodes and 2 worker nodes" started at Sun, 24 Apr 2022 19:42:45 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-ycwv10" for hosting the cluster
Apr 24 19:42:45.881: INFO: starting to create namespace for hosting the "capz-e2e-ycwv10" test spec
2022/04/24 19:42:45 failed trying to get namespace (capz-e2e-ycwv10):namespaces "capz-e2e-ycwv10" not found
INFO: Creating namespace capz-e2e-ycwv10
INFO: Creating event watcher for namespace "capz-e2e-ycwv10"
Apr 24 19:42:45.923: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-ycwv10-ha
INFO: Creating the workload cluster with name "capz-e2e-ycwv10-ha" using the "(default)" template (Kubernetes v1.22.1, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 67 lines ...
Apr 24 19:52:41.792: INFO: starting to delete external LB service weboshxpu-elb
Apr 24 19:52:41.879: INFO: starting to delete deployment weboshxpu
Apr 24 19:52:41.917: INFO: starting to delete job curl-to-elb-jobvcow7ocjc89
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Apr 24 19:52:41.995: INFO: starting to create dev deployment namespace
2022/04/24 19:52:42 failed trying to get namespace (development):namespaces "development" not found
2022/04/24 19:52:42 namespace development does not exist, creating...
STEP: Creating production namespace
Apr 24 19:52:42.119: INFO: starting to create prod deployment namespace
2022/04/24 19:52:42 failed trying to get namespace (production):namespaces "production" not found
2022/04/24 19:52:42 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
Apr 24 19:52:42.250: INFO: starting to create frontend-prod deployments
Apr 24 19:52:42.296: INFO: starting to create frontend-dev deployments
Apr 24 19:52:42.351: INFO: starting to create backend deployments
Apr 24 19:52:42.396: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Apr 24 19:53:05.365: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.44.69 port 80: Connection timed out

STEP: Cleaning up after ourselves
Apr 24 19:55:15.478: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
STEP: Applying a network policy to deny egress access in development namespace
Apr 24 19:55:15.629: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.44.69 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.44.69 port 80: Connection timed out

STEP: Cleaning up after ourselves
Apr 24 19:59:36.239: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Apr 24 19:59:36.405: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.201.194 port 80: Connection timed out

STEP: Cleaning up after ourselves
Apr 24 20:01:47.311: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
Apr 24 20:01:47.468: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.44.68 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.201.194 port 80: Connection timed out

STEP: Cleaning up after ourselves
Apr 24 20:06:09.456: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Apr 24 20:06:09.629: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.44.69 port 80: Connection timed out

STEP: Cleaning up after ourselves
Apr 24 20:08:19.862: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
Apr 24 20:08:20.041: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.44.69 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: Dumping logs from the "capz-e2e-ycwv10-ha" workload cluster
STEP: Dumping workload cluster capz-e2e-ycwv10/capz-e2e-ycwv10-ha logs
Apr 24 20:10:31.958: INFO: INFO: Collecting logs for node capz-e2e-ycwv10-ha-control-plane-n9h5x in cluster capz-e2e-ycwv10-ha in namespace capz-e2e-ycwv10

Apr 24 20:10:42.217: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-ycwv10-ha-control-plane-n9h5x
... skipping 39 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-6tj9r, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-gnvmg, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-7qgxp, container coredns
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-ycwv10-ha-control-plane-n9h5x, container etcd
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-ycwv10-ha-control-plane-kptv5, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-vh2rv, container kube-proxy
STEP: Got error while iterating over activity logs for resource group capz-e2e-ycwv10-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001104376s
STEP: Dumping all the Cluster API resources in the "capz-e2e-ycwv10" namespace
STEP: Deleting all clusters in the capz-e2e-ycwv10 namespace
STEP: Deleting cluster capz-e2e-ycwv10-ha
INFO: Waiting for the Cluster capz-e2e-ycwv10/capz-e2e-ycwv10-ha to be deleted
STEP: Waiting for cluster capz-e2e-ycwv10-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-t2m74, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-gnvmg, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-ycwv10-ha-control-plane-kptv5, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-ycwv10-ha-control-plane-kptv5, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-45xpk, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-ycwv10-ha-control-plane-kptv5, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-ycwv10-ha-control-plane-kptv5, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-2gx5x, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-pmhm2, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-6tj9r, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-ycwv10-ha-control-plane-n9h5x, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-7nkch, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vh2rv, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-ycwv10-ha-control-plane-n9h5x, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-ycwv10-ha-control-plane-n9h5x, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-ycwv10-ha-control-plane-n9h5x, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-ycwv10
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 worker nodes" ran for 41m40s on Ginkgo node 1 of 3

... skipping 8 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:141

INFO: "Creates a public management cluster in the same vnet" started at Sun, 24 Apr 2022 19:42:45 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-5vbmev" for hosting the cluster
Apr 24 19:42:45.881: INFO: starting to create namespace for hosting the "capz-e2e-5vbmev" test spec
2022/04/24 19:42:45 failed trying to get namespace (capz-e2e-5vbmev):namespaces "capz-e2e-5vbmev" not found
INFO: Creating namespace capz-e2e-5vbmev
INFO: Creating event watcher for namespace "capz-e2e-5vbmev"
Apr 24 19:42:45.969: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-5vbmev-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Dumping workload cluster capz-e2e-5vbmev/capz-e2e-5vbmev-public-custom-vnet Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-q8njv, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-98rq2, container coredns
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-5vbmev-public-custom-vnet-control-plane-8xz7q, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-2rx9j, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-qw8gr, container calico-node
STEP: Got error while iterating over activity logs for resource group capz-e2e-5vbmev-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000759596s
STEP: Dumping all the Cluster API resources in the "capz-e2e-5vbmev" namespace
STEP: Deleting all clusters in the capz-e2e-5vbmev namespace
STEP: Deleting cluster capz-e2e-5vbmev-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-5vbmev/capz-e2e-5vbmev-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-5vbmev-public-custom-vnet to be deleted
W0424 20:28:05.965932   24134 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I0424 20:28:36.794658   24134 trace.go:205] Trace[1702700223]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (24-Apr-2022 20:28:06.793) (total time: 30000ms):
Trace[1702700223]: [30.000830547s] [30.000830547s] END
E0424 20:28:36.794721   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp 20.232.83.243:6443: i/o timeout
I0424 20:29:08.866962   24134 trace.go:205] Trace[1002155012]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (24-Apr-2022 20:28:38.866) (total time: 30000ms):
Trace[1002155012]: [30.000756371s] [30.000756371s] END
E0424 20:29:08.867035   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp 20.232.83.243:6443: i/o timeout
I0424 20:29:42.185517   24134 trace.go:205] Trace[1490635578]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (24-Apr-2022 20:29:12.184) (total time: 30001ms):
Trace[1490635578]: [30.00101031s] [30.00101031s] END
E0424 20:29:42.185647   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp 20.232.83.243:6443: i/o timeout
I0424 20:30:22.655853   24134 trace.go:205] Trace[809032984]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (24-Apr-2022 20:29:52.654) (total time: 30001ms):
Trace[809032984]: [30.001240219s] [30.001240219s] END
E0424 20:30:22.655918   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp 20.232.83.243:6443: i/o timeout
I0424 20:31:16.104348   24134 trace.go:205] Trace[533059725]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (24-Apr-2022 20:30:46.101) (total time: 30002ms):
Trace[533059725]: [30.002606059s] [30.002606059s] END
E0424 20:31:16.104413   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp 20.232.83.243:6443: i/o timeout
I0424 20:32:21.764705   24134 trace.go:205] Trace[92750878]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (24-Apr-2022 20:31:51.761) (total time: 30003ms):
Trace[92750878]: [30.003151322s] [30.003151322s] END
E0424 20:32:21.764773   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp 20.232.83.243:6443: i/o timeout
E0424 20:33:20.591884   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-5vbmev
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Apr 24 20:33:26.990: INFO: deleting an existing virtual network "custom-vnet"
Apr 24 20:33:37.632: INFO: deleting an existing route table "node-routetable"
Apr 24 20:33:39.946: INFO: deleting an existing network security group "node-nsg"
Apr 24 20:33:50.301: INFO: deleting an existing network security group "control-plane-nsg"
E0424 20:33:56.783705   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Apr 24 20:34:00.657: INFO: verifying the existing resource group "capz-e2e-5vbmev-public-custom-vnet" is empty
Apr 24 20:34:00.744: INFO: deleting the existing resource group "capz-e2e-5vbmev-public-custom-vnet"
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E0424 20:34:33.402350   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0424 20:35:09.476681   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 52m28s on Ginkgo node 2 of 3


• [SLOW TEST:3148.272 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a 1 control plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:419

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Sun, 24 Apr 2022 20:24:26 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-yn1p1j" for hosting the cluster
Apr 24 20:24:26.231: INFO: starting to create namespace for hosting the "capz-e2e-yn1p1j" test spec
2022/04/24 20:24:26 failed trying to get namespace (capz-e2e-yn1p1j):namespaces "capz-e2e-yn1p1j" not found
INFO: Creating namespace capz-e2e-yn1p1j
INFO: Creating event watcher for namespace "capz-e2e-yn1p1j"
Apr 24 20:24:26.263: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-yn1p1j-oot
INFO: Creating the workload cluster with name "capz-e2e-yn1p1j-oot" using the "external-cloud-provider" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 98 lines ...
STEP: Fetching activity logs took 738.728742ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-yn1p1j" namespace
STEP: Deleting all clusters in the capz-e2e-yn1p1j namespace
STEP: Deleting cluster capz-e2e-yn1p1j-oot
INFO: Waiting for the Cluster capz-e2e-yn1p1j/capz-e2e-yn1p1j-oot to be deleted
STEP: Waiting for cluster capz-e2e-yn1p1j-oot to be deleted
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-wpd8n, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-x4xlh, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-swmxq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-4t5k7, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-zdm6b, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-vzdhl, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-698h5, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-yn1p1j-oot-control-plane-k46c6, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-yn1p1j-oot-control-plane-k46c6, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-yn1p1j-oot-control-plane-k46c6, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-pmn9l, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-8dvt7, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-controller-manager, container cloud-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-yn1p1j-oot-control-plane-k46c6, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-c2lrn, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-czt7d, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-9zjg4, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-yn1p1j
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 16m25s on Ginkgo node 1 of 3

... skipping 10 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:377

INFO: "with a single control plane node and 1 node" started at Sun, 24 Apr 2022 20:24:22 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-jwvieo" for hosting the cluster
Apr 24 20:24:22.515: INFO: starting to create namespace for hosting the "capz-e2e-jwvieo" test spec
2022/04/24 20:24:22 failed trying to get namespace (capz-e2e-jwvieo):namespaces "capz-e2e-jwvieo" not found
INFO: Creating namespace capz-e2e-jwvieo
INFO: Creating event watcher for namespace "capz-e2e-jwvieo"
Apr 24 20:24:22.560: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-jwvieo-gpu
INFO: Creating the workload cluster with name "capz-e2e-jwvieo-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 58 lines ...
STEP: Fetching activity logs took 859.75261ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-jwvieo" namespace
STEP: Deleting all clusters in the capz-e2e-jwvieo namespace
STEP: Deleting cluster capz-e2e-jwvieo-gpu
INFO: Waiting for the Cluster capz-e2e-jwvieo/capz-e2e-jwvieo-gpu to be deleted
STEP: Waiting for cluster capz-e2e-jwvieo-gpu to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-v2j4w, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-xtkwr, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-jwvieo-gpu-control-plane-slxnp, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-jwvieo-gpu-control-plane-slxnp, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-jwvieo-gpu-control-plane-slxnp, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-jwvieo-gpu-control-plane-slxnp, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-nb8g5, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-jbxcs, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-h8nqs, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-jwvieo
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and 1 node" ran for 17m13s on Ginkgo node 3 of 3

... skipping 10 lines ...
  with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543

INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Sun, 24 Apr 2022 20:41:35 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-w28lns" for hosting the cluster
Apr 24 20:41:35.660: INFO: starting to create namespace for hosting the "capz-e2e-w28lns" test spec
2022/04/24 20:41:35 failed trying to get namespace (capz-e2e-w28lns):namespaces "capz-e2e-w28lns" not found
INFO: Creating namespace capz-e2e-w28lns
INFO: Creating event watcher for namespace "capz-e2e-w28lns"
Apr 24 20:41:35.698: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-w28lns-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-w28lns-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 89 lines ...
STEP: waiting for job default/curl-to-elb-jobx4q90q1iix5 to be complete
Apr 24 20:55:09.424: INFO: waiting for job default/curl-to-elb-jobx4q90q1iix5 to be complete
Apr 24 20:55:19.495: INFO: job default/curl-to-elb-jobx4q90q1iix5 is complete, took 10.071480459s
STEP: connecting directly to the external LB service
Apr 24 20:55:19.495: INFO: starting attempts to connect directly to the external LB service
2022/04/24 20:55:19 [DEBUG] GET http://20.232.74.38
2022/04/24 20:55:49 [ERR] GET http://20.232.74.38 request failed: Get "http://20.232.74.38": dial tcp 20.232.74.38:80: i/o timeout
2022/04/24 20:55:49 [DEBUG] GET http://20.232.74.38: retrying in 1s (4 left)
Apr 24 20:55:50.555: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Apr 24 20:55:50.555: INFO: starting to delete external LB service web-windows6lznzh-elb
Apr 24 20:55:50.610: INFO: starting to delete deployment web-windows6lznzh
Apr 24 20:55:50.640: INFO: starting to delete job curl-to-elb-jobx4q90q1iix5
... skipping 29 lines ...
STEP: Fetching activity logs took 980.32083ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-w28lns" namespace
STEP: Deleting all clusters in the capz-e2e-w28lns namespace
STEP: Deleting cluster capz-e2e-w28lns-win-vmss
INFO: Waiting for the Cluster capz-e2e-w28lns/capz-e2e-w28lns-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-w28lns-win-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-27wrf, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-ht9s9, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-tvkcr, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-c29b8, container kube-flannel: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-w28lns
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 31m21s on Ginkgo node 3 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Sun, 24 Apr 2022 20:40:50 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-qlxke5" for hosting the cluster
Apr 24 20:40:50.976: INFO: starting to create namespace for hosting the "capz-e2e-qlxke5" test spec
2022/04/24 20:40:50 failed trying to get namespace (capz-e2e-qlxke5):namespaces "capz-e2e-qlxke5" not found
INFO: Creating namespace capz-e2e-qlxke5
INFO: Creating event watcher for namespace "capz-e2e-qlxke5"
Apr 24 20:40:51.007: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-qlxke5-win-ha
INFO: Creating the workload cluster with name "capz-e2e-qlxke5-win-ha" using the "windows" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 145 lines ...
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-qlxke5-win-ha-control-plane-296hx, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-qlxke5-win-ha-control-plane-cnsc7, container kube-scheduler
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-qlxke5-win-ha-control-plane-cnsc7, container etcd
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-qlxke5-win-ha-control-plane-j4vf5, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-2znh8, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-hwtc9, container kube-proxy
STEP: Got error while iterating over activity logs for resource group capz-e2e-qlxke5-win-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000445012s
STEP: Dumping all the Cluster API resources in the "capz-e2e-qlxke5" namespace
STEP: Deleting all clusters in the capz-e2e-qlxke5 namespace
STEP: Deleting cluster capz-e2e-qlxke5-win-ha
INFO: Waiting for the Cluster capz-e2e-qlxke5/capz-e2e-qlxke5-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-qlxke5-win-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-q6ljv, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-qlxke5-win-ha-control-plane-cnsc7, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-qlxke5-win-ha-control-plane-cnsc7, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-qlxke5-win-ha-control-plane-cnsc7, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-rhr4x, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-qlxke5-win-ha-control-plane-cnsc7, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-s7x4r, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-dqwd8, container kube-flannel: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-qlxke5
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 41m50s on Ginkgo node 1 of 3

... skipping 10 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454

INFO: "with a single control plane node and 1 node" started at Sun, 24 Apr 2022 20:35:14 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-xj6jih" for hosting the cluster
Apr 24 20:35:14.156: INFO: starting to create namespace for hosting the "capz-e2e-xj6jih" test spec
2022/04/24 20:35:14 failed trying to get namespace (capz-e2e-xj6jih):namespaces "capz-e2e-xj6jih" not found
INFO: Creating namespace capz-e2e-xj6jih
INFO: Creating event watcher for namespace "capz-e2e-xj6jih"
Apr 24 20:35:14.877: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-xj6jih-aks
INFO: Creating the workload cluster with name "capz-e2e-xj6jih-aks" using the "aks-multi-tenancy" template (Kubernetes v1.22.6, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 7 lines ...
machinepool.cluster.x-k8s.io/agentpool1 created
azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E0424 20:35:54.083641   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 7 lines ...
E0424 20:41:55.437122   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
Apr 24 20:42:46.519: INFO: Waiting for the first control plane machine managed by capz-e2e-xj6jih/capz-e2e-xj6jih-aks to be provisioned
STEP: Waiting for at least one control plane node to exist
E0424 20:42:49.585640   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 28 lines ...
E0424 21:02:40.674156   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Dumping logs from the "capz-e2e-xj6jih-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-xj6jih/capz-e2e-xj6jih-aks logs
STEP: Dumping workload cluster capz-e2e-xj6jih/capz-e2e-xj6jih-aks kube-system pod logs
STEP: Fetching kube-system pod logs took 402.299133ms
STEP: Dumping workload cluster capz-e2e-xj6jih/capz-e2e-xj6jih-aks Azure activity log
STEP: Creating log watcher for controller kube-system/azure-ip-masq-agent-5pp64, container azure-ip-masq-agent
... skipping 22 lines ...
STEP: Fetching activity logs took 814.442218ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-xj6jih" namespace
STEP: Deleting all clusters in the capz-e2e-xj6jih namespace
STEP: Deleting cluster capz-e2e-xj6jih-aks
INFO: Waiting for the Cluster capz-e2e-xj6jih/capz-e2e-xj6jih-aks to be deleted
STEP: Waiting for cluster capz-e2e-xj6jih-aks to be deleted
E0424 21:03:20.176935   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 7 lines ...
E0424 21:09:16.967112   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0424 21:09:49.288479   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0424 21:10:36.619190   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0424 21:11:07.349780   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0424 21:11:46.038548   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0424 21:12:45.192345   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0424 21:13:20.891554   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0424 21:14:20.163966   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0424 21:14:56.021305   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0424 21:15:31.403435   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0424 21:16:17.683429   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0424 21:17:02.608066   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0424 21:17:51.255914   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0424 21:18:42.345453   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0424 21:19:38.377174   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0424 21:20:18.562768   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0424 21:20:51.921703   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0424 21:21:25.639092   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0424 21:22:16.045008   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0424 21:23:06.180562   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0424 21:23:54.535156   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0424 21:24:29.404893   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0424 21:25:17.197033   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0424 21:25:58.208405   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0424 21:26:28.571973   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0424 21:27:18.380674   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0424 21:28:17.330519   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0424 21:28:50.781169   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0424 21:29:36.986084   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0424 21:30:20.666891   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0424 21:30:56.860917   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0424 21:31:40.237891   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0424 21:32:35.127841   24134 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-5vbmev/events?resourceVersion=8336": dial tcp: lookup capz-e2e-5vbmev-public-custom-vnet-82cd1835.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Redacting sensitive information from logs
... skipping 2 lines (same reflector.go:138 watch error repeated, 21:33:11 and 21:34:08) ...


• Failure [3535.954 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating an AKS cluster
... skipping 55 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Workload cluster creation Creating an AKS cluster [It] with a single control plane node and 1 node 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/aks.go:216

Ran 9 of 22 Specs in 6799.697 seconds
FAIL! -- 8 Passed | 1 Failed | 0 Pending | 13 Skipped


Ginkgo ran 1 suite in 1h54m40.651420584s
Test Suite Failed
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...