PR: k8s-infra-cherrypick-robot: [release-1.3] Wait calico pod to be ready before installing windows CNMs
Result: success
Tests: 0 failed / 4 succeeded
Started: 2022-09-07 09:48
Elapsed: 1h39m
Revision:
Refs: 2634
Uploader: crier

No Test Failures!


Passed tests: 4
Skipped tests: 19

Error lines from build-log.txt

... skipping 515 lines ...
 ✓ Installing CNI 🔌
 • Installing StorageClass 💾  ...
 ✓ Installing StorageClass 💾
INFO: The kubeconfig file for the kind cluster is /tmp/e2e-kind811094229
INFO: Loading image: "capzci.azurecr.io/cluster-api-azure-controller-amd64:20220907094925"
INFO: Loading image: "registry.k8s.io/cluster-api/cluster-api-controller:v1.1.4"
INFO: [WARNING] Unable to load image "registry.k8s.io/cluster-api/cluster-api-controller:v1.1.4" into the kind cluster "capz-e2e": error saving image "registry.k8s.io/cluster-api/cluster-api-controller:v1.1.4" to "/tmp/image-tar922477711/image.tar": unable to read image data: Error response from daemon: reference does not exist
INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.1.4"
INFO: [WARNING] Unable to load image "registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.1.4" into the kind cluster "capz-e2e": error saving image "registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.1.4" to "/tmp/image-tar2646808494/image.tar": unable to read image data: Error response from daemon: reference does not exist
INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.1.4"
INFO: [WARNING] Unable to load image "registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.1.4" into the kind cluster "capz-e2e": error saving image "registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.1.4" to "/tmp/image-tar2389449420/image.tar": unable to read image data: Error response from daemon: reference does not exist
STEP: Initializing the bootstrap cluster
INFO: clusterctl init --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure azure
INFO: Waiting for provider controllers to be running
STEP: Waiting for deployment capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager to be available
INFO: Creating log watcher for controller capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-8447dbccc5-pwkth, container manager
STEP: Waiting for deployment capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager to be available
... skipping 19 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:271

INFO: "With ipv6 worker node" started at Wed, 07 Sep 2022 09:58:07 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-trx0rd" for hosting the cluster
Sep  7 09:58:07.405: INFO: starting to create namespace for hosting the "capz-e2e-trx0rd" test spec
2022/09/07 09:58:07 failed trying to get namespace (capz-e2e-trx0rd):namespaces "capz-e2e-trx0rd" not found
INFO: Creating namespace capz-e2e-trx0rd
INFO: Creating event watcher for namespace "capz-e2e-trx0rd"
Sep  7 09:58:07.449: INFO: Creating cluster identity secret "cluster-identity-secret"
INFO: Cluster name is capz-e2e-trx0rd-ipv6
INFO: Creating the workload cluster with name "capz-e2e-trx0rd-ipv6" using the "ipv6" template (Kubernetes v1.22.13, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 129 lines ...
STEP: Fetching activity logs took 1.941676788s
STEP: Dumping all the Cluster API resources in the "capz-e2e-trx0rd" namespace
STEP: Deleting all clusters in the capz-e2e-trx0rd namespace
STEP: Deleting cluster capz-e2e-trx0rd-ipv6
INFO: Waiting for the Cluster capz-e2e-trx0rd/capz-e2e-trx0rd-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-trx0rd-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-95kj2, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-rfwsl, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-trx0rd-ipv6-control-plane-2fwwd, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-trx0rd-ipv6-control-plane-jzlzb, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-s55dm, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-xc75k, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-trx0rd-ipv6-control-plane-jzlzb, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-trx0rd-ipv6-control-plane-2fwwd, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-trx0rd-ipv6-control-plane-j5667, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-trx0rd-ipv6-control-plane-jzlzb, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-dkt4l, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-trx0rd-ipv6-control-plane-j5667, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-trx0rd-ipv6-control-plane-jzlzb, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-typha-66d648f7ff-hpm4f, container calico-typha: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-tr58f, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-v4z4m, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-trx0rd-ipv6-control-plane-2fwwd, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-6gx4h, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-vdfl5, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-brkfb, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-trx0rd-ipv6-control-plane-j5667, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-trx0rd-ipv6-control-plane-j5667, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-trx0rd-ipv6-control-plane-2fwwd, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-969cf87c4-xr82x, container calico-kube-controllers: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-trx0rd
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 19m0s on Ginkgo node 3 of 3

... skipping 10 lines ...
  Creates a public management cluster in a custom vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:145

INFO: "Creates a public management cluster in a custom vnet" started at Wed, 07 Sep 2022 09:58:05 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-epk2qe" for hosting the cluster
Sep  7 09:58:05.227: INFO: starting to create namespace for hosting the "capz-e2e-epk2qe" test spec
2022/09/07 09:58:05 failed trying to get namespace (capz-e2e-epk2qe):namespaces "capz-e2e-epk2qe" not found
INFO: Creating namespace capz-e2e-epk2qe
INFO: Creating event watcher for namespace "capz-e2e-epk2qe"
Sep  7 09:58:05.268: INFO: Creating cluster identity secret "cluster-identity-secret"
INFO: Cluster name is capz-e2e-epk2qe-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 98 lines ...
STEP: Collecting events for Pod kube-system/calico-node-ncqsx
STEP: Collecting events for Pod kube-system/calico-node-jdgd6
STEP: Dumping workload cluster capz-e2e-epk2qe/capz-e2e-epk2qe-public-custom-vnet Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-969cf87c4-98blx, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/etcd-capz-e2e-epk2qe-public-custom-vnet-control-plane-ctvs5
STEP: Collecting events for Pod kube-system/calico-kube-controllers-969cf87c4-98blx
STEP: failed to find events of Pod "etcd-capz-e2e-epk2qe-public-custom-vnet-control-plane-ctvs5"
STEP: Collecting events for Pod kube-system/coredns-78fcd69978-rvv4k
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-tldmm, container coredns
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-epk2qe-public-custom-vnet-control-plane-ctvs5, container kube-apiserver
STEP: Collecting events for Pod kube-system/coredns-78fcd69978-tldmm
STEP: Collecting events for Pod kube-system/kube-apiserver-capz-e2e-epk2qe-public-custom-vnet-control-plane-ctvs5
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-epk2qe-public-custom-vnet-control-plane-ctvs5, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-controller-manager-capz-e2e-epk2qe-public-custom-vnet-control-plane-ctvs5
STEP: failed to find events of Pod "kube-scheduler-capz-e2e-epk2qe-public-custom-vnet-control-plane-ctvs5"
STEP: Creating log watcher for controller kube-system/kube-proxy-r8nmk, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-r8nmk
STEP: Creating log watcher for controller kube-system/kube-proxy-xghnc, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-xghnc
STEP: failed to find events of Pod "kube-controller-manager-capz-e2e-epk2qe-public-custom-vnet-control-plane-ctvs5"
STEP: Fetching activity logs took 3.725404697s
STEP: Dumping all the Cluster API resources in the "capz-e2e-epk2qe" namespace
STEP: Deleting all clusters in the capz-e2e-epk2qe namespace
STEP: Deleting cluster capz-e2e-epk2qe-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-epk2qe/capz-e2e-epk2qe-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-epk2qe-public-custom-vnet to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-epk2qe-public-custom-vnet-control-plane-ctvs5, container kube-controller-manager: http2: client connection lost
W0907 10:39:29.192826   30604 reflector.go:442] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
STEP: Got error while streaming logs for pod kube-system/kube-proxy-r8nmk, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-tldmm, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-rvv4k, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-jdgd6, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-epk2qe-public-custom-vnet-control-plane-ctvs5, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-epk2qe-public-custom-vnet-control-plane-ctvs5, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-969cf87c4-98blx, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-epk2qe-public-custom-vnet-control-plane-ctvs5, container etcd: http2: client connection lost
W0907 10:40:00.769250   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp 51.143.62.186:6443: i/o timeout
I0907 10:40:00.769387   30604 trace.go:205] Trace[1663940567]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167 (07-Sep-2022 10:39:30.767) (total time: 30001ms):
Trace[1663940567]: ---"Objects listed" error:Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp 51.143.62.186:6443: i/o timeout 30001ms (10:40:00.769)
Trace[1663940567]: [30.001667778s] [30.001667778s] END
E0907 10:40:00.769440   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp 51.143.62.186:6443: i/o timeout
W0907 10:40:33.848799   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp 51.143.62.186:6443: i/o timeout
I0907 10:40:33.848915   30604 trace.go:205] Trace[155798033]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167 (07-Sep-2022 10:40:03.846) (total time: 30002ms):
Trace[155798033]: ---"Objects listed" error:Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp 51.143.62.186:6443: i/o timeout 30002ms (10:40:33.848)
Trace[155798033]: [30.00283034s] [30.00283034s] END
E0907 10:40:33.848936   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp 51.143.62.186:6443: i/o timeout
W0907 10:41:07.667815   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp 51.143.62.186:6443: i/o timeout
I0907 10:41:07.667944   30604 trace.go:205] Trace[801688636]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167 (07-Sep-2022 10:40:37.666) (total time: 30001ms):
Trace[801688636]: ---"Objects listed" error:Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp 51.143.62.186:6443: i/o timeout 30001ms (10:41:07.667)
Trace[801688636]: [30.001265905s] [30.001265905s] END
E0907 10:41:07.667980   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp 51.143.62.186:6443: i/o timeout
W0907 10:41:44.337758   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp 51.143.62.186:6443: i/o timeout
I0907 10:41:44.337880   30604 trace.go:205] Trace[2006885331]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167 (07-Sep-2022 10:41:14.336) (total time: 30001ms):
Trace[2006885331]: ---"Objects listed" error:Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp 51.143.62.186:6443: i/o timeout 30001ms (10:41:44.337)
Trace[2006885331]: [30.001313354s] [30.001313354s] END
E0907 10:41:44.337902   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp 51.143.62.186:6443: i/o timeout
W0907 10:42:36.469512   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp 51.143.62.186:6443: i/o timeout
I0907 10:42:36.469672   30604 trace.go:205] Trace[10412047]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167 (07-Sep-2022 10:42:06.468) (total time: 30001ms):
Trace[10412047]: ---"Objects listed" error:Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp 51.143.62.186:6443: i/o timeout 30001ms (10:42:36.469)
Trace[10412047]: [30.001177894s] [30.001177894s] END
E0907 10:42:36.469694   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp 51.143.62.186:6443: i/o timeout
W0907 10:43:06.506583   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 10:43:06.506677   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-epk2qe
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Sep  7 10:43:24.162: INFO: deleting an existing virtual network "custom-vnet"
Sep  7 10:43:34.650: INFO: deleting an existing route table "node-routetable"
Sep  7 10:43:37.139: INFO: deleting an existing network security group "node-nsg"
W0907 10:43:44.705518   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 10:43:44.705695   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
Sep  7 10:43:47.670: INFO: deleting an existing network security group "control-plane-nsg"
Sep  7 10:43:58.169: INFO: verifying the existing resource group "capz-e2e-epk2qe-public-custom-vnet" is empty
Sep  7 10:43:58.270: INFO: deleting the existing resource group "capz-e2e-epk2qe-public-custom-vnet"
W0907 10:44:23.774553   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 10:44:23.774649   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
W0907 10:45:19.337631   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 10:45:19.337749   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in a custom vnet" ran for 47m36s on Ginkgo node 1 of 3


• [SLOW TEST:2856.114 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:44
... skipping 8 lines ...
  With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:197

INFO: "With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes" started at Wed, 07 Sep 2022 09:58:06 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-bs3xs4" for hosting the cluster
Sep  7 09:58:06.873: INFO: starting to create namespace for hosting the "capz-e2e-bs3xs4" test spec
2022/09/07 09:58:06 failed trying to get namespace (capz-e2e-bs3xs4):namespaces "capz-e2e-bs3xs4" not found
INFO: Creating namespace capz-e2e-bs3xs4
INFO: Creating event watcher for namespace "capz-e2e-bs3xs4"
Sep  7 09:58:06.914: INFO: Creating cluster identity secret "cluster-identity-secret"
INFO: Cluster name is capz-e2e-bs3xs4-ha
INFO: Creating the workload cluster with name "capz-e2e-bs3xs4-ha" using the "(default)" template (Kubernetes v1.22.13, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 63 lines ...
STEP: waiting for job default/curl-to-elb-job14smt5q22hs to be complete
Sep  7 10:22:06.094: INFO: waiting for job default/curl-to-elb-job14smt5q22hs to be complete
Sep  7 10:22:16.210: INFO: job default/curl-to-elb-job14smt5q22hs is complete, took 10.116035926s
STEP: connecting directly to the external LB service
Sep  7 10:22:16.210: INFO: starting attempts to connect directly to the external LB service
2022/09/07 10:22:16 [DEBUG] GET http://52.143.83.207
2022/09/07 10:22:46 [ERR] GET http://52.143.83.207 request failed: Get "http://52.143.83.207": dial tcp 52.143.83.207:80: i/o timeout
2022/09/07 10:22:46 [DEBUG] GET http://52.143.83.207: retrying in 1s (4 left)
Sep  7 10:22:54.533: INFO: successfully connected to the external LB service
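The `[DEBUG] GET … retrying in 1s (4 left)` lines above show the harness retrying the external-LB check with a bounded backoff until the load balancer starts answering. A minimal sketch of that retry shape in Python (the function name, parameters, and log wording here are illustrative, not the harness's actual code):

```python
import time

def get_with_retry(fetch, attempts=5, backoff=1.0):
    """Retry a flaky GET-style call, doubling the wait between tries.

    `fetch` is any zero-argument callable; the real harness uses an HTTP
    client, but the retry shape is the same. (Illustrative sketch only.)
    """
    last_err = None
    for remaining in range(attempts - 1, -1, -1):
        try:
            return fetch()
        except OSError as err:  # e.g. "dial tcp ...: i/o timeout"
            last_err = err
            if remaining == 0:
                break
            print(f"[DEBUG] request failed, retrying in {backoff:g}s ({remaining} left)")
            time.sleep(backoff)
            backoff *= 2
    raise last_err
```

Counting attempts down ("4 left") rather than up matches the log format above and makes the remaining budget visible at a glance.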
STEP: deleting the test resources
Sep  7 10:22:54.533: INFO: starting to delete external LB service webvvywwo-elb
Sep  7 10:22:54.626: INFO: waiting for the external LB service to be deleted: webvvywwo-elb
Sep  7 10:23:31.400: INFO: starting to delete deployment webvvywwo
Sep  7 10:23:31.472: INFO: starting to delete job curl-to-elb-job14smt5q22hs
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Sep  7 10:23:31.577: INFO: starting to create dev deployment namespace
2022/09/07 10:23:31 failed trying to get namespace (development):namespaces "development" not found
2022/09/07 10:23:31 namespace development does not exist, creating...
STEP: Creating production namespace
Sep  7 10:23:31.699: INFO: starting to create prod deployment namespace
2022/09/07 10:23:31 failed trying to get namespace (production):namespaces "production" not found
2022/09/07 10:23:31 namespace production does not exist, creating...
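The "failed trying to get namespace … not found" followed by "does not exist, creating..." pairs above are a get-then-create pattern: the lookup failure is expected on first run, not an error. A sketch of the pattern with an in-memory stand-in client (the method names are illustrative, not the real Kubernetes client API):

```python
def ensure_namespace(client, name):
    """Return the namespace `name`, creating it if the lookup fails."""
    try:
        return client.get_namespace(name)
    except KeyError:
        # Mirrors the harness log: "namespace ... does not exist, creating..."
        print(f"namespace {name} does not exist, creating...")
        return client.create_namespace(name)

class FakeClient:
    """Minimal in-memory stand-in used to exercise the pattern."""
    def __init__(self):
        self.namespaces = {}

    def get_namespace(self, name):
        return self.namespaces[name]  # raises KeyError when absent

    def create_namespace(self, name):
        self.namespaces[name] = {"metadata": {"name": name}}
        return self.namespaces[name]
```

The call is idempotent: a second `ensure_namespace` for the same name returns the existing object without logging a creation.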
STEP: Creating frontendProd, backend and network-policy pod deployments
Sep  7 10:23:31.818: INFO: starting to create frontend-prod deployments
Sep  7 10:23:31.885: INFO: starting to create frontend-dev deployments
Sep  7 10:23:31.951: INFO: starting to create backend deployments
Sep  7 10:23:32.021: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Sep  7 10:23:55.846: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.90.132 port 80: Connection timed out
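The timed-out `curl` above is the expected result: the `development/backend-deny-ingress` policy just applied blocks all inbound traffic to the backend pods. A sketch of what such a deny-all-ingress NetworkPolicy manifest could look like, built as a plain dict (the actual manifest the test applies may differ):

```python
def deny_ingress_policy(namespace, labels):
    """Build a NetworkPolicy manifest denying all ingress to pods
    matching `labels` in `namespace`. Illustrative sketch only."""
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": "backend-deny-ingress", "namespace": namespace},
        "spec": {
            "podSelector": {"matchLabels": labels},
            # Listing Ingress in policyTypes while providing no `ingress`
            # rules denies all inbound traffic to the selected pods.
            "policyTypes": ["Ingress"],
        },
    }
```

The deny comes from the combination: once any policy selects a pod for the Ingress direction, only traffic matching an explicit `ingress` rule is allowed, and here there are none.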

STEP: Cleaning up after ourselves
Sep  7 10:26:06.457: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
STEP: Applying a network policy to deny egress access in development namespace
Sep  7 10:26:06.704: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.90.132 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.90.132 port 80: Connection timed out

STEP: Cleaning up after ourselves
Sep  7 10:30:28.113: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Sep  7 10:30:28.370: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.82.131 port 80: Connection timed out

STEP: Cleaning up after ourselves
Sep  7 10:32:39.188: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
Sep  7 10:32:39.443: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.82.130 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.82.131 port 80: Connection timed out

STEP: Cleaning up after ourselves
Sep  7 10:37:01.333: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Sep  7 10:37:01.623: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.90.132 port 80: Connection timed out

STEP: Cleaning up after ourselves
Sep  7 10:39:12.883: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
Sep  7 10:39:13.166: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.90.132 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web-windowsi17p2t to be available
Sep  7 10:41:24.794: INFO: starting to wait for deployment to become available
Sep  7 10:42:05.085: INFO: Deployment default/web-windowsi17p2t is now available, took 40.290478437s
... skipping 53 lines ...
Sep  7 10:47:11.671: INFO: Collecting boot logs for AzureMachine capz-e2e-bs3xs4-ha-md-0-8b4rq

Sep  7 10:47:11.975: INFO: Collecting logs for Windows node capz-e2e-88t75 in cluster capz-e2e-bs3xs4-ha in namespace capz-e2e-bs3xs4

Sep  7 10:48:35.854: INFO: Collecting boot logs for AzureMachine capz-e2e-bs3xs4-ha-md-win-88t75

Failed to get logs for machine capz-e2e-bs3xs4-ha-md-win-646cccb4c6-96j5d, cluster capz-e2e-bs3xs4/capz-e2e-bs3xs4-ha: running command "Get-Content "C:\\cni.log"": Process exited with status 1
Sep  7 10:48:36.218: INFO: Collecting logs for Windows node capz-e2e-q45wb in cluster capz-e2e-bs3xs4-ha in namespace capz-e2e-bs3xs4

Sep  7 10:49:00.870: INFO: Collecting boot logs for AzureMachine capz-e2e-bs3xs4-ha-md-win-q45wb

STEP: Dumping workload cluster capz-e2e-bs3xs4/capz-e2e-bs3xs4-ha kube-system pod logs
STEP: Fetching kube-system pod logs took 569.654762ms
... skipping 69 lines ...
STEP: Fetching activity logs took 2.798532549s
STEP: Dumping all the Cluster API resources in the "capz-e2e-bs3xs4" namespace
STEP: Deleting all clusters in the capz-e2e-bs3xs4 namespace
STEP: Deleting cluster capz-e2e-bs3xs4-ha
INFO: Waiting for the Cluster capz-e2e-bs3xs4/capz-e2e-bs3xs4-ha to be deleted
STEP: Waiting for cluster capz-e2e-bs3xs4-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-bs3xs4-ha-control-plane-qtpqv, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-7fnwg, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-bs3xs4-ha-control-plane-f4mdm, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-bwvmf, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/containerd-logger-bjgmb, container containerd-logger: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-qdspw, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-sjzsf, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-969cf87c4-xzjlm, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-bs3xs4-ha-control-plane-xbpmp, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-7smnw, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-bs3xs4-ha-control-plane-f4mdm, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-7t4qv, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-bs3xs4-ha-control-plane-qtpqv, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-cbj8s, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/containerd-logger-lvkh6, container containerd-logger: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-bs3xs4-ha-control-plane-qtpqv, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-qdspw, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-dxxsn, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-bs3xs4-ha-control-plane-xbpmp, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-bs3xs4-ha-control-plane-xbpmp, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-dvfq2, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-dxxsn, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-cmsvf, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-nb554, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-bs3xs4-ha-control-plane-f4mdm, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-mjnzl, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-bs3xs4-ha-control-plane-xbpmp, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-8gmtl, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/csi-proxy-6tp22, container csi-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/csi-proxy-fnzfn, container csi-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-vs7c4, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-bs3xs4-ha-control-plane-qtpqv, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-h2zsn, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-bs3xs4-ha-control-plane-f4mdm, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-w7p72, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-bs3xs4
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes" ran for 1h0m7s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:310

INFO: "with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes" started at Wed, 07 Sep 2022 10:17:07 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-2io6mt" for hosting the cluster
Sep  7 10:17:07.566: INFO: starting to create namespace for hosting the "capz-e2e-2io6mt" test spec
2022/09/07 10:17:07 failed trying to get namespace (capz-e2e-2io6mt): namespaces "capz-e2e-2io6mt" not found
INFO: Creating namespace capz-e2e-2io6mt
INFO: Creating event watcher for namespace "capz-e2e-2io6mt"
Sep  7 10:17:07.602: INFO: Creating cluster identity secret "cluster-identity-secret"
INFO: Cluster name is capz-e2e-2io6mt-vmss
INFO: Creating the workload cluster with name "capz-e2e-2io6mt-vmss" using the "machine-pool" template (Kubernetes v1.22.13, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 38 lines ...
Sep  7 10:39:30.554: INFO: job default/kubescape-scan is complete, took 20.161810554s
Output of "kubescape scan framework nsa --enable-host-scan --exclude-namespaces kube-system,kube-public":

Logs for pod kubescape-scan-6pjsh:
ARMO security scanner starting
[progress] Installing host sensor
[Error] failed to init host sensor
Warning: 'kubescape' is not updated to the latest release: 'v2.0.170'
[progress] Downloading/Loading policy definitions
[success] Downloaded/Loaded policy
[progress] Accessing Kubernetes objects
W0907 10:39:18.385956       1 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
W0907 10:39:18.393481       1 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
[success] Accessed successfully to Kubernetes objects
[progress] Scanning cluster 
[success] Done scanning cluster 
[control: Allow privilege escalation - https://hub.armo.cloud/docs/c-0016] failed 😥
Description: Attackers may gain access to a container and uplift its privilege to enable excessive capabilities.
Failed:
   Namespace default
      Job - kubescape-scan 
Summary - Passed:0   Excluded:0   Failed:1   Total:1
Remediation: If your application does not need it, make sure the allowPrivilegeEscalation field of the securityContext is set to false.

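The remediation above is a one-line change in the container's security context. A minimal sketch, with hypothetical pod and image names:

```yaml
# Illustrative pod spec applying the remediation: forbid privilege escalation.
apiVersion: v1
kind: Pod
metadata:
  name: example            # hypothetical name
spec:
  containers:
    - name: app
      image: example/app:latest   # hypothetical image
      securityContext:
        allowPrivilegeEscalation: false   # the field the control checks
```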
[control: Allowed hostPath - https://hub.armo.cloud/docs/c-0006] passed 👍
Description: Mounting host directory to the container can be abused to get access to sensitive data and gain persistence on the host machine.
Summary - Passed:1   Excluded:0   Failed:0   Total:1

[control: Applications credentials in configuration files - https://hub.armo.cloud/docs/c-0012] passed 👍
Description: Attackers who have access to configuration files can steal the stored secrets and use them. This control checks if ConfigMaps or pod specifications have sensitive information in their configuration.
Summary - Passed:3   Excluded:0   Failed:0   Total:3

[control: Audit logs enabled - https://hub.armo.cloud/docs/c-0067] skipped 😕
Description: Audit logging is an important security feature in Kubernetes; it enables the operator to track requests to the cluster. It is important to use it so the operator has a record of events that happened in Kubernetes.
[control: Automatic mapping of service account - https://hub.armo.cloud/docs/c-0034] failed 😥
Description: A potential attacker may gain access to a POD and steal its service account token. Therefore, it is recommended to disable automatic mapping of the service account tokens in the service account configuration and enable it only for PODs that need to use them.
Failed:
   Namespace kube-node-lease
      ServiceAccount - default 
   Namespace default
      ServiceAccount - default 
      ServiceAccount - kubescape-discovery 
      Job - kubescape-scan 
Summary - Passed:0   Excluded:0   Failed:4   Total:4
Remediation: Disable automatic mounting of service account tokens to PODs either at the service account level or at the individual POD level by specifying automountServiceAccountToken: false. Note that the POD-level setting takes precedence.

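The field can be set at either level, as the remediation says. A sketch of both forms (names are hypothetical):

```yaml
# Disable token automounting for every pod using this service account...
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: default
automountServiceAccountToken: false
---
# ...or per pod; the pod-level setting takes precedence over the
# service-account-level one.
apiVersion: v1
kind: Pod
metadata:
  name: example            # hypothetical name
spec:
  automountServiceAccountToken: false
  containers:
    - name: app
      image: example/app:latest   # hypothetical image
```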
[control: CVE-2021-25741 - Using symlink for arbitrary host file system access. - https://hub.armo.cloud/docs/c-0058] skipped 😕
Description: A user may be able to create a container with subPath or subPathExpr volume mounts to access files & directories anywhere on the host filesystem. Following Kubernetes versions are affected: v1.22.0 - v1.22.1, v1.21.0 - v1.21.4, v1.20.0 - v1.20.10, version v1.19.14 and lower. This control checks the vulnerable versions and the actual usage of the subPath feature in all Pods in the cluster. If you want to learn more about the CVE, please refer to the CVE link: https://nvd.nist.gov/vuln/detail/CVE-2021-25741
[control: CVE-2021-25742-nginx-ingress-snippet-annotation-vulnerability - https://hub.armo.cloud/docs/c-0059] skipped 😕
Description: Security issue in ingress-nginx where a user that can create or update ingress objects can use the custom snippets feature to obtain all secrets in the cluster (see more at https://github.com/kubernetes/ingress-nginx/issues/7837)
[control: Cluster internal networking - https://hub.armo.cloud/docs/c-0054] failed 😥
Description: If no network policy is defined, attackers who gain access to a container may use it to move laterally in the cluster. This control lists namespaces in which no network policy is defined.
Failed:
   Namespace - default 
   Namespace - kube-node-lease 
Summary - Passed:0   Excluded:0   Failed:2   Total:2
Remediation: Define Kubernetes network policies or use alternative products to protect cluster network.

[control: Cluster-admin binding - https://hub.armo.cloud/docs/c-0035] failed 😥
Description: Attackers who have cluster admin permissions (can perform any action on any resource), can take advantage of their privileges for malicious activities. This control determines which subjects have cluster admin permissions.
Failed:
   Groups
      Group - system:masters 
Summary - Passed:52   Excluded:0   Failed:1   Total:53
Remediation: You should apply least privilege principle. Make sure cluster admin permissions are granted only when it is absolutely necessary. Don't use subjects with such high permissions for daily operations.

[control: Container hostPort - https://hub.armo.cloud/docs/c-0044] passed 👍
Description: Configuring hostPort requires a particular port number. If two objects specify the same hostPort, they cannot be deployed to the same node. This may prevent the second object from starting, even though Kubernetes will try to reschedule it on another node, provided there are available nodes with sufficient resources. Also, if the number of replicas of such a workload is higher than the number of nodes, the deployment will consistently fail.
Summary - Passed:1   Excluded:0   Failed:0   Total:1

[control: Control plane hardening - https://hub.armo.cloud/docs/c-0005] skipped 😕
Description: Kubernetes control plane API is running with non-secure port enabled which allows attackers to gain unprotected access to the cluster.
[control: Disable anonymous access to Kubelet service - https://hub.armo.cloud/docs/c-0069] skipped 😕
Description: By default, requests to the kubelet's HTTPS endpoint that are not rejected by other configured authentication methods are treated as anonymous requests, and given a username of system:anonymous and a group of system:unauthenticated.
[control: Enforce Kubelet client TLS authentication - https://hub.armo.cloud/docs/c-0070] skipped 😕
Description: Kubelets are the node-level orchestrators in the Kubernetes control plane. They publish service port 10250, where they accept commands from the API server. The operator must make sure that only the API server is allowed to submit commands to the Kubelet. This is done through client certificate verification: the Kubelet must be configured with a client CA file to use for this purpose.
[control: Exec into container - https://hub.armo.cloud/docs/c-0002] failed 😥
Description: Attackers with relevant permissions can run malicious commands in the context of legitimate containers in the cluster using “kubectl exec” command. This control determines which subjects have permissions to use this command.
Failed:
   Groups
      Group - system:masters 
Summary - Passed:52   Excluded:0   Failed:1   Total:53
Remediation: It is recommended to prohibit “kubectl exec” command in production environments. It is also recommended not to use subjects with this permission for daily cluster operations.

[control: Exposed dashboard - https://hub.armo.cloud/docs/c-0047] skipped 😕
Description: Kubernetes dashboard versions before v2.0.1 do not support user authentication. If exposed externally, it will allow unauthenticated remote management of the cluster. This control checks presence of the kubernetes-dashboard deployment and its version number.
[control: Host PID/IPC privileges - https://hub.armo.cloud/docs/c-0038] passed 👍
Description: Containers should be isolated from the host machine as much as possible. The hostPID and hostIPC fields in deployment yaml may allow cross-container influence and may expose the host itself to potentially malicious or destructive actions. This control identifies all PODs using hostPID or hostIPC privileges.
Summary - Passed:1   Excluded:0   Failed:0   Total:1

[control: HostNetwork access - https://hub.armo.cloud/docs/c-0041] passed 👍
Description: Potential attackers may gain access to a POD and inherit access to the entire host network. For example, in the AWS case, they will have access to the entire VPC. This control identifies all the PODs with host network access enabled.
Summary - Passed:1   Excluded:0   Failed:0   Total:1

[control: Immutable container filesystem - https://hub.armo.cloud/docs/c-0017] failed 😥
Description: Mutable container filesystem can be abused to inject malicious code or data into containers. Use immutable (read-only) filesystem to limit potential attacks.
Failed:
   Namespace default
      Job - kubescape-scan 
Summary - Passed:0   Excluded:0   Failed:1   Total:1
Remediation: Set the filesystem of the container to read-only when possible (POD securityContext, readOnlyRootFilesystem: true). If containers application needs to write into the filesystem, it is recommended to mount secondary filesystems for specific directories where application require write access.

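The remediation above combines a read-only root filesystem with a secondary writable mount. A minimal sketch (pod, image, and volume names are hypothetical):

```yaml
# Illustrative pod spec: immutable root filesystem, with an emptyDir
# mounted only where the application actually needs to write.
apiVersion: v1
kind: Pod
metadata:
  name: example            # hypothetical name
spec:
  containers:
    - name: app
      image: example/app:latest   # hypothetical image
      securityContext:
        readOnlyRootFilesystem: true
      volumeMounts:
        - name: scratch
          mountPath: /tmp          # writable scratch space
  volumes:
    - name: scratch
      emptyDir: {}
```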
[control: Ingress and Egress blocked - https://hub.armo.cloud/docs/c-0030] failed 😥
Description: Disable Ingress and Egress traffic on all pods wherever possible. It is recommended to define restrictive network policy on all new PODs, and then enable sources/destinations that this POD must communicate with.
Failed:
   Namespace default
      Job - kubescape-scan 
Summary - Passed:0   Excluded:0   Failed:1   Total:1
Remediation: Define a network policy that restricts ingress and egress connections.

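A common way to satisfy this control is a "default deny" policy per namespace, after which only explicitly allowed traffic flows. A minimal sketch (the policy name is hypothetical):

```yaml
# Selects every pod in the namespace (empty podSelector) and declares both
# policy types without any rules, so all ingress and egress is blocked
# unless some other NetworkPolicy allows it.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all   # hypothetical name
  namespace: default
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```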
[control: Insecure capabilities - https://hub.armo.cloud/docs/c-0046] passed 👍
Description: Giving insecure or excessive capabilities to a container can increase the impact of a container compromise. This control identifies all the PODs with dangerous capabilities (see documentation pages for details).
Summary - Passed:1   Excluded:0   Failed:0   Total:1

[control: Linux hardening - https://hub.armo.cloud/docs/c-0055] failed 😥
Description: Containers may be given more privileges than they actually need. This can increase the potential impact of a container compromise.
Failed:
   Namespace default
      Job - kubescape-scan 
Summary - Passed:0   Excluded:0   Failed:1   Total:1
Remediation: You can use AppArmor, Seccomp, SELinux and Linux Capabilities mechanisms to restrict containers abilities to utilize unwanted privileges.

[control: Non-root containers - https://hub.armo.cloud/docs/c-0013] failed 😥
Description: Potential attackers may gain access to a container and leverage its existing privileges to conduct an attack. Therefore, it is not recommended to deploy containers with root privileges unless absolutely necessary. This control identifies all the Pods running as root or able to escalate to root.
Failed:
   Namespace default
      Job - kubescape-scan 
Summary - Passed:0   Excluded:0   Failed:1   Total:1
Remediation: If your application does not need root privileges, make sure to define runAsUser or runAsGroup under the PodSecurityContext and use user ID 1000 or higher. Do not turn on the allowPrivilegeEscalation bit and make sure runAsNonRoot is true.

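The remediation maps to a few pod-level security context fields. A minimal sketch (pod and image names are hypothetical):

```yaml
# Illustrative non-root pod: the kubelet refuses to start the container
# as UID 0 (runAsNonRoot), and it runs as a fixed unprivileged user.
apiVersion: v1
kind: Pod
metadata:
  name: example            # hypothetical name
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 1000
  containers:
    - name: app
      image: example/app:latest   # hypothetical image
      securityContext:
        allowPrivilegeEscalation: false
```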
[control: PSP enabled - https://hub.armo.cloud/docs/c-0068] skipped 😕
Description: PSPs enable fine-grained authorization of pod creation, and it is important to enable them
[control: Privileged container - https://hub.armo.cloud/docs/c-0057] passed 👍
Description: Potential attackers may gain access to privileged containers and inherit access to the host resources. Therefore, it is not recommended to deploy privileged containers unless it is absolutely necessary. This control identifies all the privileged Pods.
Summary - Passed:1   Excluded:0   Failed:0   Total:1

[control: Resource policies - https://hub.armo.cloud/docs/c-0009] failed 😥
Description: CPU and memory resources should have a limit set for every container or a namespace to prevent resource exhaustion. This control identifies all the Pods without resource limit definitions by checking their yaml definition file as well as their namespace LimitRange objects. It is also recommended to use ResourceQuota object to restrict overall namespace resources, but this is not verified by this control.
Failed:
   Namespace default
      Job - kubescape-scan 
Summary - Passed:0   Excluded:0   Failed:1   Total:1
Remediation: Define LimitRange and Resource Limits in the namespace or in the deployment/POD yamls.

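As the remediation notes, limits can be set per container or defaulted for a whole namespace via a LimitRange. A minimal namespace-level sketch (name and values are illustrative):

```yaml
# Containers created in this namespace without explicit resource
# requests/limits inherit these defaults.
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits     # hypothetical name
  namespace: default
spec:
  limits:
    - type: Container
      default:             # applied as the container's limits
        cpu: 500m
        memory: 256Mi
      defaultRequest:      # applied as the container's requests
        cpu: 100m
        memory: 128Mi
```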
[control: Secret/ETCD encryption enabled - https://hub.armo.cloud/docs/c-0066] skipped 😕
Description: All Kubernetes Secrets are stored primarily in etcd therefore it is important to encrypt it.
FRAMEWORK NSA


You can see the results in a user-friendly UI, choose your preferred compliance framework, check risk results history and trends, manage exceptions, get remediation recommendations and much more by registering here: https://portal.armo.cloud/cli-signup 

+-----------------------------------------------------------------------+------------------+--------------------+---------------+--------------+
|                             CONTROL NAME                              | FAILED RESOURCES | EXCLUDED RESOURCES | ALL RESOURCES | % RISK-SCORE |
+-----------------------------------------------------------------------+------------------+--------------------+---------------+--------------+
| Allow privilege escalation                                            |        1         |         0          |       1       |     100%     |
| Allowed hostPath                                                      |        0         |         0          |       1       |      0%      |
| Applications credentials in configuration files                       |        0         |         0          |       3       |      0%      |
| Audit logs enabled                                                    |        0         |         0          |       0       |   skipped    |
| Automatic mapping of service account                                  |        4         |         0          |       4       |     100%     |
... skipping 130 lines ...
Sep  7 10:52:26.790: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-e2e-2io6mt-vmss-mp-0

Sep  7 10:52:27.200: INFO: Collecting logs for Linux node win-p-win000001 in cluster capz-e2e-2io6mt-vmss in namespace capz-e2e-2io6mt

Sep  7 10:53:47.559: INFO: Collecting boot logs for VMSS instance 1 of scale set capz-e2e-2io6mt-vmss-mp-0

Failed to get logs for machine pool capz-e2e-2io6mt-vmss-mp-0, cluster capz-e2e-2io6mt/capz-e2e-2io6mt-vmss: [[running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1], Unable to collect VMSS Boot Diagnostic logs: failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="NotFound" Message="The entity was not found in this Azure location.", [running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1]]
Sep  7 10:53:48.279: INFO: Collecting logs for Windows node win-p-win000000 in cluster capz-e2e-2io6mt-vmss in namespace capz-e2e-2io6mt

Sep  7 10:54:12.750: INFO: Collecting boot logs for VMSS instance 0 of scale set win-p-win

Sep  7 10:54:13.113: INFO: Collecting logs for Windows node win-p-win000001 in cluster capz-e2e-2io6mt-vmss in namespace capz-e2e-2io6mt

Sep  7 10:58:48.265: INFO: Collecting boot logs for VMSS instance 1 of scale set win-p-win

Failed to get logs for machine pool capz-e2e-2io6mt-vmss-mp-win, cluster capz-e2e-2io6mt/capz-e2e-2io6mt-vmss: [dialing from control plane to target node at win-p-win000001: ssh: rejected: connect failed (Temporary failure in name resolution), Unable to collect VMSS Boot Diagnostic logs: failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="NotFound" Message="The entity was not found in this Azure location."]
STEP: Dumping workload cluster capz-e2e-2io6mt/capz-e2e-2io6mt-vmss kube-system pod logs
STEP: Fetching kube-system pod logs took 535.44479ms
STEP: Dumping workload cluster capz-e2e-2io6mt/capz-e2e-2io6mt-vmss Azure activity log
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-2io6mt-vmss-control-plane-jj87f, container etcd
STEP: Collecting events for Pod kube-system/kube-controller-manager-capz-e2e-2io6mt-vmss-control-plane-jj87f
STEP: Creating log watcher for controller kube-system/calico-node-fsvbg, container calico-node
STEP: Collecting events for Pod kube-system/calico-kube-controllers-969cf87c4-kzct7
STEP: failed to find events of Pod "kube-controller-manager-capz-e2e-2io6mt-vmss-control-plane-jj87f"
STEP: Collecting events for Pod kube-system/etcd-capz-e2e-2io6mt-vmss-control-plane-jj87f
STEP: failed to find events of Pod "etcd-capz-e2e-2io6mt-vmss-control-plane-jj87f"
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-2io6mt-vmss-control-plane-jj87f, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-969cf87c4-kzct7, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/kube-proxy-mzlrq
STEP: Collecting events for Pod kube-system/calico-node-fsvbg
STEP: Collecting events for Pod kube-system/coredns-78fcd69978-dbq6z
STEP: Creating log watcher for controller kube-system/calico-node-4tbfq, container calico-node
STEP: Collecting events for Pod kube-system/kube-apiserver-capz-e2e-2io6mt-vmss-control-plane-jj87f
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-tfh9p, container coredns
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-dbq6z, container coredns
STEP: failed to find events of Pod "kube-apiserver-capz-e2e-2io6mt-vmss-control-plane-jj87f"
STEP: Collecting events for Pod kube-system/calico-node-4tbfq
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-2io6mt-vmss-control-plane-jj87f, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-jfqkm, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-jfqkm
STEP: Collecting events for Pod kube-system/coredns-78fcd69978-tfh9p
STEP: Creating log watcher for controller kube-system/kube-proxy-mzlrq, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-scheduler-capz-e2e-2io6mt-vmss-control-plane-jj87f
STEP: failed to find events of Pod "kube-scheduler-capz-e2e-2io6mt-vmss-control-plane-jj87f"
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-2io6mt-vmss-control-plane-jj87f, container kube-scheduler
STEP: Fetching activity logs took 2.652151144s
STEP: Dumping all the Cluster API resources in the "capz-e2e-2io6mt" namespace
STEP: Deleting all clusters in the capz-e2e-2io6mt namespace
STEP: Deleting cluster capz-e2e-2io6mt-vmss
INFO: Waiting for the Cluster capz-e2e-2io6mt/capz-e2e-2io6mt-vmss to be deleted
STEP: Waiting for cluster capz-e2e-2io6mt-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-mzlrq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-2io6mt-vmss-control-plane-jj87f, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-tfh9p, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-4tbfq, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-fsvbg, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-2io6mt-vmss-control-plane-jj87f, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-dbq6z, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-969cf87c4-kzct7, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-jfqkm, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-2io6mt-vmss-control-plane-jj87f, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-2io6mt-vmss-control-plane-jj87f, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-2io6mt
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes" ran for 1h9m16s on Ginkgo node 3 of 3

... skipping 3 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:44
  Creating a VMSS cluster [REQUIRED]
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:309
    with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:310
------------------------------
W0907 10:45:54.649063   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 10:45:54.649200   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
W0907 10:46:33.968118   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 10:46:33.968254   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
W0907 10:47:11.717072   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 10:47:11.717208   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
W0907 10:47:54.714316   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 10:47:54.714449   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
W0907 10:48:43.317653   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 10:48:43.317780   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
W0907 10:49:35.756820   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 10:49:35.756976   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
W0907 10:50:31.601435   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 10:50:31.601549   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
W0907 10:51:08.465750   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 10:51:08.465891   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
W0907 10:51:45.753934   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 10:51:45.754053   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
W0907 10:52:20.043058   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 10:52:20.043145   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
W0907 10:52:57.018674   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 10:52:57.018799   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
W0907 10:53:32.182327   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 10:53:32.182481   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
W0907 10:54:19.818613   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 10:54:19.818720   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
W0907 10:55:02.786484   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 10:55:02.786605   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
W0907 10:55:33.417968   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 10:55:33.418099   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
W0907 10:56:26.146023   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 10:56:26.146128   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
W0907 10:57:19.317294   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 10:57:19.317396   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
W0907 10:58:08.374352   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 10:58:08.374466   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
W0907 10:58:44.789361   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 10:58:44.789471   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
W0907 10:59:23.456991   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 10:59:23.457100   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
W0907 11:00:17.593795   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 11:00:17.593926   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
W0907 11:01:14.243139   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 11:01:14.243276   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
W0907 11:01:56.894994   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 11:01:56.895138   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
W0907 11:02:42.748735   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 11:02:42.748820   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
W0907 11:03:27.124621   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 11:03:27.124762   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
W0907 11:04:20.830224   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 11:04:20.830329   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
W0907 11:04:58.360885   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 11:04:58.361039   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
W0907 11:05:29.028757   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 11:05:29.028868   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
W0907 11:06:28.130088   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 11:06:28.130199   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
W0907 11:07:25.188506   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 11:07:25.188629   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
W0907 11:08:14.398040   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 11:08:14.398166   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
W0907 11:08:53.862348   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 11:08:53.862443   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
W0907 11:09:48.100987   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 11:09:48.101126   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
W0907 11:10:29.861966   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 11:10:29.862073   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
W0907 11:11:23.922387   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 11:11:23.922526   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
W0907 11:11:55.758168   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 11:11:55.758274   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
W0907 11:12:55.164152   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 11:12:55.164264   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
W0907 11:13:35.991479   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 11:13:35.991603   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
W0907 11:14:14.479080   30604 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0907 11:14:14.479174   30604 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-epk2qe/events?resourceVersion=7784": dial tcp: lookup capz-e2e-epk2qe-public-custom-vnet-2d96af77.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 32 lines (repeated reflector list/watch failures: same DNS lookup error) ...
STEP: Tearing down the management cluster


Ran 4 of 23 Specs in 5504.155 seconds
SUCCESS! -- 4 Passed | 0 Failed | 0 Pending | 19 Skipped


Ginkgo ran 1 suite in 1h33m32.072613333s
Test Suite Passed
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
================ REDACTING LOGS ================
... skipping 10 lines ...