Result: FAILURE
Tests: 1 failed / 7 succeeded
Started: 2022-05-12 19:42
Elapsed: 1h28m
Revision: release-0.5

Test Failures


capz-e2e Workload cluster creation Creating a private cluster Creates a public management cluster in the same vnet (20m42s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\sprivate\scluster\sCreates\sa\spublic\smanagement\scluster\sin\sthe\ssame\svnet$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:141
Expected success, but got an error:
    <*errors.withStack | 0xc00082d860>: {
        error: <*exec.ExitError | 0xc0004cc620>{
            ProcessState: {
                pid: 28114,
                status: 256,
                rusage: {
                    Utime: {Sec: 0, Usec: 510195},
                    Stime: {Sec: 0, Usec: 359039},
                    Maxrss: 105900,
                    Ixrss: 0,
                    Idrss: 0,
                    Isrss: 0,
                    Minflt: 14110,
                    Majflt: 0,
                    Nswap: 0,
                    Inblock: 0,
                    Oublock: 25392,
                    Msgsnd: 0,
                    Msgrcv: 0,
                    Nsignals: 0,
                    Nvcsw: 4490,
                    Nivcsw: 552,
                },
            },
            Stderr: nil,
        },
        stack: [0x1819e9e, 0x181a565, 0x19839b7, 0x1b3c528, 0x1c9d968, 0x1cbebcc, 0x813b23, 0x82154a, 0x1cbf2db, 0x7fc603, 0x7fc21c, 0x7fb547, 0x8024ef, 0x801b92, 0x811491, 0x810fa7, 0x810797, 0x812ea6, 0x820bd8, 0x820916, 0x1cae6ba, 0x529ce5, 0x474781],
    }
    exit status 1
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.8-0.20220215165403-0234afe87ffe/framework/clusterctl/clusterctl_helpers.go:272
				
stdout/stderr: junit.e2e_suite.1.xml
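The apply failure above corresponds to the "failed calling webhook" errors further down in build-log.txt (default.azurecluster / default.azuremachinetemplate while applying the "private" template). A minimal triage sketch against the management cluster, assuming kubectl access and the default capz-system install namespace (neither is shown in this log):

    # confirm the CAPZ webhook configurations are registered
    kubectl get validatingwebhookconfiguration,mutatingwebhookconfiguration | grep azure
    # confirm the controller serving the webhooks is running and reachable
    kubectl -n capz-system get pods,svc
    kubectl -n capz-system logs deploy/capz-controller-manager --tail=100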



Passed Tests: 7

Skipped Tests: 14

Error lines from build-log.txt

... skipping 432 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:269

INFO: "With ipv6 worker node" started at Thu, 12 May 2022 19:49:32 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-893d5c" for hosting the cluster
May 12 19:49:32.881: INFO: starting to create namespace for hosting the "capz-e2e-893d5c" test spec
2022/05/12 19:49:32 failed trying to get namespace (capz-e2e-893d5c):namespaces "capz-e2e-893d5c" not found
INFO: Creating namespace capz-e2e-893d5c
INFO: Creating event watcher for namespace "capz-e2e-893d5c"
May 12 19:49:32.935: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-893d5c-ipv6
INFO: Creating the workload cluster with name "capz-e2e-893d5c-ipv6" using the "ipv6" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
STEP: Fetching activity logs took 1.064538338s
STEP: Dumping all the Cluster API resources in the "capz-e2e-893d5c" namespace
STEP: Deleting all clusters in the capz-e2e-893d5c namespace
STEP: Deleting cluster capz-e2e-893d5c-ipv6
INFO: Waiting for the Cluster capz-e2e-893d5c/capz-e2e-893d5c-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-893d5c-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-jt6gm, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-893d5c-ipv6-control-plane-nqdjh, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-893d5c-ipv6-control-plane-bjd8n, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-893d5c-ipv6-control-plane-nqdjh, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-cdp7x, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-893d5c-ipv6-control-plane-bjd8n, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-lgzm8, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-l9lw2, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-893d5c-ipv6-control-plane-nqdjh, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-893d5c-ipv6-control-plane-bjd8n, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-893d5c-ipv6-control-plane-bjd8n, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-893d5c-ipv6-control-plane-nqdjh, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-5bxmz, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-2fqnf, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5hcz5, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-8xbl4, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-46n6b, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-893d5c
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 18m16s on Ginkgo node 2 of 3

... skipping 10 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:141

INFO: "Creates a public management cluster in the same vnet" started at Thu, 12 May 2022 19:49:32 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-hwpo9y" for hosting the cluster
May 12 19:49:32.830: INFO: starting to create namespace for hosting the "capz-e2e-hwpo9y" test spec
2022/05/12 19:49:32 failed trying to get namespace (capz-e2e-hwpo9y):namespaces "capz-e2e-hwpo9y" not found
INFO: Creating namespace capz-e2e-hwpo9y
INFO: Creating event watcher for namespace "capz-e2e-hwpo9y"
May 12 19:49:32.858: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-hwpo9y-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 49 lines ...
STEP: Ensure public API server is stable before creating private cluster
STEP: Creating a private workload cluster
INFO: Creating the workload cluster with name "capz-e2e-kbsi7u-private" using the "private" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster capz-e2e-kbsi7u-private --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 3 --worker-machine-count 1 --flavor private
INFO: Applying the cluster template yaml to the cluster
Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "default.azurecluster.infrastructure.cluster.x-k8s.io": failed to call webhook: the server could not find the requested resource
Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "default.azuremachinetemplate.infrastructure.cluster.x-k8s.io": failed to call webhook: the server could not find the requested resource
Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "default.azuremachinetemplate.infrastructure.cluster.x-k8s.io": failed to call webhook: the server could not find the requested resource

STEP: Dumping logs from the "capz-e2e-hwpo9y-public-custom-vnet" workload cluster
STEP: Dumping workload cluster capz-e2e-hwpo9y/capz-e2e-hwpo9y-public-custom-vnet logs
May 12 19:57:26.501: INFO: INFO: Collecting logs for node capz-e2e-hwpo9y-public-custom-vnet-control-plane-z5qd4 in cluster capz-e2e-hwpo9y-public-custom-vnet in namespace capz-e2e-hwpo9y

May 12 19:57:33.778: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-hwpo9y-public-custom-vnet-control-plane-z5qd4
... skipping 19 lines ...
STEP: Fetching activity logs took 549.231983ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-hwpo9y" namespace
STEP: Deleting all clusters in the capz-e2e-hwpo9y namespace
STEP: Deleting cluster capz-e2e-hwpo9y-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-hwpo9y/capz-e2e-hwpo9y-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-hwpo9y-public-custom-vnet to be deleted
W0512 20:02:48.154568   24162 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I0512 20:03:19.069393   24162 trace.go:205] Trace[686818063]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (12-May-2022 20:02:49.068) (total time: 30001ms):
Trace[686818063]: [30.001212001s] [30.001212001s] END
E0512 20:03:19.069466   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp 20.23.28.182:6443: i/o timeout
I0512 20:03:52.186376   24162 trace.go:205] Trace[562546104]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (12-May-2022 20:03:22.185) (total time: 30001ms):
Trace[562546104]: [30.001185376s] [30.001185376s] END
E0512 20:03:52.186437   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp 20.23.28.182:6443: i/o timeout
I0512 20:04:27.879082   24162 trace.go:205] Trace[1447930670]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (12-May-2022 20:03:57.878) (total time: 30000ms):
Trace[1447930670]: [30.000701309s] [30.000701309s] END
E0512 20:04:27.879147   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp 20.23.28.182:6443: i/o timeout
I0512 20:05:05.874334   24162 trace.go:205] Trace[565873539]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (12-May-2022 20:04:35.872) (total time: 30001ms):
Trace[565873539]: [30.001385551s] [30.001385551s] END
E0512 20:05:05.874413   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp 20.23.28.182:6443: i/o timeout
I0512 20:05:55.114649   24162 trace.go:205] Trace[1207448821]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (12-May-2022 20:05:25.113) (total time: 30001ms):
Trace[1207448821]: [30.001582398s] [30.001582398s] END
E0512 20:05:55.114727   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp 20.23.28.182:6443: i/o timeout
I0512 20:07:16.263808   24162 trace.go:205] Trace[311411285]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (12-May-2022 20:06:46.262) (total time: 30000ms):
Trace[311411285]: [30.000983005s] [30.000983005s] END
E0512 20:07:16.263885   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp 20.23.28.182:6443: i/o timeout
E0512 20:08:03.415807   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-hwpo9y
STEP: Running additional cleanup for the "create-workload-cluster" test spec
May 12 20:08:06.747: INFO: deleting an existing virtual network "custom-vnet"
May 12 20:08:18.620: INFO: deleting an existing route table "node-routetable"
May 12 20:08:21.304: INFO: deleting an existing network security group "node-nsg"
May 12 20:08:31.938: INFO: deleting an existing network security group "control-plane-nsg"
May 12 20:08:42.862: INFO: verifying the existing resource group "capz-e2e-hwpo9y-public-custom-vnet" is empty
May 12 20:08:43.007: INFO: deleting the existing resource group "capz-e2e-hwpo9y-public-custom-vnet"
E0512 20:09:01.865575   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:09:33.327824   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E0512 20:10:04.251885   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 20m42s on Ginkgo node 1 of 3


• Failure [1242.448 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating a private cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:140
    Creates a public management cluster in the same vnet [It]
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:141

    Expected success, but got an error:
        <*errors.withStack | 0xc00082d860>: {
            error: <*exec.ExitError | 0xc0004cc620>{
                ProcessState: {
                    pid: 28114,
                    status: 256,
                    rusage: {
                        Utime: {Sec: 0, Usec: 510195},
                        Stime: {Sec: 0, Usec: 359039},
... skipping 69 lines ...
  with a single control plane node and an AzureMachinePool with 2 nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:315

INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" started at Thu, 12 May 2022 20:07:48 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-fz4cpw" for hosting the cluster
May 12 20:07:48.516: INFO: starting to create namespace for hosting the "capz-e2e-fz4cpw" test spec
2022/05/12 20:07:48 failed trying to get namespace (capz-e2e-fz4cpw):namespaces "capz-e2e-fz4cpw" not found
INFO: Creating namespace capz-e2e-fz4cpw
INFO: Creating event watcher for namespace "capz-e2e-fz4cpw"
May 12 20:07:48.552: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-fz4cpw-vmss
INFO: Creating the workload cluster with name "capz-e2e-fz4cpw-vmss" using the "machine-pool" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 128 lines ...
  with a 1 control plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:419

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Thu, 12 May 2022 20:10:15 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-8wd6j8" for hosting the cluster
May 12 20:10:15.282: INFO: starting to create namespace for hosting the "capz-e2e-8wd6j8" test spec
2022/05/12 20:10:15 failed trying to get namespace (capz-e2e-8wd6j8):namespaces "capz-e2e-8wd6j8" not found
INFO: Creating namespace capz-e2e-8wd6j8
INFO: Creating event watcher for namespace "capz-e2e-8wd6j8"
May 12 20:10:15.328: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-8wd6j8-oot
INFO: Creating the workload cluster with name "capz-e2e-8wd6j8-oot" using the "external-cloud-provider" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 13 lines ...
configmap/cloud-node-manager-addon created
clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-8wd6j8-oot-calico created
configmap/cni-capz-e2e-8wd6j8-oot-calico created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E0512 20:10:56.617636   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by capz-e2e-8wd6j8/capz-e2e-8wd6j8-oot-control-plane to be provisioned
STEP: Waiting for one control plane node to exist
E0512 20:11:42.950804   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:12:33.317772   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:13:24.697096   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:13:55.235972   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane capz-e2e-8wd6j8/capz-e2e-8wd6j8-oot-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
E0512 20:14:41.685892   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:15:15.529522   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for the machine pools to be provisioned
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/webfultz2 to be available
May 12 20:15:37.018: INFO: starting to wait for deployment to become available
E0512 20:16:01.303831   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
May 12 20:16:17.599: INFO: Deployment default/webfultz2 is now available, took 40.580452821s
STEP: creating an internal Load Balancer service
May 12 20:16:17.599: INFO: starting to create an internal Load Balancer service
STEP: waiting for service default/webfultz2-ilb to be available
May 12 20:16:17.743: INFO: waiting for service default/webfultz2-ilb to be available
E0512 20:16:42.421904   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
May 12 20:17:28.626: INFO: service default/webfultz2-ilb is available, took 1m10.883047209s
STEP: connecting to the internal LB service from a curl pod
May 12 20:17:28.735: INFO: starting to create a curl to ilb job
E0512 20:17:28.825197   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: waiting for job default/curl-to-ilb-job053me to be complete
May 12 20:17:28.856: INFO: waiting for job default/curl-to-ilb-job053me to be complete
May 12 20:17:39.075: INFO: job default/curl-to-ilb-job053me is complete, took 10.219297475s
STEP: deleting the ilb test resources
May 12 20:17:39.076: INFO: deleting the ilb service: webfultz2-ilb
May 12 20:17:39.212: INFO: deleting the ilb job: curl-to-ilb-job053me
STEP: creating an external Load Balancer service
May 12 20:17:39.322: INFO: starting to create an external Load Balancer service
STEP: waiting for service default/webfultz2-elb to be available
May 12 20:17:39.443: INFO: waiting for service default/webfultz2-elb to be available
E0512 20:18:05.154713   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:18:56.810395   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
May 12 20:19:30.750: INFO: service default/webfultz2-elb is available, took 1m51.307004431s
STEP: connecting to the external LB service from a curl pod
May 12 20:19:30.858: INFO: starting to create curl-to-elb job
STEP: waiting for job default/curl-to-elb-jobzagrc435tcu to be complete
May 12 20:19:30.971: INFO: waiting for job default/curl-to-elb-jobzagrc435tcu to be complete
May 12 20:19:41.188: INFO: job default/curl-to-elb-jobzagrc435tcu is complete, took 10.217486884s
... skipping 6 lines ...
May 12 20:19:41.566: INFO: starting to delete deployment webfultz2
May 12 20:19:41.675: INFO: starting to delete job curl-to-elb-jobzagrc435tcu
STEP: Dumping logs from the "capz-e2e-8wd6j8-oot" workload cluster
STEP: Dumping workload cluster capz-e2e-8wd6j8/capz-e2e-8wd6j8-oot logs
May 12 20:19:41.834: INFO: INFO: Collecting logs for node capz-e2e-8wd6j8-oot-control-plane-dknlc in cluster capz-e2e-8wd6j8-oot in namespace capz-e2e-8wd6j8

E0512 20:19:44.335541   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
May 12 20:19:58.414: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-8wd6j8-oot-control-plane-dknlc

May 12 20:19:59.674: INFO: INFO: Collecting logs for node capz-e2e-8wd6j8-oot-md-0-49gkh in cluster capz-e2e-8wd6j8-oot in namespace capz-e2e-8wd6j8

May 12 20:20:12.686: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-8wd6j8-oot-md-0-49gkh

... skipping 24 lines ...
STEP: Fetching activity logs took 550.659653ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-8wd6j8" namespace
STEP: Deleting all clusters in the capz-e2e-8wd6j8 namespace
STEP: Deleting cluster capz-e2e-8wd6j8-oot
INFO: Waiting for the Cluster capz-e2e-8wd6j8/capz-e2e-8wd6j8-oot to be deleted
STEP: Waiting for cluster capz-e2e-8wd6j8-oot to be deleted
E0512 20:20:38.835945   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:21:09.187077   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:21:50.776782   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:22:44.301950   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:23:32.750861   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:24:03.527183   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:24:45.354808   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:25:27.441476   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:26:02.363066   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:26:44.074135   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:27:30.721890   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:28:16.965896   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-8wd6j8
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E0512 20:29:07.248846   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 19m17s on Ginkgo node 1 of 3


• [SLOW TEST:1156.847 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  With 3 control-plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:203

INFO: "With 3 control-plane nodes and 2 worker nodes" started at Thu, 12 May 2022 19:49:32 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-5cghxw" for hosting the cluster
May 12 19:49:32.881: INFO: starting to create namespace for hosting the "capz-e2e-5cghxw" test spec
2022/05/12 19:49:32 failed trying to get namespace (capz-e2e-5cghxw):namespaces "capz-e2e-5cghxw" not found
INFO: Creating namespace capz-e2e-5cghxw
INFO: Creating event watcher for namespace "capz-e2e-5cghxw"
May 12 19:49:32.940: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-5cghxw-ha
INFO: Creating the workload cluster with name "capz-e2e-5cghxw-ha" using the "(default)" template (Kubernetes v1.22.1, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 59 lines ...
STEP: waiting for job default/curl-to-elb-jobblt3ft92a50 to be complete
May 12 19:59:49.737: INFO: waiting for job default/curl-to-elb-jobblt3ft92a50 to be complete
May 12 19:59:59.959: INFO: job default/curl-to-elb-jobblt3ft92a50 is complete, took 10.222324495s
STEP: connecting directly to the external LB service
May 12 19:59:59.959: INFO: starting attempts to connect directly to the external LB service
2022/05/12 19:59:59 [DEBUG] GET http://20.23.31.124
2022/05/12 20:00:29 [ERR] GET http://20.23.31.124 request failed: Get "http://20.23.31.124": dial tcp 20.23.31.124:80: i/o timeout
2022/05/12 20:00:29 [DEBUG] GET http://20.23.31.124: retrying in 1s (4 left)
May 12 20:00:46.433: INFO: successfully connected to the external LB service
STEP: deleting the test resources
May 12 20:00:46.433: INFO: starting to delete external LB service web1ft8c5-elb
May 12 20:00:46.595: INFO: starting to delete deployment web1ft8c5
May 12 20:00:46.720: INFO: starting to delete job curl-to-elb-jobblt3ft92a50
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
May 12 20:00:46.889: INFO: starting to create dev deployment namespace
2022/05/12 20:00:47 failed trying to get namespace (development):namespaces "development" not found
2022/05/12 20:00:47 namespace development does not exist, creating...
STEP: Creating production namespace
May 12 20:00:47.121: INFO: starting to create prod deployment namespace
2022/05/12 20:00:47 failed trying to get namespace (production):namespaces "production" not found
2022/05/12 20:00:47 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
May 12 20:00:47.351: INFO: starting to create frontend-prod deployments
May 12 20:00:47.464: INFO: starting to create frontend-dev deployments
May 12 20:00:47.583: INFO: starting to create backend deployments
May 12 20:00:47.696: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
May 12 20:01:14.497: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.136.69 port 80: Connection timed out

STEP: Cleaning up after ourselves
May 12 20:03:25.862: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
STEP: Applying a network policy to deny egress access in development namespace
May 12 20:03:26.273: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.136.69 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.136.69 port 80: Connection timed out

STEP: Cleaning up after ourselves
May 12 20:07:48.004: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
May 12 20:07:48.412: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.136.70 port 80: Connection timed out

STEP: Cleaning up after ourselves
May 12 20:10:01.125: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
May 12 20:10:01.524: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.136.67 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.136.70 port 80: Connection timed out

STEP: Cleaning up after ourselves
May 12 20:14:25.316: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
May 12 20:14:25.725: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.136.69 port 80: Connection timed out

STEP: Cleaning up after ourselves
May 12 20:16:38.437: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
May 12 20:16:38.840: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.136.69 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: Dumping logs from the "capz-e2e-5cghxw-ha" workload cluster
STEP: Dumping workload cluster capz-e2e-5cghxw/capz-e2e-5cghxw-ha logs
May 12 20:18:50.362: INFO: INFO: Collecting logs for node capz-e2e-5cghxw-ha-control-plane-s7fll in cluster capz-e2e-5cghxw-ha in namespace capz-e2e-5cghxw

May 12 20:19:01.685: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-5cghxw-ha-control-plane-s7fll
... skipping 39 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-vtl47, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-r54ws, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-sch9w, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-z7klc, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-5htd8, container coredns
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-8nmn7, container coredns
STEP: Got error while iterating over activity logs for resource group capz-e2e-5cghxw-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000576465s
STEP: Dumping all the Cluster API resources in the "capz-e2e-5cghxw" namespace
STEP: Deleting all clusters in the capz-e2e-5cghxw namespace
STEP: Deleting cluster capz-e2e-5cghxw-ha
INFO: Waiting for the Cluster capz-e2e-5cghxw/capz-e2e-5cghxw-ha to be deleted
STEP: Waiting for cluster capz-e2e-5cghxw-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-z7klc, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-5cghxw-ha-control-plane-7rkfz, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-5cghxw-ha-control-plane-7rkfz, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-5cghxw-ha-control-plane-7rkfz, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vtl47, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-sch9w, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-5cghxw-ha-control-plane-888qv, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-4rm77, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-5cghxw-ha-control-plane-s7fll, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-5cghxw-ha-control-plane-888qv, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-jj9wg, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-r54ws, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-5cghxw-ha-control-plane-s7fll, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-5cghxw-ha-control-plane-s7fll, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-8nmn7, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-mmfd8, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-87z2x, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-5cghxw-ha-control-plane-s7fll, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-5htd8, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-5cghxw-ha-control-plane-7rkfz, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-5cghxw-ha-control-plane-888qv, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-5cghxw-ha-control-plane-888qv, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-hcc4s, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-5cghxw
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 worker nodes" ran for 43m57s on Ginkgo node 3 of 3

... skipping 8 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454

INFO: "with a single control plane node and 1 node" started at Thu, 12 May 2022 20:26:26 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-8mzmbb" for hosting the cluster
May 12 20:26:26.393: INFO: starting to create namespace for hosting the "capz-e2e-8mzmbb" test spec
2022/05/12 20:26:26 failed trying to get namespace (capz-e2e-8mzmbb):namespaces "capz-e2e-8mzmbb" not found
INFO: Creating namespace capz-e2e-8mzmbb
INFO: Creating event watcher for namespace "capz-e2e-8mzmbb"
May 12 20:26:26.437: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-8mzmbb-aks
INFO: Creating the workload cluster with name "capz-e2e-8mzmbb-aks" using the "aks-multi-tenancy" template (Kubernetes v1.22.6, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 34 lines ...
STEP: Dumping logs from the "capz-e2e-8mzmbb-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-8mzmbb/capz-e2e-8mzmbb-aks logs
May 12 20:35:47.015: INFO: INFO: Collecting logs for node aks-agentpool1-42202984-vmss000000 in cluster capz-e2e-8mzmbb-aks in namespace capz-e2e-8mzmbb

May 12 20:37:56.748: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool0, cluster capz-e2e-8mzmbb/capz-e2e-8mzmbb-aks: [dialing public load balancer at capz-e2e-8mzmbb-aks-5627ae3a.hcp.westeurope.azmk8s.io: dial tcp 20.76.50.198:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
May 12 20:37:57.270: INFO: INFO: Collecting logs for node aks-agentpool1-42202984-vmss000000 in cluster capz-e2e-8mzmbb-aks in namespace capz-e2e-8mzmbb

May 12 20:40:07.820: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool1, cluster capz-e2e-8mzmbb/capz-e2e-8mzmbb-aks: [dialing public load balancer at capz-e2e-8mzmbb-aks-5627ae3a.hcp.westeurope.azmk8s.io: dial tcp 20.76.50.198:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
STEP: Dumping workload cluster capz-e2e-8mzmbb/capz-e2e-8mzmbb-aks kube-system pod logs
STEP: Fetching kube-system pod logs took 1.051559303s
STEP: Dumping workload cluster capz-e2e-8mzmbb/capz-e2e-8mzmbb-aks Azure activity log
STEP: Creating log watcher for controller kube-system/azure-ip-masq-agent-7ddks, container azure-ip-masq-agent
STEP: Creating log watcher for controller kube-system/csi-azurefile-node-kkgfm, container node-driver-registrar
STEP: Creating log watcher for controller kube-system/csi-azuredisk-node-2c76r, container azuredisk
... skipping 44 lines ...
  with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543

INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Thu, 12 May 2022 20:33:29 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-z8i00k" for hosting the cluster
May 12 20:33:29.586: INFO: starting to create namespace for hosting the "capz-e2e-z8i00k" test spec
2022/05/12 20:33:29 failed trying to get namespace (capz-e2e-z8i00k):namespaces "capz-e2e-z8i00k" not found
INFO: Creating namespace capz-e2e-z8i00k
INFO: Creating event watcher for namespace "capz-e2e-z8i00k"
May 12 20:33:29.626: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-z8i00k-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-z8i00k-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 123 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-xwl9c, container kube-proxy
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-z8i00k-win-vmss-control-plane-kslfq, container etcd
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-z8i00k-win-vmss-control-plane-kslfq, container kube-apiserver
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-nms55, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-xdgff, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-z8i00k-win-vmss-control-plane-kslfq, container kube-scheduler
STEP: Got error while iterating over activity logs for resource group capz-e2e-z8i00k-win-vmss: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001100592s
STEP: Dumping all the Cluster API resources in the "capz-e2e-z8i00k" namespace
STEP: Deleting all clusters in the capz-e2e-z8i00k namespace
STEP: Deleting cluster capz-e2e-z8i00k-win-vmss
INFO: Waiting for the Cluster capz-e2e-z8i00k/capz-e2e-z8i00k-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-z8i00k-win-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-dg7q8, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-xwl9c, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-z8i00k
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 31m30s on Ginkgo node 3 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Thu, 12 May 2022 20:29:32 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-kloipm" for hosting the cluster
May 12 20:29:32.132: INFO: starting to create namespace for hosting the "capz-e2e-kloipm" test spec
2022/05/12 20:29:32 failed trying to get namespace (capz-e2e-kloipm):namespaces "capz-e2e-kloipm" not found
INFO: Creating namespace capz-e2e-kloipm
INFO: Creating event watcher for namespace "capz-e2e-kloipm"
May 12 20:29:32.171: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-kloipm-win-ha
INFO: Creating the workload cluster with name "capz-e2e-kloipm-win-ha" using the "windows" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 12 lines ...
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created
clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-kloipm-win-ha-flannel created
configmap/cni-capz-e2e-kloipm-win-ha-flannel created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E0512 20:29:50.610110   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:30:25.924893   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:31:12.004719   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
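These repeated reflector.go errors are most likely not produced by the capz-e2e-kloipm spec itself: they appear to come from the event watcher created for the earlier capz-e2e-hwpo9y spec, whose API endpoint DNS name stops resolving once that cluster is torn down, so the watcher retries and logs "no such host" for the rest of the run. A minimal client-go sketch of that watch pattern, assuming client-go v0.21.x as pinned in the paths above; the kubeconfig path and handler are illustrative, not the e2e framework's actual code:

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig for the (eventually deleted) workload cluster; path is hypothetical.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/capz-e2e-hwpo9y.kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Watch Events in the spec's namespace. The informer's reflector retries
	// list/watch forever, so each failed DNS lookup shows up as an E... reflector.go line.
	lw := cache.NewListWatchFromClient(cs.CoreV1().RESTClient(), "events", "capz-e2e-hwpo9y", fields.Everything())
	_, controller := cache.NewInformer(lw, &corev1.Event{}, 0, cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			e := obj.(*corev1.Event)
			fmt.Printf("%s %s/%s: %s\n", e.LastTimestamp.Format(time.RFC3339), e.Namespace, e.Name, e.Message)
		},
	})

	stop := make(chan struct{})
	go controller.Run(stop)
	time.Sleep(2 * time.Minute) // watch for a bounded time in this sketch
	close(stop)
}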
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by capz-e2e-kloipm/capz-e2e-kloipm-win-ha-control-plane to be provisioned
STEP: Waiting for one control plane node to exist
E0512 20:31:43.615288   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:32:27.812129   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:33:06.857357   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be ready
INFO: Waiting for the remaining control plane machines managed by capz-e2e-kloipm/capz-e2e-kloipm-win-ha-control-plane to be provisioned
STEP: Waiting for all control plane nodes to exist
E0512 20:33:57.835810   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:34:42.129860   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:35:38.801665   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:36:12.933564   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:36:57.834260   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:37:43.621974   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane capz-e2e-kloipm/capz-e2e-kloipm-win-ha-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Waiting for the workload nodes to exist
INFO: Waiting for the machine pools to be provisioned
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/weby7pcy0 to be available
May 12 20:38:23.914: INFO: starting to wait for deployment to become available
E0512 20:38:31.115126   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
May 12 20:38:44.276: INFO: Deployment default/weby7pcy0 is now available, took 20.361911787s
STEP: creating an internal Load Balancer service
May 12 20:38:44.276: INFO: starting to create an internal Load Balancer service
STEP: waiting for service default/weby7pcy0-ilb to be available
May 12 20:38:44.436: INFO: waiting for service default/weby7pcy0-ilb to be available
E0512 20:39:14.866222   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
May 12 20:39:55.341: INFO: service default/weby7pcy0-ilb is available, took 1m10.90431786s
STEP: connecting to the internal LB service from a curl pod
May 12 20:39:55.453: INFO: starting to create a curl to ilb job
STEP: waiting for job default/curl-to-ilb-job7s97q to be complete
May 12 20:39:55.582: INFO: waiting for job default/curl-to-ilb-job7s97q to be complete
May 12 20:40:05.808: INFO: job default/curl-to-ilb-job7s97q is complete, took 10.225637016s
STEP: deleting the ilb test resources
May 12 20:40:05.808: INFO: deleting the ilb service: weby7pcy0-ilb
May 12 20:40:05.976: INFO: deleting the ilb job: curl-to-ilb-job7s97q
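For reference, a hedged client-go sketch of what the "internal Load Balancer service" step above amounts to. This is not the capz e2e helper itself: the service name is taken from the log, while the kubeconfig path, selector label and ports are assumptions.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/capz-e2e-kloipm-win-ha.kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "weby7pcy0-ilb",
			Namespace: "default",
			Annotations: map[string]string{
				// This annotation tells the Azure cloud provider to create an
				// internal (VNet-only) load balancer instead of a public one.
				"service.beta.kubernetes.io/azure-load-balancer-internal": "true",
			},
		},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeLoadBalancer,
			Selector: map[string]string{"app": "weby7pcy0"}, // assumed pod label
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(8080), // assumed container port
			}},
		},
	}

	if _, err := cs.CoreV1().Services("default").Create(context.TODO(), svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// The test then polls the Service until status.loadBalancer.ingress is
	// populated, which corresponds to the ~1m10s "waiting for service ... to be
	// available" interval seen above.
}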
STEP: creating an external Load Balancer service
May 12 20:40:06.098: INFO: starting to create an external Load Balancer service
STEP: waiting for service default/weby7pcy0-elb to be available
May 12 20:40:06.253: INFO: waiting for service default/weby7pcy0-elb to be available
E0512 20:40:11.978975   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
May 12 20:40:26.592: INFO: service default/weby7pcy0-elb is available, took 20.339534726s
STEP: connecting to the external LB service from a curl pod
May 12 20:40:26.704: INFO: starting to create curl-to-elb job
STEP: waiting for job default/curl-to-elb-job6dkj363zl83 to be complete
May 12 20:40:26.825: INFO: waiting for job default/curl-to-elb-job6dkj363zl83 to be complete
May 12 20:40:37.049: INFO: job default/curl-to-elb-job6dkj363zl83 is complete, took 10.22486174s
... skipping 6 lines ...
May 12 20:40:37.431: INFO: starting to delete deployment weby7pcy0
May 12 20:40:37.551: INFO: starting to delete job curl-to-elb-job6dkj363zl83
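The curl-to-elb job used above follows the common pattern of a one-shot batch/v1 Job that curls the load balancer address from inside the cluster. A hedged Go sketch of that pattern; the image, target address, kubeconfig path and backoff limit are assumptions, not the framework's values:

package main

import (
	"context"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/capz-e2e-kloipm-win-ha.kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	backoff := int32(4)
	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "curl-to-elb-job6dkj363zl83", Namespace: "default"},
		Spec: batchv1.JobSpec{
			BackoffLimit: &backoff,
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:    "curl",
						Image:   "curlimages/curl:7.78.0",               // assumed image
						Command: []string{"curl", "-m", "5", "http://20.113.0.4"}, // assumed external IP from the Service status
					}},
				},
			},
		},
	}

	if _, err := cs.BatchV1().Jobs("default").Create(context.TODO(), job, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// The test then waits for the Job's "Complete" condition, the ~10s
	// "waiting for job ... to be complete" interval seen above.
}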
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web-windowsodhw01 to be available
May 12 20:40:37.937: INFO: starting to wait for deployment to become available
E0512 20:41:11.830878   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:41:53.405063   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
May 12 20:41:59.021: INFO: Deployment default/web-windowsodhw01 is now available, took 1m21.084201181s
STEP: creating an internal Load Balancer service
May 12 20:41:59.021: INFO: starting to create an internal Load Balancer service
STEP: waiting for service default/web-windowsodhw01-ilb to be available
May 12 20:41:59.177: INFO: waiting for service default/web-windowsodhw01-ilb to be available
May 12 20:42:09.403: INFO: service default/web-windowsodhw01-ilb is available, took 10.225977948s
... skipping 6 lines ...
May 12 20:42:19.857: INFO: deleting the ilb service: web-windowsodhw01-ilb
May 12 20:42:20.023: INFO: deleting the ilb job: curl-to-ilb-jobcc3cc
STEP: creating an external Load Balancer service
May 12 20:42:20.142: INFO: starting to create an external Load Balancer service
STEP: waiting for service default/web-windowsodhw01-elb to be available
May 12 20:42:20.295: INFO: waiting for service default/web-windowsodhw01-elb to be available
E0512 20:42:26.821141   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
May 12 20:42:50.749: INFO: service default/web-windowsodhw01-elb is available, took 30.454388211s
STEP: connecting to the external LB service from a curl pod
May 12 20:42:50.863: INFO: starting to create curl-to-elb job
STEP: waiting for job default/curl-to-elb-jobuaam0rtrenz to be complete
May 12 20:42:50.982: INFO: waiting for job default/curl-to-elb-jobuaam0rtrenz to be complete
May 12 20:43:01.207: INFO: job default/curl-to-elb-jobuaam0rtrenz is complete, took 10.224893034s
... skipping 6 lines ...
May 12 20:43:01.586: INFO: starting to delete deployment web-windowsodhw01
May 12 20:43:01.707: INFO: starting to delete job curl-to-elb-jobuaam0rtrenz
STEP: Dumping logs from the "capz-e2e-kloipm-win-ha" workload cluster
STEP: Dumping workload cluster capz-e2e-kloipm/capz-e2e-kloipm-win-ha logs
May 12 20:43:01.868: INFO: Collecting logs for node capz-e2e-kloipm-win-ha-control-plane-wjvzx in cluster capz-e2e-kloipm-win-ha in namespace capz-e2e-kloipm

E0512 20:43:13.702873   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
May 12 20:43:16.498: INFO: Collecting boot logs for AzureMachine capz-e2e-kloipm-win-ha-control-plane-wjvzx

May 12 20:43:17.907: INFO: Collecting logs for node capz-e2e-kloipm-win-ha-control-plane-hxq9t in cluster capz-e2e-kloipm-win-ha in namespace capz-e2e-kloipm

May 12 20:43:28.179: INFO: Collecting boot logs for AzureMachine capz-e2e-kloipm-win-ha-control-plane-hxq9t

... skipping 4 lines ...
May 12 20:43:38.079: INFO: Collecting logs for node capz-e2e-kloipm-win-ha-md-0-bs89t in cluster capz-e2e-kloipm-win-ha in namespace capz-e2e-kloipm

May 12 20:43:50.070: INFO: Collecting boot logs for AzureMachine capz-e2e-kloipm-win-ha-md-0-bs89t

May 12 20:43:50.493: INFO: Collecting logs for node 10.1.0.4 in cluster capz-e2e-kloipm-win-ha in namespace capz-e2e-kloipm

E0512 20:44:08.275697   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
May 12 20:44:38.491: INFO: Collecting boot logs for AzureMachine capz-e2e-kloipm-win-ha-md-win-vjcwx

STEP: Dumping workload cluster capz-e2e-kloipm/capz-e2e-kloipm-win-ha kube-system pod logs
STEP: Fetching kube-system pod logs took 901.841503ms
STEP: Creating log watcher for controller kube-system/kube-proxy-fxkck, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-kloipm-win-ha-control-plane-fh6jv, container kube-scheduler
... skipping 17 lines ...
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-kloipm-win-ha-control-plane-fh6jv, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-cwf8g, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-windows-amd64-vlk2d, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-kloipm-win-ha-control-plane-hxq9t, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-7tmlz, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-kloipm-win-ha-control-plane-wjvzx, container kube-scheduler
E0512 20:44:43.970206   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Got error while iterating over activity logs for resource group capz-e2e-kloipm-win-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000934035s
STEP: Dumping all the Cluster API resources in the "capz-e2e-kloipm" namespace
STEP: Deleting all clusters in the capz-e2e-kloipm namespace
STEP: Deleting cluster capz-e2e-kloipm-win-ha
INFO: Waiting for the Cluster capz-e2e-kloipm/capz-e2e-kloipm-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-kloipm-win-ha to be deleted
E0512 20:45:26.311476   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:46:23.304408   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:47:03.129481   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:47:40.759925   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:48:37.803403   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:49:34.868663   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:50:23.685074   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:50:58.997529   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:51:32.222025   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:52:05.475033   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:52:59.337790   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:53:45.138964   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:54:21.360566   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:55:00.377761   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:55:54.389397   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:56:28.983085   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:57:02.798596   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:57:42.870414   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-6mxlx, container kube-flannel: http2: server sent GOAWAY and closed the connection; LastStreamID=109, ErrCode=NO_ERROR, debug=""
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-kloipm-win-ha-control-plane-hxq9t, container kube-apiserver: http2: server sent GOAWAY and closed the connection; LastStreamID=109, ErrCode=NO_ERROR, debug=""
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-kloipm-win-ha-control-plane-hxq9t, container kube-controller-manager: http2: server sent GOAWAY and closed the connection; LastStreamID=109, ErrCode=NO_ERROR, debug=""
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-kzwgw, container kube-flannel: http2: server sent GOAWAY and closed the connection; LastStreamID=109, ErrCode=NO_ERROR, debug=""
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vrzw9, container kube-proxy: http2: server sent GOAWAY and closed the connection; LastStreamID=109, ErrCode=NO_ERROR, debug=""
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-kloipm-win-ha-control-plane-hxq9t, container kube-scheduler: http2: server sent GOAWAY and closed the connection; LastStreamID=109, ErrCode=NO_ERROR, debug=""
STEP: Got error while streaming logs for pod kube-system/kube-proxy-fxkck, container kube-proxy: http2: server sent GOAWAY and closed the connection; LastStreamID=109, ErrCode=NO_ERROR, debug=""
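Each "Creating log watcher for controller ..." step above follows a pod's logs over a long-lived stream; once the control-plane VMs are deleted during teardown, the API server closes those streams, which is what surfaces here as the http2 GOAWAY / "client connection lost" errors. A hedged client-go sketch of that log-following pattern; the pod name is taken from the log, the kubeconfig path is illustrative, and this is not the framework's actual watcher code:

package main

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/capz-e2e-kloipm-win-ha.kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Follow one controller pod's logs, as the "log watcher" steps do.
	req := cs.CoreV1().Pods("kube-system").GetLogs(
		"kube-apiserver-capz-e2e-kloipm-win-ha-control-plane-fh6jv",
		&corev1.PodLogOptions{Container: "kube-apiserver", Follow: true},
	)
	stream, err := req.Stream(context.TODO())
	if err != nil {
		panic(err)
	}
	defer stream.Close()

	// Copy until the stream ends; an abrupt API-server shutdown is reported here
	// as an error such as "http2: server sent GOAWAY and closed the connection".
	if _, err := io.Copy(os.Stdout, stream); err != nil {
		os.Stderr.WriteString("streaming logs ended: " + err.Error() + "\n")
	}
}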
E0512 20:58:31.453228   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:59:08.098140   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:59:41.608753   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 21:00:32.682335   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 21:01:30.615367   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 21:02:19.510673   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 21:03:07.027335   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 21:03:55.721264   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 21:04:27.345423   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 21:05:04.072398   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 21:05:42.108428   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 21:06:18.494488   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 21:06:54.469900   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 21:07:33.628853   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 21:08:19.580565   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-kloipm
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E0512 21:09:04.353970   24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 39m58s on Ginkgo node 1 of 3


• [SLOW TEST:2398.495 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 5 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Workload cluster creation Creating a private cluster [It] Creates a public management cluster in the same vnet 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.8-0.20220215165403-0234afe87ffe/framework/clusterctl/clusterctl_helpers.go:272

Ran 8 of 22 Specs in 4915.974 seconds
FAIL! -- 7 Passed | 1 Failed | 0 Pending | 14 Skipped


Ginkgo ran 1 suite in 1h23m19.32814767s
Test Suite Failed
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...