Recent runs | View in Spyglass
Result   | FAILURE
Tests    | 1 failed / 7 succeeded
Started  |
Elapsed  | 1h28m
Revision | release-0.5
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\sprivate\scluster\sCreates\sa\spublic\smanagement\scluster\sin\sthe\ssame\svnet$'
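The focus expression above is simply the full Ginkgo spec name with regex metacharacters escaped, spaces rewritten as `\s`, and a `$` end anchor so only this exact spec runs. As a sketch (this helper is hypothetical, not part of the repo), the encoding can be reproduced with Python 3.7+ `re.escape`:

```python
import re

def ginkgo_focus(test_name: str) -> str:
    """Build a --ginkgo.focus expression from a full Ginkgo spec name.

    re.escape (Python 3.7+) backslash-escapes regex metacharacters such as
    '-' and escapes each space as '\\ '; spaces are then rewritten to '\\s'
    and the expression is anchored so only the exact spec matches.
    """
    return re.escape(test_name).replace("\\ ", "\\s") + "$"

print(ginkgo_focus(
    "capz-e2e Workload cluster creation Creating a private cluster"
    " Creates a public management cluster in the same vnet"
))
```

Running this prints the same focus string passed to `--test_args` above.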
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:141
Expected success, but got an error:
    <*errors.withStack | 0xc00082d860>: {
        error: <*exec.ExitError | 0xc0004cc620>{
            ProcessState: {
                pid: 28114, status: 256,
                rusage: {
                    Utime: {Sec: 0, Usec: 510195}, Stime: {Sec: 0, Usec: 359039},
                    Maxrss: 105900, Ixrss: 0, Idrss: 0, Isrss: 0,
                    Minflt: 14110, Majflt: 0, Nswap: 0,
                    Inblock: 0, Oublock: 25392, Msgsnd: 0, Msgrcv: 0,
                    Nsignals: 0, Nvcsw: 4490, Nivcsw: 552,
                },
            },
            Stderr: nil,
        },
        stack: [0x1819e9e, 0x181a565, 0x19839b7, 0x1b3c528, 0x1c9d968, 0x1cbebcc, 0x813b23, 0x82154a, 0x1cbf2db, 0x7fc603, 0x7fc21c, 0x7fb547, 0x8024ef, 0x801b92, 0x811491, 0x810fa7, 0x810797, 0x812ea6, 0x820bd8, 0x820916, 0x1cae6ba, 0x529ce5, 0x474781],
    }
exit status 1
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.8-0.20220215165403-0234afe87ffe/framework/clusterctl/clusterctl_helpers.go:272
from junit.e2e_suite.1.xml
INFO: "Creates a public management cluster in the same vnet" started at Thu, 12 May 2022 19:49:32 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-hwpo9y" for hosting the cluster
May 12 19:49:32.830: INFO: starting to create namespace for hosting the "capz-e2e-hwpo9y" test spec
2022/05/12 19:49:32 failed trying to get namespace (capz-e2e-hwpo9y):namespaces "capz-e2e-hwpo9y" not found
INFO: Creating namespace capz-e2e-hwpo9y
INFO: Creating event watcher for namespace "capz-e2e-hwpo9y"
May 12 19:49:32.858: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-hwpo9y-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
STEP: creating a network security group
STEP: creating a node security group
STEP: creating a node routetable
STEP: creating a virtual network
INFO: Creating the workload cluster with name "capz-e2e-hwpo9y-public-custom-vnet" using the "custom-vnet" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster capz-e2e-hwpo9y-public-custom-vnet --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 1 --worker-machine-count 1 --flavor custom-vnet
INFO: Applying the cluster template yaml to the cluster
cluster.cluster.x-k8s.io/capz-e2e-hwpo9y-public-custom-vnet created
azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-hwpo9y-public-custom-vnet created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-hwpo9y-public-custom-vnet-control-plane created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-hwpo9y-public-custom-vnet-control-plane created
machinedeployment.cluster.x-k8s.io/capz-e2e-hwpo9y-public-custom-vnet-md-0 created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-hwpo9y-public-custom-vnet-md-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capz-e2e-hwpo9y-public-custom-vnet-md-0 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created
machinehealthcheck.cluster.x-k8s.io/capz-e2e-hwpo9y-public-custom-vnet-mhc-0 created
clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-hwpo9y-public-custom-vnet-calico created
configmap/cni-capz-e2e-hwpo9y-public-custom-vnet-calico created
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by capz-e2e-hwpo9y/capz-e2e-hwpo9y-public-custom-vnet-control-plane to be provisioned
STEP: Waiting for one control plane node to exist
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane capz-e2e-hwpo9y/capz-e2e-hwpo9y-public-custom-vnet-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
INFO: Waiting for the machine pools to be provisioned
STEP: checking that time synchronization is healthy on capz-e2e-hwpo9y-public-custom-vnet-control-plane-z5qd4
STEP: checking that time synchronization is healthy on capz-e2e-hwpo9y-public-custom-vnet-md-0-s2hrk
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating a namespace for hosting the azure-private-cluster test spec
May 12 19:54:36.785: INFO: starting to create namespace for hosting the azure-private-cluster test spec
INFO: Creating namespace capz-e2e-hwpo9y
INFO: Creating event watcher for namespace "capz-e2e-hwpo9y"
STEP: Initializing the workload cluster
INFO: clusterctl init --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure azure
INFO: Waiting for provider controllers to be running
STEP: Waiting for deployment capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager to be available
INFO: Creating log watcher for controller capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-75467796c5-87px7, container manager
STEP: Waiting for deployment capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager to be available
INFO: Creating log watcher for controller capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager, pod capi-kubeadm-control-plane-controller-manager-688b75d88d-pht9j, container manager
STEP: Waiting for deployment capi-system/capi-controller-manager to be available
INFO: Creating log watcher for controller capi-system/capi-controller-manager, pod capi-controller-manager-58757dd9b4-crqmd, container manager
STEP: Waiting for deployment capz-system/capz-controller-manager to be available
INFO: Creating log watcher for controller capz-system/capz-controller-manager, pod capz-controller-manager-7dcf488f96-t5r5r, container manager
STEP: Ensure public API server is stable before creating private cluster
STEP: Creating a private workload cluster
INFO: Creating the workload cluster with name "capz-e2e-kbsi7u-private" using the "private" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster capz-e2e-kbsi7u-private --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 3 --worker-machine-count 1 --flavor private
INFO: Applying the cluster template yaml to the cluster
Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "default.azurecluster.infrastructure.cluster.x-k8s.io": failed to call webhook: the server could not find the requested resource
Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "default.azuremachinetemplate.infrastructure.cluster.x-k8s.io": failed to call webhook: the server could not find the requested resource
Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "default.azuremachinetemplate.infrastructure.cluster.x-k8s.io": failed to call webhook: the server could not find the requested resource
STEP: Dumping logs from the "capz-e2e-hwpo9y-public-custom-vnet" workload cluster
STEP: Dumping workload cluster capz-e2e-hwpo9y/capz-e2e-hwpo9y-public-custom-vnet logs
May 12 19:57:26.501: INFO: INFO: Collecting logs for node capz-e2e-hwpo9y-public-custom-vnet-control-plane-z5qd4 in cluster capz-e2e-hwpo9y-public-custom-vnet in namespace capz-e2e-hwpo9y
May 12 19:57:33.778: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-hwpo9y-public-custom-vnet-control-plane-z5qd4
May 12 19:57:35.123: INFO: INFO: Collecting logs for node capz-e2e-hwpo9y-public-custom-vnet-md-0-s2hrk in cluster capz-e2e-hwpo9y-public-custom-vnet in namespace capz-e2e-hwpo9y
May 12 19:57:44.363: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-hwpo9y-public-custom-vnet-md-0-s2hrk
STEP: Dumping workload cluster capz-e2e-hwpo9y/capz-e2e-hwpo9y-public-custom-vnet kube-system pod logs
STEP: Fetching kube-system pod logs took 700.869403ms
STEP: Creating log watcher for controller kube-system/calico-node-2wrx6, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-cqm9z, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-5mhpk, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-hwpo9y-public-custom-vnet-control-plane-z5qd4, container kube-apiserver
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-w674p, container coredns
STEP: Dumping workload cluster capz-e2e-hwpo9y/capz-e2e-hwpo9y-public-custom-vnet Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-wblrl, container coredns
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-hwpo9y-public-custom-vnet-control-plane-z5qd4, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-bl84q, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-hwpo9y-public-custom-vnet-control-plane-z5qd4, container etcd
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-hwpo9y-public-custom-vnet-control-plane-z5qd4, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-zsfdp, container kube-proxy
STEP: Fetching activity logs took 549.231983ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-hwpo9y" namespace
STEP: Deleting all clusters in the capz-e2e-hwpo9y namespace
STEP: Deleting cluster capz-e2e-hwpo9y-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-hwpo9y/capz-e2e-hwpo9y-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-hwpo9y-public-custom-vnet to be deleted
W0512 20:02:48.154568 24162 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I0512 20:03:19.069393 24162 trace.go:205] Trace[686818063]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (12-May-2022 20:02:49.068) (total time: 30001ms):
Trace[686818063]: [30.001212001s] [30.001212001s] END
E0512 20:03:19.069466 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp 20.23.28.182:6443: i/o timeout
I0512 20:03:52.186376 24162 trace.go:205] Trace[562546104]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (12-May-2022 20:03:22.185) (total time: 30001ms):
Trace[562546104]: [30.001185376s] [30.001185376s] END
E0512 20:03:52.186437 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp 20.23.28.182:6443: i/o timeout
I0512 20:04:27.879082 24162 trace.go:205] Trace[1447930670]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (12-May-2022 20:03:57.878) (total time: 30000ms):
Trace[1447930670]: [30.000701309s] [30.000701309s] END
E0512 20:04:27.879147 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp 20.23.28.182:6443: i/o timeout
I0512 20:05:05.874334 24162 trace.go:205] Trace[565873539]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (12-May-2022 20:04:35.872) (total time: 30001ms):
Trace[565873539]: [30.001385551s] [30.001385551s] END
E0512 20:05:05.874413 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp 20.23.28.182:6443: i/o timeout
I0512 20:05:55.114649 24162 trace.go:205] Trace[1207448821]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (12-May-2022 20:05:25.113) (total time: 30001ms):
Trace[1207448821]: [30.001582398s] [30.001582398s] END
E0512 20:05:55.114727 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp 20.23.28.182:6443: i/o timeout
I0512 20:07:16.263808 24162 trace.go:205] Trace[311411285]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (12-May-2022 20:06:46.262) (total time: 30000ms):
Trace[311411285]: [30.000983005s] [30.000983005s] END
E0512 20:07:16.263885 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp 20.23.28.182:6443: i/o timeout
E0512 20:08:03.415807 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-hwpo9y
STEP: Running additional cleanup for the "create-workload-cluster" test spec
May 12 20:08:06.747: INFO: deleting an existing virtual network "custom-vnet"
May 12 20:08:18.620: INFO: deleting an existing route table "node-routetable"
May 12 20:08:21.304: INFO: deleting an existing network security group "node-nsg"
May 12 20:08:31.938: INFO: deleting an existing network security group "control-plane-nsg"
May 12 20:08:42.862: INFO: verifying the existing resource group "capz-e2e-hwpo9y-public-custom-vnet" is empty
May 12 20:08:43.007: INFO: deleting the existing resource group "capz-e2e-hwpo9y-public-custom-vnet"
E0512 20:09:01.865575 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:09:33.327824 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E0512 20:10:04.251885 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 20m42s on Ginkgo node 1 of 3
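The spec fails when `kubectl apply` of the private-cluster template hits `failed calling webhook … the server could not find the requested resource` for the capz defaulting webhooks on the freshly `clusterctl init`-ed workload cluster. As a diagnostic sketch (the parsing helper and the suggested kubectl checks are illustrative, not part of the test suite), one can pull the failing webhook names out of the apiserver errors to know which registrations to inspect:

```python
import re

# Error lines as captured in the transcript above.
err_lines = [
    'Error from server (InternalError): error when creating "STDIN": '
    'Internal error occurred: failed calling webhook '
    '"default.azurecluster.infrastructure.cluster.x-k8s.io": '
    'failed to call webhook: the server could not find the requested resource',
    'Error from server (InternalError): error when creating "STDIN": '
    'Internal error occurred: failed calling webhook '
    '"default.azuremachinetemplate.infrastructure.cluster.x-k8s.io": '
    'failed to call webhook: the server could not find the requested resource',
]

# Extract the quoted webhook identifier from each apiserver error.
webhooks = sorted({m.group(1) for line in err_lines
                   if (m := re.search(r'failed calling webhook "([^"]+)"', line))})
for name in webhooks:
    print(name)

# Against a live management cluster one would then check (not runnable here):
#   kubectl get mutatingwebhookconfigurations -o yaml   # is each name registered, and at which service path?
#   kubectl -n capz-system get pods,svc,endpoints        # is the capz webhook service backed by ready endpoints?
```

"the server could not find the requested resource" from a webhook call typically means the webhook configuration points at a path or version the capz controller pod does not actually serve, so comparing the registered `clientConfig` against the running controller is the natural first step.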
capz-e2e Workload cluster creation Creating a VMSS cluster with a single control plane node and an AzureMachinePool with 2 nodes
capz-e2e Workload cluster creation Creating a Windows Enabled cluster With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
capz-e2e Workload cluster creation Creating a Windows enabled VMSS cluster with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
capz-e2e Workload cluster creation Creating a cluster that uses the external cloud provider with a 1 control plane nodes and 2 worker nodes
capz-e2e Workload cluster creation Creating a ipv6 control-plane cluster With ipv6 worker node
capz-e2e Workload cluster creation Creating an AKS cluster with a single control plane node and 1 node
capz-e2e Workload cluster creation With 3 control-plane nodes and 2 worker nodes
capz-e2e Conformance Tests conformance-tests
capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec in a HA cluster Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd
capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec in a HA cluster using scale in rollout Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd
capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec in a single control plane cluster Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd
capz-e2e Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capz-e2e Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e Running the Cluster API E2E tests Should adopt up-to-date control plane Machines without modification Should adopt up-to-date control plane Machines without modification
capz-e2e Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e Workload cluster creation Creating a GPU-enabled cluster with a single control plane node and 1 node
... skipping 432 lines ...
With ipv6 worker node
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:269
INFO: "With ipv6 worker node" started at Thu, 12 May 2022 19:49:32 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-893d5c" for hosting the cluster
May 12 19:49:32.881: INFO: starting to create namespace for hosting the "capz-e2e-893d5c" test spec
2022/05/12 19:49:32 failed trying to get namespace (capz-e2e-893d5c):namespaces "capz-e2e-893d5c" not found
INFO: Creating namespace capz-e2e-893d5c
INFO: Creating event watcher for namespace "capz-e2e-893d5c"
May 12 19:49:32.935: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-893d5c-ipv6
INFO: Creating the workload cluster with name "capz-e2e-893d5c-ipv6" using the "ipv6" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
STEP: Fetching activity logs took 1.064538338s
STEP: Dumping all the Cluster API resources in the "capz-e2e-893d5c" namespace
STEP: Deleting all clusters in the capz-e2e-893d5c namespace
STEP: Deleting cluster capz-e2e-893d5c-ipv6
INFO: Waiting for the Cluster capz-e2e-893d5c/capz-e2e-893d5c-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-893d5c-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-jt6gm, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-893d5c-ipv6-control-plane-nqdjh, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-893d5c-ipv6-control-plane-bjd8n, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-893d5c-ipv6-control-plane-nqdjh, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-cdp7x, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-893d5c-ipv6-control-plane-bjd8n, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-lgzm8, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-l9lw2, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-893d5c-ipv6-control-plane-nqdjh, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-893d5c-ipv6-control-plane-bjd8n, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-893d5c-ipv6-control-plane-bjd8n, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-893d5c-ipv6-control-plane-nqdjh, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-5bxmz, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-2fqnf, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5hcz5, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-8xbl4, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-46n6b, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-893d5c
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 18m16s on Ginkgo node 2 of 3
... skipping 10 lines ...
• Failure [1242.448 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating a private cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:140
    Creates a public management cluster in the same vnet [It]
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:141
    Expected success, but got an error:
        <*errors.withStack | 0xc00082d860>: {
            error: <*exec.ExitError | 0xc0004cc620>{
                ProcessState: {
                    pid: 28114, status: 256,
                    rusage: {
                        Utime: {Sec: 0, Usec: 510195}, Stime: {Sec: 0, Usec: 359039},
... skipping 69 lines ...
with a single control plane node and an AzureMachinePool with 2 nodes
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:315
INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" started at Thu, 12 May 2022 20:07:48 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-fz4cpw" for hosting the cluster
May 12 20:07:48.516: INFO: starting to create namespace for hosting the "capz-e2e-fz4cpw" test spec
2022/05/12 20:07:48 failed trying to get namespace (capz-e2e-fz4cpw):namespaces "capz-e2e-fz4cpw" not found
INFO: Creating namespace capz-e2e-fz4cpw
INFO: Creating event watcher for namespace "capz-e2e-fz4cpw"
May 12 20:07:48.552: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-fz4cpw-vmss
INFO: Creating the workload cluster with name "capz-e2e-fz4cpw-vmss" using the "machine-pool" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 128 lines ...
with a 1 control plane nodes and 2 worker nodes
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:419
INFO: "with a 1 control plane nodes and 2 worker nodes" started at Thu, 12 May 2022 20:10:15 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-8wd6j8" for hosting the cluster
May 12 20:10:15.282: INFO: starting to create namespace for hosting the "capz-e2e-8wd6j8" test spec
2022/05/12 20:10:15 failed trying to get namespace (capz-e2e-8wd6j8):namespaces "capz-e2e-8wd6j8" not found
INFO: Creating namespace capz-e2e-8wd6j8
INFO: Creating event watcher for namespace "capz-e2e-8wd6j8"
May 12 20:10:15.328: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-8wd6j8-oot
INFO: Creating the workload cluster with name "capz-e2e-8wd6j8-oot" using the "external-cloud-provider" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 13 lines ...
configmap/cloud-node-manager-addon created
clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-8wd6j8-oot-calico created
configmap/cni-capz-e2e-8wd6j8-oot-calico created
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E0512 20:10:56.617636 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by capz-e2e-8wd6j8/capz-e2e-8wd6j8-oot-control-plane to be provisioned
STEP: Waiting for one control plane node to exist
E0512 20:11:42.950804 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:12:33.317772 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:13:24.697096 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:13:55.235972 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane capz-e2e-8wd6j8/capz-e2e-8wd6j8-oot-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
E0512 20:14:41.685892 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:15:15.529522 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for the machine pools to be provisioned
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/webfultz2 to be available
May 12 20:15:37.018: INFO: starting to wait for deployment to become available
E0512 20:16:01.303831 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
May 12 20:16:17.599: INFO: Deployment default/webfultz2 is now available, took 40.580452821s
STEP: creating an internal Load Balancer service
May 12 20:16:17.599: INFO: starting to create an internal Load Balancer service
STEP: waiting for service default/webfultz2-ilb to be available
May 12 20:16:17.743: INFO: waiting for service default/webfultz2-ilb to be available
E0512 20:16:42.421904 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
May 12 20:17:28.626: INFO: service default/webfultz2-ilb is available, took 1m10.883047209s
STEP: connecting to the internal LB service from a curl pod
May 12 20:17:28.735: INFO: starting to create a curl to ilb job
E0512 20:17:28.825197 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: waiting for job default/curl-to-ilb-job053me to be complete
May 12 20:17:28.856: INFO: waiting for job default/curl-to-ilb-job053me to be complete
May 12 20:17:39.075: INFO: job default/curl-to-ilb-job053me is complete, took 10.219297475s
STEP: deleting the ilb test resources
May 12 20:17:39.076: INFO: deleting the ilb service: webfultz2-ilb
May 12 20:17:39.212: INFO: deleting the ilb job: curl-to-ilb-job053me
STEP: creating an external Load Balancer service
May 12 20:17:39.322: INFO: starting to create an external Load Balancer service
STEP: waiting for service default/webfultz2-elb to be available
May 12 20:17:39.443: INFO: waiting for service default/webfultz2-elb to be available
E0512 20:18:05.154713 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:18:56.810395 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
May 12 20:19:30.750: INFO: service default/webfultz2-elb is available, took 1m51.307004431s
STEP: connecting to the external LB service from a curl pod
May 12 20:19:30.858: INFO: starting to create curl-to-elb job
STEP: waiting for job default/curl-to-elb-jobzagrc435tcu to be complete
May 12 20:19:30.971: INFO: waiting for job default/curl-to-elb-jobzagrc435tcu to be complete
May 12 20:19:41.188: INFO: job default/curl-to-elb-jobzagrc435tcu is complete, took 10.217486884s
... skipping 6 lines ...
May 12 20:19:41.566: INFO: starting to delete deployment webfultz2
May 12 20:19:41.675: INFO: starting to delete job curl-to-elb-jobzagrc435tcu
STEP: Dumping logs from the "capz-e2e-8wd6j8-oot" workload cluster
STEP: Dumping workload cluster capz-e2e-8wd6j8/capz-e2e-8wd6j8-oot logs
May 12 20:19:41.834: INFO: INFO: Collecting logs for node capz-e2e-8wd6j8-oot-control-plane-dknlc in cluster capz-e2e-8wd6j8-oot in namespace capz-e2e-8wd6j8
E0512 20:19:44.335541 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
May 12 20:19:58.414: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-8wd6j8-oot-control-plane-dknlc
May 12 20:19:59.674: INFO: INFO: Collecting logs for node capz-e2e-8wd6j8-oot-md-0-49gkh in cluster capz-e2e-8wd6j8-oot in namespace capz-e2e-8wd6j8
May 12 20:20:12.686: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-8wd6j8-oot-md-0-49gkh
... skipping 24 lines ...
STEP: Fetching activity logs took 550.659653ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-8wd6j8" namespace
STEP: Deleting all clusters in the capz-e2e-8wd6j8 namespace
STEP: Deleting cluster capz-e2e-8wd6j8-oot
INFO: Waiting for the Cluster capz-e2e-8wd6j8/capz-e2e-8wd6j8-oot to be deleted
STEP: Waiting for cluster capz-e2e-8wd6j8-oot to be deleted
E0512 20:20:38.835945 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:21:09.187077 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:21:50.776782 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:22:44.301950 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:23:32.750861 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:24:03.527183 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:24:45.354808 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:25:27.441476 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:26:02.363066 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:26:44.074135 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:27:30.721890 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:28:16.965896 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-8wd6j8
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E0512 20:29:07.248846 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 19m17s on Ginkgo node 1 of 3
• [SLOW TEST:1156.847 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
With 3 control-plane nodes and 2 worker nodes
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:203
INFO: "With 3 control-plane nodes and 2 worker nodes" started at Thu, 12 May 2022 19:49:32 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-5cghxw" for hosting the cluster
May 12 19:49:32.881: INFO: starting to create namespace for hosting the "capz-e2e-5cghxw" test spec
2022/05/12 19:49:32 failed trying to get namespace (capz-e2e-5cghxw):namespaces "capz-e2e-5cghxw" not found
INFO: Creating namespace capz-e2e-5cghxw
INFO: Creating event watcher for namespace "capz-e2e-5cghxw"
May 12 19:49:32.940: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-5cghxw-ha
INFO: Creating the workload cluster with name "capz-e2e-5cghxw-ha" using the "(default)" template (Kubernetes v1.22.1, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 59 lines ...
STEP: waiting for job default/curl-to-elb-jobblt3ft92a50 to be complete
May 12 19:59:49.737: INFO: waiting for job default/curl-to-elb-jobblt3ft92a50 to be complete
May 12 19:59:59.959: INFO: job default/curl-to-elb-jobblt3ft92a50 is complete, took 10.222324495s
STEP: connecting directly to the external LB service
May 12 19:59:59.959: INFO: starting attempts to connect directly to the external LB service
2022/05/12 19:59:59 [DEBUG] GET http://20.23.31.124
2022/05/12 20:00:29 [ERR] GET http://20.23.31.124 request failed: Get "http://20.23.31.124": dial tcp 20.23.31.124:80: i/o timeout
2022/05/12 20:00:29 [DEBUG] GET http://20.23.31.124: retrying in 1s (4 left)
May 12 20:00:46.433: INFO: successfully connected to the external LB service
STEP: deleting the test resources
May 12 20:00:46.433: INFO: starting to delete external LB service web1ft8c5-elb
May 12 20:00:46.595: INFO: starting to delete deployment web1ft8c5
May 12 20:00:46.720: INFO: starting to delete job curl-to-elb-jobblt3ft92a50
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
May 12 20:00:46.889: INFO: starting to create dev deployment namespace
2022/05/12 20:00:47 failed trying to get namespace (development):namespaces "development" not found
2022/05/12 20:00:47 namespace development does not exist, creating...
STEP: Creating production namespace
May 12 20:00:47.121: INFO: starting to create prod deployment namespace
2022/05/12 20:00:47 failed trying to get namespace (production):namespaces "production" not found
2022/05/12 20:00:47 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
May 12 20:00:47.351: INFO: starting to create frontend-prod deployments
May 12 20:00:47.464: INFO: starting to create frontend-dev deployments
May 12 20:00:47.583: INFO: starting to create backend deployments
May 12 20:00:47.696: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
May 12 20:01:14.497: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.136.69 port 80: Connection timed out
STEP: Cleaning up after ourselves
May 12 20:03:25.862: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
STEP: Applying a network policy to deny egress access in development namespace
May 12 20:03:26.273: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.136.69 port 80: Connection timed out
curl: (7) Failed to connect to 192.168.136.69 port 80: Connection timed out
STEP: Cleaning up after ourselves
May 12 20:07:48.004: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
May 12 20:07:48.412: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.136.70 port 80: Connection timed out
STEP: Cleaning up after ourselves
May 12 20:10:01.125: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
May 12 20:10:01.524: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.136.67 port 80: Connection timed out
curl: (7) Failed to connect to 192.168.136.70 port 80: Connection timed out
STEP: Cleaning up after ourselves
May 12 20:14:25.316: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
May 12 20:14:25.725: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.136.69 port 80: Connection timed out
STEP: Cleaning up after ourselves
May 12 20:16:38.437: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
May 12 20:16:38.840: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.136.69 port 80: Connection timed out
STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: Dumping logs from the "capz-e2e-5cghxw-ha" workload cluster
STEP: Dumping workload cluster capz-e2e-5cghxw/capz-e2e-5cghxw-ha logs
May 12 20:18:50.362: INFO: INFO: Collecting logs for node capz-e2e-5cghxw-ha-control-plane-s7fll in cluster capz-e2e-5cghxw-ha in namespace capz-e2e-5cghxw
May 12 20:19:01.685: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-5cghxw-ha-control-plane-s7fll
... skipping 39 lines ...
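The `backend-deny-ingress` step above applies a policy selecting `app: webapp, role: backend` pods in the `development` namespace and then verifies that curl from the network-policy pods times out, which is exactly what the `curl: (7)` lines record. A sketch of what such a manifest plausibly looks like (illustrative only; the real manifest ships with the test suite and may differ):

```yaml
# Hypothetical reconstruction of development/backend-deny-ingress:
# selecting pods and listing Ingress in policyTypes with no ingress
# rules denies all ingress traffic to the selected pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-deny-ingress
  namespace: development
spec:
  podSelector:
    matchLabels:
      app: webapp
      role: backend
  policyTypes:
    - Ingress
```

Note that such policies only take effect because the clusters run Calico; with a CNI that does not enforce NetworkPolicy, the curl probes would have succeeded and the test would fail.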
STEP: Creating log watcher for controller kube-system/kube-proxy-vtl47, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-r54ws, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-sch9w, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-z7klc, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-5htd8, container coredns
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-8nmn7, container coredns
STEP: Got error while iterating over activity logs for resource group capz-e2e-5cghxw-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000576465s
STEP: Dumping all the Cluster API resources in the "capz-e2e-5cghxw" namespace
STEP: Deleting all clusters in the capz-e2e-5cghxw namespace
STEP: Deleting cluster capz-e2e-5cghxw-ha
INFO: Waiting for the Cluster capz-e2e-5cghxw/capz-e2e-5cghxw-ha to be deleted
STEP: Waiting for cluster capz-e2e-5cghxw-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-z7klc, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-5cghxw-ha-control-plane-7rkfz, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-5cghxw-ha-control-plane-7rkfz, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-5cghxw-ha-control-plane-7rkfz, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vtl47, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-sch9w, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-5cghxw-ha-control-plane-888qv, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-4rm77, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-5cghxw-ha-control-plane-s7fll, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-5cghxw-ha-control-plane-888qv, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-jj9wg, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-r54ws, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-5cghxw-ha-control-plane-s7fll, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-5cghxw-ha-control-plane-s7fll, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-8nmn7, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-mmfd8, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-87z2x, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-5cghxw-ha-control-plane-s7fll, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-5htd8, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-5cghxw-ha-control-plane-7rkfz, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-5cghxw-ha-control-plane-888qv, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-5cghxw-ha-control-plane-888qv, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-hcc4s, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-5cghxw
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 worker nodes" ran for 43m57s on Ginkgo node 3 of 3
... skipping 8 lines ...
with a single control plane node and 1 node
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454
INFO: "with a single control plane node and 1 node" started at Thu, 12 May 2022 20:26:26 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-8mzmbb" for hosting the cluster
May 12 20:26:26.393: INFO: starting to create namespace for hosting the "capz-e2e-8mzmbb" test spec
2022/05/12 20:26:26 failed trying to get namespace (capz-e2e-8mzmbb):namespaces "capz-e2e-8mzmbb" not found
INFO: Creating namespace capz-e2e-8mzmbb
INFO: Creating event watcher for namespace "capz-e2e-8mzmbb"
May 12 20:26:26.437: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-8mzmbb-aks
INFO: Creating the workload cluster with name "capz-e2e-8mzmbb-aks" using the "aks-multi-tenancy" template (Kubernetes v1.22.6, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 34 lines ...
STEP: Dumping logs from the "capz-e2e-8mzmbb-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-8mzmbb/capz-e2e-8mzmbb-aks logs
May 12 20:35:47.015: INFO: INFO: Collecting logs for node aks-agentpool1-42202984-vmss000000 in cluster capz-e2e-8mzmbb-aks in namespace capz-e2e-8mzmbb
May 12 20:37:56.748: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0
Failed to get logs for machine pool agentpool0, cluster capz-e2e-8mzmbb/capz-e2e-8mzmbb-aks: [dialing public load balancer at capz-e2e-8mzmbb-aks-5627ae3a.hcp.westeurope.azmk8s.io: dial tcp 20.76.50.198:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
May 12 20:37:57.270: INFO: INFO: Collecting logs for node aks-agentpool1-42202984-vmss000000 in cluster capz-e2e-8mzmbb-aks in namespace capz-e2e-8mzmbb
May 12 20:40:07.820: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0
Failed to get logs for machine pool agentpool1, cluster capz-e2e-8mzmbb/capz-e2e-8mzmbb-aks: [dialing public load balancer at capz-e2e-8mzmbb-aks-5627ae3a.hcp.westeurope.azmk8s.io: dial tcp 20.76.50.198:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
STEP: Dumping workload cluster capz-e2e-8mzmbb/capz-e2e-8mzmbb-aks kube-system pod logs
STEP: Fetching kube-system pod logs took 1.051559303s
STEP: Dumping workload cluster capz-e2e-8mzmbb/capz-e2e-8mzmbb-aks Azure activity log
STEP: Creating log watcher for controller kube-system/azure-ip-masq-agent-7ddks, container azure-ip-masq-agent
STEP: Creating log watcher for controller kube-system/csi-azurefile-node-kkgfm, container node-driver-registrar
STEP: Creating log watcher for controller kube-system/csi-azuredisk-node-2c76r, container azuredisk
... skipping 44 lines ...
with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Thu, 12 May 2022 20:33:29 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-z8i00k" for hosting the cluster
May 12 20:33:29.586: INFO: starting to create namespace for hosting the "capz-e2e-z8i00k" test spec
2022/05/12 20:33:29 failed trying to get namespace (capz-e2e-z8i00k):namespaces "capz-e2e-z8i00k" not found
INFO: Creating namespace capz-e2e-z8i00k
INFO: Creating event watcher for namespace "capz-e2e-z8i00k"
May 12 20:33:29.626: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-z8i00k-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-z8i00k-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 123 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-xwl9c, container kube-proxy
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-z8i00k-win-vmss-control-plane-kslfq, container etcd
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-z8i00k-win-vmss-control-plane-kslfq, container kube-apiserver
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-nms55, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-xdgff, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-z8i00k-win-vmss-control-plane-kslfq, container kube-scheduler
STEP: Got error while iterating over activity logs for resource group capz-e2e-z8i00k-win-vmss: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001100592s
STEP: Dumping all the Cluster API resources in the "capz-e2e-z8i00k" namespace
STEP: Deleting all clusters in the capz-e2e-z8i00k namespace
STEP: Deleting cluster capz-e2e-z8i00k-win-vmss
INFO: Waiting for the Cluster capz-e2e-z8i00k/capz-e2e-z8i00k-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-z8i00k-win-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-dg7q8, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-xwl9c, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-z8i00k
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 31m30s on Ginkgo node 3 of 3
... skipping 10 lines ...
With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Thu, 12 May 2022 20:29:32 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-kloipm" for hosting the cluster
May 12 20:29:32.132: INFO: starting to create namespace for hosting the "capz-e2e-kloipm" test spec
2022/05/12 20:29:32 failed trying to get namespace (capz-e2e-kloipm):namespaces "capz-e2e-kloipm" not found
INFO: Creating namespace capz-e2e-kloipm
INFO: Creating event watcher for namespace "capz-e2e-kloipm"
May 12 20:29:32.171: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-kloipm-win-ha
INFO: Creating the workload cluster with name "capz-e2e-kloipm-win-ha" using the "windows" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 12 lines ...
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created
clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-kloipm-win-ha-flannel created
configmap/cni-capz-e2e-kloipm-win-ha-flannel created
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E0512 20:29:50.610110 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:30:25.924893 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:31:12.004719 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by capz-e2e-kloipm/capz-e2e-kloipm-win-ha-control-plane to be provisioned
STEP: Waiting for one control plane node to exist
E0512 20:31:43.615288 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:32:27.812129 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:33:06.857357 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be ready
INFO: Waiting for the remaining control plane machines managed by capz-e2e-kloipm/capz-e2e-kloipm-win-ha-control-plane to be provisioned
STEP: Waiting for all control plane nodes to exist
E0512 20:33:57.835810 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:34:42.129860 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:35:38.801665 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:36:12.933564 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:36:57.834260 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:37:43.621974 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane capz-e2e-kloipm/capz-e2e-kloipm-win-ha-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Waiting for the workload nodes to exist
INFO: Waiting for the machine pools to be provisioned
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/weby7pcy0 to be available
May 12 20:38:23.914: INFO: starting to wait for deployment to become available
E0512 20:38:31.115126 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
May 12 20:38:44.276: INFO: Deployment default/weby7pcy0 is now available, took 20.361911787s
STEP: creating an internal Load Balancer service
May 12 20:38:44.276: INFO: starting to create an internal Load Balancer service
STEP: waiting for service default/weby7pcy0-ilb to be available
May 12 20:38:44.436: INFO: waiting for service default/weby7pcy0-ilb to be available
E0512 20:39:14.866222 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
May 12 20:39:55.341: INFO: service default/weby7pcy0-ilb is available, took 1m10.90431786s
STEP: connecting to the internal LB service from a curl pod
May 12 20:39:55.453: INFO: starting to create a curl to ilb job
STEP: waiting for job default/curl-to-ilb-job7s97q to be complete
May 12 20:39:55.582: INFO: waiting for job default/curl-to-ilb-job7s97q to be complete
May 12 20:40:05.808: INFO: job default/curl-to-ilb-job7s97q is complete, took 10.225637016s
STEP: deleting the ilb test resources
May 12 20:40:05.808: INFO: deleting the ilb service: weby7pcy0-ilb
May 12 20:40:05.976: INFO: deleting the ilb job: curl-to-ilb-job7s97q
STEP: creating an external Load Balancer service
May 12 20:40:06.098: INFO: starting to create an external Load Balancer service
STEP: waiting for service default/weby7pcy0-elb to be available
May 12 20:40:06.253: INFO: waiting for service default/weby7pcy0-elb to be available
E0512 20:40:11.978975 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
May 12 20:40:26.592: INFO: service default/weby7pcy0-elb is available, took 20.339534726s
STEP: connecting to the external LB service from a curl pod
May 12 20:40:26.704: INFO: starting to create curl-to-elb job
STEP: waiting for job default/curl-to-elb-job6dkj363zl83 to be complete
May 12 20:40:26.825: INFO: waiting for job default/curl-to-elb-job6dkj363zl83 to be complete
May 12 20:40:37.049: INFO: job default/curl-to-elb-job6dkj363zl83 is complete, took 10.22486174s
... skipping 6 lines ...
May 12 20:40:37.431: INFO: starting to delete deployment weby7pcy0
May 12 20:40:37.551: INFO: starting to delete job curl-to-elb-job6dkj363zl83
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web-windowsodhw01 to be available
May 12 20:40:37.937: INFO: starting to wait for deployment to become available
E0512 20:41:11.830878 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:41:53.405063 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
May 12 20:41:59.021: INFO: Deployment default/web-windowsodhw01 is now available, took 1m21.084201181s
STEP: creating an internal Load Balancer service
May 12 20:41:59.021: INFO: starting to create an internal Load Balancer service
STEP: waiting for service default/web-windowsodhw01-ilb to be available
May 12 20:41:59.177: INFO: waiting for service default/web-windowsodhw01-ilb to be available
May 12 20:42:09.403: INFO: service default/web-windowsodhw01-ilb is available, took 10.225977948s
... skipping 6 lines ...
May 12 20:42:19.857: INFO: deleting the ilb service: web-windowsodhw01-ilb
May 12 20:42:20.023: INFO: deleting the ilb job: curl-to-ilb-jobcc3cc
STEP: creating an external Load Balancer service
May 12 20:42:20.142: INFO: starting to create an external Load Balancer service
STEP: waiting for service default/web-windowsodhw01-elb to be available
May 12 20:42:20.295: INFO: waiting for service default/web-windowsodhw01-elb to be available
E0512 20:42:26.821141 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
May 12 20:42:50.749: INFO: service default/web-windowsodhw01-elb is available, took 30.454388211s
STEP: connecting to the external LB service from a curl pod
May 12 20:42:50.863: INFO: starting to create curl-to-elb job
STEP: waiting for job default/curl-to-elb-jobuaam0rtrenz to be complete
May 12 20:42:50.982: INFO: waiting for job default/curl-to-elb-jobuaam0rtrenz to be complete
May 12 20:43:01.207: INFO: job default/curl-to-elb-jobuaam0rtrenz is complete, took 10.224893034s
... skipping 6 lines ...
May 12 20:43:01.586: INFO: starting to delete deployment web-windowsodhw01
May 12 20:43:01.707: INFO: starting to delete job curl-to-elb-jobuaam0rtrenz
STEP: Dumping logs from the "capz-e2e-kloipm-win-ha" workload cluster
STEP: Dumping workload cluster capz-e2e-kloipm/capz-e2e-kloipm-win-ha logs
May 12 20:43:01.868: INFO: INFO: Collecting logs for node capz-e2e-kloipm-win-ha-control-plane-wjvzx in cluster capz-e2e-kloipm-win-ha in namespace capz-e2e-kloipm
E0512 20:43:13.702873 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
May 12 20:43:16.498: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-kloipm-win-ha-control-plane-wjvzx
May 12 20:43:17.907: INFO: INFO: Collecting logs for node capz-e2e-kloipm-win-ha-control-plane-hxq9t in cluster capz-e2e-kloipm-win-ha in namespace capz-e2e-kloipm
May 12 20:43:28.179: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-kloipm-win-ha-control-plane-hxq9t
... skipping 4 lines ...
May 12 20:43:38.079: INFO: INFO: Collecting logs for node capz-e2e-kloipm-win-ha-md-0-bs89t in cluster capz-e2e-kloipm-win-ha in namespace capz-e2e-kloipm
May 12 20:43:50.070: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-kloipm-win-ha-md-0-bs89t
May 12 20:43:50.493: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster capz-e2e-kloipm-win-ha in namespace capz-e2e-kloipm
E0512 20:44:08.275697 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
May 12 20:44:38.491: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-kloipm-win-ha-md-win-vjcwx
STEP: Dumping workload cluster capz-e2e-kloipm/capz-e2e-kloipm-win-ha kube-system pod logs
STEP: Fetching kube-system pod logs took 901.841503ms
STEP: Creating log watcher for controller kube-system/kube-proxy-fxkck, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-kloipm-win-ha-control-plane-fh6jv, container kube-scheduler
... skipping 17 lines ...
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-kloipm-win-ha-control-plane-fh6jv, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-cwf8g, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-windows-amd64-vlk2d, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-kloipm-win-ha-control-plane-hxq9t, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-7tmlz, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-kloipm-win-ha-control-plane-wjvzx, container kube-scheduler
E0512 20:44:43.970206 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Got error while iterating over activity logs for resource group capz-e2e-kloipm-win-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000934035s
STEP: Dumping all the Cluster API resources in the "capz-e2e-kloipm" namespace
STEP: Deleting all clusters in the capz-e2e-kloipm namespace
STEP: Deleting cluster capz-e2e-kloipm-win-ha
INFO: Waiting for the Cluster capz-e2e-kloipm/capz-e2e-kloipm-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-kloipm-win-ha to be deleted
E0512 20:45:26.311476 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:46:23.304408 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:47:03.129481 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:47:40.759925 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:48:37.803403 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:49:34.868663 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:50:23.685074 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:50:58.997529 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:51:32.222025 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:52:05.475033 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:52:59.337790 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:53:45.138964 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:54:21.360566 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:55:00.377761 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:55:54.389397 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:56:28.983085 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:57:02.798596 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:57:42.870414 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-6mxlx, container kube-flannel: http2: server sent GOAWAY and closed the connection; LastStreamID=109, ErrCode=NO_ERROR, debug=""
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-kloipm-win-ha-control-plane-hxq9t, container kube-apiserver: http2: server sent GOAWAY and closed the connection; LastStreamID=109, ErrCode=NO_ERROR, debug=""
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-kloipm-win-ha-control-plane-hxq9t, container kube-controller-manager: http2: server sent GOAWAY and closed the connection; LastStreamID=109, ErrCode=NO_ERROR, debug=""
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-kzwgw, container kube-flannel: http2: server sent GOAWAY and closed the connection; LastStreamID=109, ErrCode=NO_ERROR, debug=""
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vrzw9, container kube-proxy: http2: server sent GOAWAY and closed the connection; LastStreamID=109, ErrCode=NO_ERROR, debug=""
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-kloipm-win-ha-control-plane-hxq9t, container kube-scheduler: http2: server sent GOAWAY and closed the connection; LastStreamID=109, ErrCode=NO_ERROR, debug=""
STEP: Got error while streaming logs for pod kube-system/kube-proxy-fxkck, container kube-proxy: http2: server sent GOAWAY and closed the connection; LastStreamID=109, ErrCode=NO_ERROR, debug=""
E0512 20:58:31.453228 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0512 20:59:08.098140 24162 reflector.go:138]
pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host E0512 20:59:41.608753 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host E0512 21:00:32.682335 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host E0512 21:01:30.615367 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host E0512 21:02:19.510673 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get 
"https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host E0512 21:03:07.027335 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host E0512 21:03:55.721264 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host E0512 21:04:27.345423 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host E0512 21:05:04.072398 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup 
capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host E0512 21:05:42.108428 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host E0512 21:06:18.494488 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host E0512 21:06:54.469900 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host E0512 21:07:33.628853 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host E0512 21:08:19.580565 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch 
*v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host [1mSTEP[0m: Deleting namespace used for hosting the "create-workload-cluster" test spec INFO: Deleting namespace capz-e2e-kloipm [1mSTEP[0m: Checking if any resources are left over in Azure for spec "create-workload-cluster" [1mSTEP[0m: Redacting sensitive information from logs E0512 21:09:04.353970 24162 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-hwpo9y/events?resourceVersion=2583": dial tcp: lookup capz-e2e-hwpo9y-public-custom-vnet-622ec77b.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 39m58s on Ginkgo node 1 of 3 [32m• [SLOW TEST:2398.495 seconds][0m Workload cluster creation [90m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43[0m ... skipping 5 lines ... 
STEP: Tearing down the management cluster

Summarizing 1 Failure:

[Fail] Workload cluster creation Creating a private cluster [It] Creates a public management cluster in the same vnet
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.8-0.20220215165403-0234afe87ffe/framework/clusterctl/clusterctl_helpers.go:272

Ran 8 of 22 Specs in 4915.974 seconds
FAIL! -- 7 Passed | 1 Failed | 0 Pending | 14 Skipped

Ginkgo ran 1 suite in 1h23m19.32814767s
Test Suite Failed
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...