Result: FAILURE
Tests: 1 failed / 8 succeeded
Started: 2021-11-27 18:37
Elapsed: 1h45m
Revision: release-0.5

Test Failures


capz-e2e Workload cluster creation Creating an AKS cluster with a single control plane node and 1 node (33m17s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\san\sAKS\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454
Timed out after 1200.001s.
System machine pools not ready
Expected
    <bool>: false
to equal
    <bool>: true
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/aks.go:216
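
The failure above is a Gomega Eventually assertion timing out: the spec at aks.go:216 polls the AKS system machine pools for up to 20 minutes (1200s) and the readiness predicate never became true. A minimal sketch of the pattern, where the helper name and polling interval are assumptions rather than the suite's exact code:

    import (
        "context"
        "time"

        . "github.com/onsi/gomega"
    )

    func waitForAKSSystemPools(ctx context.Context) {
        Eventually(func() bool {
            // Hypothetical helper: true once every AKS system machine pool is ready.
            return allSystemMachinePoolsReady(ctx)
        }, 20*time.Minute, 30*time.Second).Should(Equal(true), "System machine pools not ready")
    }

On timeout, Gomega prints the elapsed time, the description, and the Expected/to equal pair seen above.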
				
stdout/stderr from junit.e2e_suite.3.xml



8 Passed Tests

13 Skipped Tests

Error lines from build-log.txt

... skipping 432 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:269

INFO: "With ipv6 worker node" started at Sat, 27 Nov 2021 18:44:11 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-xy2v3d" for hosting the cluster
Nov 27 18:44:11.949: INFO: starting to create namespace for hosting the "capz-e2e-xy2v3d" test spec
2021/11/27 18:44:11 failed trying to get namespace (capz-e2e-xy2v3d):namespaces "capz-e2e-xy2v3d" not found
INFO: Creating namespace capz-e2e-xy2v3d
INFO: Creating event watcher for namespace "capz-e2e-xy2v3d"
Nov 27 18:44:12.025: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-xy2v3d-ipv6
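
The %!(EXTRA string=cluster-identity-secret) fragment (it recurs before each cluster-name line below) is not corruption in this page: it is Go's fmt package reporting a format call that received more arguments than its format string has verbs. A minimal reproduction:

    package main

    import "fmt"

    func main() {
        // One surplus argument, zero verbs in the format string:
        fmt.Printf("INFO: Creating cluster identity secret", "cluster-identity-secret")
        // Output: INFO: Creating cluster identity secret%!(EXTRA string=cluster-identity-secret)
    }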
INFO: Creating the workload cluster with name "capz-e2e-xy2v3d-ipv6" using the "ipv6" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
STEP: Fetching activity logs took 541.784112ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-xy2v3d" namespace
STEP: Deleting all clusters in the capz-e2e-xy2v3d namespace
STEP: Deleting cluster capz-e2e-xy2v3d-ipv6
INFO: Waiting for the Cluster capz-e2e-xy2v3d/capz-e2e-xy2v3d-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-xy2v3d-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-5ll6j, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-xy2v3d-ipv6-control-plane-hdvmf, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-xy2v3d-ipv6-control-plane-d46ft, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-xy2v3d-ipv6-control-plane-hdvmf, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-xy2v3d-ipv6-control-plane-d46ft, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-xy2v3d-ipv6-control-plane-hdvmf, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-xy2v3d-ipv6-control-plane-vl9vk, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-xy2v3d-ipv6-control-plane-vl9vk, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-gtcfz, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-fv74t, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-thcm8, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-xy2v3d-ipv6-control-plane-hdvmf, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-xy2v3d-ipv6-control-plane-d46ft, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-nhfsv, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-72nvx, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-xy2v3d-ipv6-control-plane-d46ft, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vqcls, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-ph2z2, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-xy2v3d-ipv6-control-plane-vl9vk, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5bk8k, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-9tfg5, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-xy2v3d-ipv6-control-plane-vl9vk, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-44lmt, container coredns: http2: client connection lost
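
The run of http2: client connection lost errors here is expected: the suite holds a follow-mode log stream open for every kube-system container while the workload cluster is being deleted, so each stream dies when the apiserver goes away. A sketch of the underlying client-go call (function and argument names are placeholders, not the suite's exact code):

    import (
        "context"
        "io"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
    )

    // streamContainerLogs follows one container's logs until the stream is severed.
    func streamContainerLogs(ctx context.Context, cs kubernetes.Interface, pod, container string, out io.Writer) error {
        req := cs.CoreV1().Pods("kube-system").GetLogs(pod, &corev1.PodLogOptions{
            Container: container,
            Follow:    true, // keep the stream open; cluster deletion severs it
        })
        stream, err := req.Stream(ctx)
        if err != nil {
            return err
        }
        defer stream.Close()
        // Copy until EOF or error; teardown surfaces as "http2: client connection lost".
        _, err = io.Copy(out, stream)
        return err
    }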
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-xy2v3d
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 19m25s on Ginkgo node 3 of 3

... skipping 10 lines ...
  with a single control plane node and an AzureMachinePool with 2 nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:315

INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" started at Sat, 27 Nov 2021 19:03:36 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-tcgbek" for hosting the cluster
Nov 27 19:03:36.947: INFO: starting to create namespace for hosting the "capz-e2e-tcgbek" test spec
2021/11/27 19:03:36 failed trying to get namespace (capz-e2e-tcgbek):namespaces "capz-e2e-tcgbek" not found
INFO: Creating namespace capz-e2e-tcgbek
INFO: Creating event watcher for namespace "capz-e2e-tcgbek"
Nov 27 19:03:36.986: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-tcgbek-vmss
INFO: Creating the workload cluster with name "capz-e2e-tcgbek-vmss" using the "machine-pool" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 106 lines ...
STEP: Fetching activity logs took 554.074946ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-tcgbek" namespace
STEP: Deleting all clusters in the capz-e2e-tcgbek namespace
STEP: Deleting cluster capz-e2e-tcgbek-vmss
INFO: Waiting for the Cluster capz-e2e-tcgbek/capz-e2e-tcgbek-vmss to be deleted
STEP: Waiting for cluster capz-e2e-tcgbek-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-mgf69, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-wt5ml, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-tcgbek-vmss-control-plane-gphv7, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-tcgbek-vmss-control-plane-gphv7, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-5rwgj, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-tcgbek-vmss-control-plane-gphv7, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-twx2g, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-x4kd8, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-tcgbek-vmss-control-plane-gphv7, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-tcgbek
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" ran for 17m10s on Ginkgo node 3 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:203

INFO: "With 3 control-plane nodes and 2 worker nodes" started at Sat, 27 Nov 2021 18:44:11 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-o29rat" for hosting the cluster
Nov 27 18:44:11.949: INFO: starting to create namespace for hosting the "capz-e2e-o29rat" test spec
2021/11/27 18:44:11 failed trying to get namespace (capz-e2e-o29rat):namespaces "capz-e2e-o29rat" not found
INFO: Creating namespace capz-e2e-o29rat
INFO: Creating event watcher for namespace "capz-e2e-o29rat"
Nov 27 18:44:12.022: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-o29rat-ha
INFO: Creating the workload cluster with name "capz-e2e-o29rat-ha" using the "(default)" template (Kubernetes v1.22.1, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 67 lines ...
Nov 27 18:54:37.052: INFO: starting to delete external LB service webxqfxrt-elb
Nov 27 18:54:37.209: INFO: starting to delete deployment webxqfxrt
Nov 27 18:54:37.331: INFO: starting to delete job curl-to-elb-jobe77x5rwm7zv
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Nov 27 18:54:37.489: INFO: starting to create dev deployment namespace
2021/11/27 18:54:37 failed trying to get namespace (development):namespaces "development" not found
2021/11/27 18:54:37 namespace development does not exist, creating...
STEP: Creating production namespace
Nov 27 18:54:37.722: INFO: starting to create prod deployment namespace
2021/11/27 18:54:37 failed trying to get namespace (production):namespaces "production" not found
2021/11/27 18:54:37 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
Nov 27 18:54:37.959: INFO: starting to create frontend-prod deployments
Nov 27 18:54:38.076: INFO: starting to create frontend-dev deployments
Nov 27 18:54:38.193: INFO: starting to create backend deployments
Nov 27 18:54:38.311: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Nov 27 18:55:05.235: INFO: starting to apply a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.183.3 port 80: Connection timed out
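The curl timeout above is the expected outcome: a deny-ingress policy selects the backend pods and whitelists nothing. A sketch of such a policy using the networking/v1 API types (an assumed shape, not necessarily the suite's exact manifest):

    import (
        networkingv1 "k8s.io/api/networking/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // Selecting pods while declaring PolicyTypes: [Ingress] with no Ingress
    // rules denies all inbound traffic to the selected pods.
    var backendDenyIngress = &networkingv1.NetworkPolicy{
        ObjectMeta: metav1.ObjectMeta{Name: "backend-deny-ingress", Namespace: "development"},
        Spec: networkingv1.NetworkPolicySpec{
            PodSelector: metav1.LabelSelector{
                MatchLabels: map[string]string{"app": "webapp", "role": "backend"},
            },
            PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
        },
    }

Applied via clientset.NetworkingV1().NetworkPolicies("development").Create(...), after which connections from the network-policy pods time out as shown.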

STEP: Cleaning up after ourselves
Nov 27 18:57:16.482: INFO: starting to clean up network policy development/backend-deny-ingress
STEP: Applying a network policy to deny egress access in development namespace
Nov 27 18:57:16.923: INFO: starting to apply a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.183.3 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.183.3 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 27 19:01:38.441: INFO: starting to clean up network policy development/backend-deny-egress
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Nov 27 19:01:38.839: INFO: starting to apply a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.183.4 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 27 19:03:51.747: INFO: starting to clean up network policy development/backend-allow-egress-pod-label
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
Nov 27 19:03:52.149: INFO: starting to apply a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in the same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.183.2 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.183.4 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 27 19:08:15.942: INFO: starting to clean up network policy development/backend-allow-egress-pod-namespace-label
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Nov 27 19:08:16.353: INFO: starting to apply a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.183.3 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 27 19:10:29.061: INFO: starting to clean up network policy development/backend-allow-ingress-pod-label
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
Nov 27 19:10:29.464: INFO: starting to apply a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.183.3 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: Dumping logs from the "capz-e2e-o29rat-ha" workload cluster
STEP: Dumping workload cluster capz-e2e-o29rat/capz-e2e-o29rat-ha logs
Nov 27 19:12:41.043: INFO: Collecting logs for node capz-e2e-o29rat-ha-control-plane-jdhwg in cluster capz-e2e-o29rat-ha in namespace capz-e2e-o29rat

Nov 27 19:12:52.645: INFO: Collecting boot logs for AzureMachine capz-e2e-o29rat-ha-control-plane-jdhwg
... skipping 39 lines ...
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-o29rat-ha-control-plane-7k555, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-o29rat-ha-control-plane-7k555, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-o29rat-ha-control-plane-7k555, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-ksvpz, container calico-node
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-o29rat-ha-control-plane-jdhwg, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-o29rat-ha-control-plane-jdhwg, container kube-apiserver
STEP: Got error while iterating over activity logs for resource group capz-e2e-o29rat-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000330498s
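
The round 30.000...s figure together with context deadline exceeded suggests the activity-log pagination is bounded by a 30-second context; a sketch under that assumption (resultIterator is a stand-in interface for the Azure SDK's paged iterator, not a real SDK type):

    import (
        "context"
        "log"
        "time"
    )

    type resultIterator interface {
        NotDone() bool
        NextWithContext(ctx context.Context) error
    }

    func dumpActivityLogs(iter resultIterator) {
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()
        for iter.NotDone() {
            // ... record the current page of activity-log events ...
            if err := iter.NextWithContext(ctx); err != nil {
                // After 30s every further page request fails with
                // "context deadline exceeded", as logged above.
                log.Printf("Got error while iterating over activity logs: %v", err)
                return
            }
        }
    }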
STEP: Dumping all the Cluster API resources in the "capz-e2e-o29rat" namespace
STEP: Deleting all clusters in the capz-e2e-o29rat namespace
STEP: Deleting cluster capz-e2e-o29rat-ha
INFO: Waiting for the Cluster capz-e2e-o29rat/capz-e2e-o29rat-ha to be deleted
STEP: Waiting for cluster capz-e2e-o29rat-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-ksvpz, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-cwmhw, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-o29rat-ha-control-plane-jdhwg, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-o29rat-ha-control-plane-5xmht, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-wfhkv, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-o29rat-ha-control-plane-5xmht, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-5rrl4, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-o29rat-ha-control-plane-7k555, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-zc42x, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-s6dd6, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-o29rat-ha-control-plane-5xmht, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-o29rat-ha-control-plane-7k555, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-o29rat-ha-control-plane-7k555, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-c22nw, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-9xb6w, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-o29rat-ha-control-plane-jdhwg, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-o29rat-ha-control-plane-7k555, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-h7rnm, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-o29rat-ha-control-plane-jdhwg, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-o29rat-ha-control-plane-5xmht, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-hzdkz, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-o29rat-ha-control-plane-jdhwg, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-fp4rl, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-o29rat
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 worker nodes" ran for 41m12s on Ginkgo node 2 of 3

... skipping 8 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:377

INFO: "with a single control plane node and 1 node" started at Sat, 27 Nov 2021 19:20:46 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-usx5f0" for hosting the cluster
Nov 27 19:20:46.537: INFO: starting to create namespace for hosting the "capz-e2e-usx5f0" test spec
2021/11/27 19:20:46 failed trying to get namespace (capz-e2e-usx5f0):namespaces "capz-e2e-usx5f0" not found
INFO: Creating namespace capz-e2e-usx5f0
INFO: Creating event watcher for namespace "capz-e2e-usx5f0"
Nov 27 19:20:46.583: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-usx5f0-gpu
INFO: Creating the workload cluster with name "capz-e2e-usx5f0-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 58 lines ...
STEP: Fetching activity logs took 549.546064ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-usx5f0" namespace
STEP: Deleting all clusters in the capz-e2e-usx5f0 namespace
STEP: Deleting cluster capz-e2e-usx5f0-gpu
INFO: Waiting for the Cluster capz-e2e-usx5f0/capz-e2e-usx5f0-gpu to be deleted
STEP: Waiting for cluster capz-e2e-usx5f0-gpu to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-4pn7h, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-9fq2z, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-usx5f0
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and 1 node" ran for 25m9s on Ginkgo node 3 of 3

... skipping 10 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:141

INFO: "Creates a public management cluster in the same vnet" started at Sat, 27 Nov 2021 18:44:11 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-7ss9gc" for hosting the cluster
Nov 27 18:44:11.946: INFO: starting to create namespace for hosting the "capz-e2e-7ss9gc" test spec
2021/11/27 18:44:11 failed trying to get namespace (capz-e2e-7ss9gc):namespaces "capz-e2e-7ss9gc" not found
INFO: Creating namespace capz-e2e-7ss9gc
INFO: Creating event watcher for namespace "capz-e2e-7ss9gc"
Nov 27 18:44:12.005: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-7ss9gc-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-7ss9gc-public-custom-vnet-control-plane-4plwv, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-w7sjt, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-fxltz, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-glvp6, container calico-node
STEP: Dumping workload cluster capz-e2e-7ss9gc/capz-e2e-7ss9gc-public-custom-vnet Azure activity log
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-7ss9gc-public-custom-vnet-control-plane-4plwv, container kube-scheduler
STEP: Got error while iterating over activity logs for resource group capz-e2e-7ss9gc-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000221595s
STEP: Dumping all the Cluster API resources in the "capz-e2e-7ss9gc" namespace
STEP: Deleting all clusters in the capz-e2e-7ss9gc namespace
STEP: Deleting cluster capz-e2e-7ss9gc-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-7ss9gc/capz-e2e-7ss9gc-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-7ss9gc-public-custom-vnet to be deleted
W1127 19:39:30.553508   24215 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I1127 19:40:02.016038   24215 trace.go:205] Trace[15865574]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (27-Nov-2021 19:39:32.014) (total time: 30001ms):
Trace[15865574]: [30.001160845s] [30.001160845s] END
E1127 19:40:02.016107   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp 20.56.232.8:6443: i/o timeout
I1127 19:40:33.998175   24215 trace.go:205] Trace[294995862]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (27-Nov-2021 19:40:03.997) (total time: 30000ms):
Trace[294995862]: [30.000576915s] [30.000576915s] END
E1127 19:40:33.998241   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp 20.56.232.8:6443: i/o timeout
I1127 19:41:08.257173   24215 trace.go:205] Trace[1240011897]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (27-Nov-2021 19:40:38.256) (total time: 30000ms):
Trace[1240011897]: [30.000940764s] [30.000940764s] END
E1127 19:41:08.257228   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp 20.56.232.8:6443: i/o timeout
I1127 19:41:50.559363   24215 trace.go:205] Trace[1775533784]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (27-Nov-2021 19:41:20.558) (total time: 30000ms):
Trace[1775533784]: [30.000959206s] [30.000959206s] END
E1127 19:41:50.559433   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp 20.56.232.8:6443: i/o timeout
I1127 19:42:40.469796   24215 trace.go:205] Trace[1078373127]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (27-Nov-2021 19:42:10.469) (total time: 30000ms):
Trace[1078373127]: [30.000593021s] [30.000593021s] END
E1127 19:42:40.469868   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp 20.56.232.8:6443: i/o timeout
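
These reflector traces come from the event watcher created at the start of the spec (the "Creating event watcher for namespace" line): a client-go informer keeps re-listing *v1.Event after the workload cluster's endpoint is gone, backing off between attempts until it is stopped. A self-contained sketch of such a watcher (the kubeconfig path is a placeholder):

    package main

    import (
        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Namespace-scoped factory; its reflector lists and watches v1.Event objects.
        factory := informers.NewSharedInformerFactoryWithOptions(
            cs, 0, informers.WithNamespace("capz-e2e-7ss9gc"))
        stop := make(chan struct{})
        // Once the apiserver endpoint is deleted, the reflector logs
        // "Failed to watch *v1.Event: ..." and retries with backoff (the E1127 lines).
        factory.Core().V1().Events().Informer().Run(stop)
    }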
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-7ss9gc
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Nov 27 19:42:50.776: INFO: deleting an existing virtual network "custom-vnet"
Nov 27 19:43:01.799: INFO: deleting an existing route table "node-routetable"
Nov 27 19:43:12.425: INFO: deleting an existing network security group "node-nsg"
E1127 19:43:20.623188   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp: lookup capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 27 19:43:23.348: INFO: deleting an existing network security group "control-plane-nsg"
Nov 27 19:43:33.962: INFO: verifying the existing resource group "capz-e2e-7ss9gc-public-custom-vnet" is empty
Nov 27 19:43:34.002: INFO: deleting the existing resource group "capz-e2e-7ss9gc-public-custom-vnet"
E1127 19:44:18.159764   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp: lookup capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1127 19:44:58.261966   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp: lookup capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 19:45:49.297989   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp: lookup capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 1h2m35s on Ginkgo node 1 of 3


• [SLOW TEST:3754.609 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a 1 control plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:419

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Sat, 27 Nov 2021 19:25:24 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-s01v8p" for hosting the cluster
Nov 27 19:25:24.338: INFO: starting to create namespace for hosting the "capz-e2e-s01v8p" test spec
2021/11/27 19:25:24 failed trying to get namespace (capz-e2e-s01v8p):namespaces "capz-e2e-s01v8p" not found
INFO: Creating namespace capz-e2e-s01v8p
INFO: Creating event watcher for namespace "capz-e2e-s01v8p"
Nov 27 19:25:24.369: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-s01v8p-oot
INFO: Creating the workload cluster with name "capz-e2e-s01v8p-oot" using the "external-cloud-provider" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 53 lines ...
STEP: waiting for job default/curl-to-elb-jobwnwph9emtuc to be complete
Nov 27 19:34:10.301: INFO: waiting for job default/curl-to-elb-jobwnwph9emtuc to be complete
Nov 27 19:34:20.526: INFO: job default/curl-to-elb-jobwnwph9emtuc is complete, took 10.224999112s
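
The curl-to-elb job is an in-cluster connectivity probe: a short-lived Job whose pod curls the load balancer and completes only when the request succeeds. A sketch with the batch/v1 types (name, image, and command are assumptions):

    import (
        batchv1 "k8s.io/api/batch/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    var curlToELB = &batchv1.Job{
        ObjectMeta: metav1.ObjectMeta{Name: "curl-to-elb-job", Namespace: "default"},
        Spec: batchv1.JobSpec{
            Template: corev1.PodTemplateSpec{
                Spec: corev1.PodSpec{
                    RestartPolicy: corev1.RestartPolicyNever,
                    Containers: []corev1.Container{{
                        Name:  "curl",
                        Image: "curlimages/curl", // assumed image
                        // --fail makes curl exit non-zero on HTTP errors, failing the pod.
                        Command: []string{"curl", "--fail", "http://20.76.146.62"},
                    }},
                },
            },
        },
    }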
STEP: connecting directly to the external LB service
Nov 27 19:34:20.526: INFO: starting attempts to connect directly to the external LB service
2021/11/27 19:34:20 [DEBUG] GET http://20.76.146.62
2021/11/27 19:34:50 [ERR] GET http://20.76.146.62 request failed: Get "http://20.76.146.62": dial tcp 20.76.146.62:80: i/o timeout
2021/11/27 19:34:50 [DEBUG] GET http://20.76.146.62: retrying in 1s (4 left)
2021/11/27 19:35:21 [ERR] GET http://20.76.146.62 request failed: Get "http://20.76.146.62": dial tcp 20.76.146.62:80: i/o timeout
2021/11/27 19:35:21 [DEBUG] GET http://20.76.146.62: retrying in 2s (3 left)
Nov 27 19:35:23.750: INFO: successfully connected to the external LB service
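
The [DEBUG]/[ERR] lines with the "retrying in 1s (4 left)" countdown match the log format of hashicorp/go-retryablehttp, which retries with exponential backoff (1s, 2s, ...). A sketch of the direct-connection step under that assumption:

    import retryablehttp "github.com/hashicorp/go-retryablehttp"

    func connectToELB() error {
        client := retryablehttp.NewClient()
        client.RetryMax = 4 // inferred from the "(4 left)" countdown; an assumption
        // Logs "[DEBUG] GET ..." per attempt, then "[ERR] ... request failed" and
        // "retrying in <wait> (<n> left)" on each failure, as seen above.
        resp, err := client.Get("http://20.76.146.62")
        if err != nil {
            return err
        }
        return resp.Body.Close()
    }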
STEP: deleting the test resources
Nov 27 19:35:23.750: INFO: starting to delete external LB service web8g3i1e-elb
Nov 27 19:35:23.888: INFO: starting to delete deployment web8g3i1e
Nov 27 19:35:24.005: INFO: starting to delete job curl-to-elb-jobwnwph9emtuc
... skipping 56 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454

INFO: "with a single control plane node and 1 node" ran for 25m9s on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-os6q5t" for hosting the cluster
Nov 27 19:45:56.032: INFO: starting to create namespace for hosting the "capz-e2e-os6q5t" test spec
2021/11/27 19:45:56 failed trying to get namespace (capz-e2e-os6q5t):namespaces "capz-e2e-os6q5t" not found
INFO: Creating namespace capz-e2e-os6q5t
INFO: Creating event watcher for namespace "capz-e2e-os6q5t"
Nov 27 19:45:56.095: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-os6q5t-aks
INFO: Creating the workload cluster with name "capz-e2e-os6q5t-aks" using the "aks-multi-tenancy" template (Kubernetes v1.19.13, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 107 lines ...
  with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543

INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Sat, 27 Nov 2021 19:48:09 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-1jqvqn" for hosting the cluster
Nov 27 19:48:09.848: INFO: starting to create namespace for hosting the "capz-e2e-1jqvqn" test spec
2021/11/27 19:48:09 failed trying to get namespace (capz-e2e-1jqvqn):namespaces "capz-e2e-1jqvqn" not found
INFO: Creating namespace capz-e2e-1jqvqn
INFO: Creating event watcher for namespace "capz-e2e-1jqvqn"
Nov 27 19:48:09.879: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-1jqvqn-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-1jqvqn-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 104 lines ...
Nov 27 20:05:35.441: INFO: Collecting boot logs for AzureMachine capz-e2e-1jqvqn-win-vmss-control-plane-bgd7s

Nov 27 20:05:36.910: INFO: Collecting logs for node win-p-win000000 in cluster capz-e2e-1jqvqn-win-vmss in namespace capz-e2e-1jqvqn

Nov 27 20:05:56.663: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-e2e-1jqvqn-win-vmss-mp-0

Failed to get logs for machine pool capz-e2e-1jqvqn-win-vmss-mp-0, cluster capz-e2e-1jqvqn/capz-e2e-1jqvqn-win-vmss: [running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1]
Nov 27 20:05:57.189: INFO: Collecting logs for node win-p-win000000 in cluster capz-e2e-1jqvqn-win-vmss in namespace capz-e2e-1jqvqn

Nov 27 20:06:34.973: INFO: Collecting boot logs for VMSS instance 0 of scale set win-p-win

STEP: Dumping workload cluster capz-e2e-1jqvqn/capz-e2e-1jqvqn-win-vmss kube-system pod logs
STEP: Fetching kube-system pod logs took 774.825031ms
... skipping 7 lines ...
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-6t96b, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-windows-amd64-t6f5v, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-gkxkt, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-1jqvqn-win-vmss-control-plane-bgd7s, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-h7v6s, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-gb9z8, container kube-proxy
STEP: Got error while iterating over activity logs for resource group capz-e2e-1jqvqn-win-vmss: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000882894s
STEP: Dumping all the Cluster API resources in the "capz-e2e-1jqvqn" namespace
STEP: Deleting all clusters in the capz-e2e-1jqvqn namespace
STEP: Deleting cluster capz-e2e-1jqvqn-win-vmss
INFO: Waiting for the Cluster capz-e2e-1jqvqn/capz-e2e-1jqvqn-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-1jqvqn-win-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-gb9z8, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-1jqvqn-win-vmss-control-plane-bgd7s, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-xmmp5, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-gkxkt, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-1jqvqn-win-vmss-control-plane-bgd7s, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-x2lds, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-h7v6s, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-1jqvqn-win-vmss-control-plane-bgd7s, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-t6f5v, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-6t96b, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-1jqvqn-win-vmss-control-plane-bgd7s, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-mnsmn, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-1jqvqn
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 31m36s on Ginkgo node 2 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Sat, 27 Nov 2021 19:46:46 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-d7evyn" for hosting the cluster
Nov 27 19:46:46.558: INFO: starting to create namespace for hosting the "capz-e2e-d7evyn" test spec
2021/11/27 19:46:46 failed trying to get namespace (capz-e2e-d7evyn):namespaces "capz-e2e-d7evyn" not found
INFO: Creating namespace capz-e2e-d7evyn
INFO: Creating event watcher for namespace "capz-e2e-d7evyn"
Nov 27 19:46:46.591: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-d7evyn-win-ha
INFO: Creating the workload cluster with name "capz-e2e-d7evyn-win-ha" using the "windows" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 12 lines ...
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created
clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-d7evyn-win-ha-flannel created
configmap/cni-capz-e2e-d7evyn-win-ha-flannel created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1127 19:46:48.195312   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp: lookup capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 19:47:41.344566   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp: lookup capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by capz-e2e-d7evyn/capz-e2e-d7evyn-win-ha-control-plane to be provisioned
STEP: Waiting for one control plane node to exist
E1127 19:48:11.469423   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp: lookup capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 19:48:48.746467   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp: lookup capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 19:49:18.825139   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp: lookup capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 19:50:04.459282   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp: lookup capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be ready
INFO: Waiting for the remaining control plane machines managed by capz-e2e-d7evyn/capz-e2e-d7evyn-win-ha-control-plane to be provisioned
STEP: Waiting for all control plane nodes to exist
E1127 19:51:04.504759   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp: lookup capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 19:52:02.563708   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp: lookup capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 19:53:01.712920   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp: lookup capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 19:53:36.152114   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp: lookup capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 19:54:22.321777   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp: lookup capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 19:54:55.470938   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp: lookup capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane capz-e2e-d7evyn/capz-e2e-d7evyn-win-ha-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Waiting for the workload nodes to exist
INFO: Waiting for the machine pools to be provisioned
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/webrdf2zf to be available
Nov 27 19:55:28.958: INFO: starting to wait for deployment to become available
E1127 19:55:39.864357   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp: lookup capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 27 19:55:49.385: INFO: Deployment default/webrdf2zf is now available, took 20.42722567s
STEP: creating an internal Load Balancer service
Nov 27 19:55:49.386: INFO: starting to create an internal Load Balancer service
STEP: waiting for service default/webrdf2zf-ilb to be available
Nov 27 19:55:49.545: INFO: waiting for service default/webrdf2zf-ilb to be available
E1127 19:56:21.433428   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp: lookup capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 27 19:56:50.333: INFO: service default/webrdf2zf-ilb is available, took 1m0.787835158s
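
An internal Load Balancer service on Azure is an ordinary type: LoadBalancer Service carrying the azure-load-balancer-internal annotation, which tells the cloud provider to provision a private (VNet-only) frontend instead of a public one. A sketch (selector and port are assumptions):

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    var ilbService = &corev1.Service{
        ObjectMeta: metav1.ObjectMeta{
            Name:      "webrdf2zf-ilb",
            Namespace: "default",
            Annotations: map[string]string{
                // Azure cloud provider: provision an internal load balancer.
                "service.beta.kubernetes.io/azure-load-balancer-internal": "true",
            },
        },
        Spec: corev1.ServiceSpec{
            Type:     corev1.ServiceTypeLoadBalancer,
            Selector: map[string]string{"app": "webrdf2zf"}, // assumed pod label
            Ports:    []corev1.ServicePort{{Port: 80}},
        },
    }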
STEP: connecting to the internal LB service from a curl pod
Nov 27 19:56:50.504: INFO: starting to create a curl to ilb job
STEP: waiting for job default/curl-to-ilb-jobtq5iw to be complete
Nov 27 19:56:50.636: INFO: waiting for job default/curl-to-ilb-jobtq5iw to be complete
Nov 27 19:57:00.860: INFO: job default/curl-to-ilb-jobtq5iw is complete, took 10.224336879s
STEP: deleting the ilb test resources
Nov 27 19:57:00.860: INFO: deleting the ilb service: webrdf2zf-ilb
Nov 27 19:57:01.039: INFO: deleting the ilb job: curl-to-ilb-jobtq5iw
STEP: creating an external Load Balancer service
Nov 27 19:57:01.181: INFO: starting to create an external Load Balancer service
STEP: waiting for service default/webrdf2zf-elb to be available
Nov 27 19:57:01.331: INFO: waiting for service default/webrdf2zf-elb to be available
E1127 19:57:10.511850   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp: lookup capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 27 19:57:21.678: INFO: service default/webrdf2zf-elb is available, took 20.346412593s
STEP: connecting to the external LB service from a curl pod
Nov 27 19:57:21.789: INFO: starting to create curl-to-elb job
STEP: waiting for job default/curl-to-elb-jobd0jrevnkxkj to be complete
Nov 27 19:57:21.921: INFO: waiting for job default/curl-to-elb-jobd0jrevnkxkj to be complete
Nov 27 19:57:32.148: INFO: job default/curl-to-elb-jobd0jrevnkxkj is complete, took 10.226623839s
STEP: connecting directly to the external LB service
Nov 27 19:57:32.148: INFO: starting attempts to connect directly to the external LB service
2021/11/27 19:57:32 [DEBUG] GET http://20.86.217.102
E1127 19:57:41.522024   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp: lookup capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 27 19:57:47.643: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 27 19:57:47.643: INFO: starting to delete external LB service webrdf2zf-elb
Nov 27 19:57:47.827: INFO: starting to delete deployment webrdf2zf
Nov 27 19:57:47.960: INFO: starting to delete job curl-to-elb-jobd0jrevnkxkj
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web-windowsmhjod1 to be available
Nov 27 19:57:48.358: INFO: starting to wait for deployment to become available
E1127 19:58:19.593425   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp: lookup capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 19:58:58.287414   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp: lookup capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 19:59:40.361022   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp: lookup capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 20:00:27.064043   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp: lookup capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 20:01:11.542197   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp: lookup capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 27 20:01:51.364: INFO: Deployment default/web-windowsmhjod1 is now available, took 4m3.006384839s
STEP: creating an internal Load Balancer service
Nov 27 20:01:51.364: INFO: starting to create an internal Load Balancer service
STEP: waiting for service default/web-windowsmhjod1-ilb to be available
Nov 27 20:01:51.518: INFO: waiting for service default/web-windowsmhjod1-ilb to be available
E1127 20:01:53.815963   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp: lookup capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 27 20:02:11.855: INFO: service default/web-windowsmhjod1-ilb is available, took 20.337062936s
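The internal LB step boils down to a type=LoadBalancer Service carrying the Azure internal-LB annotation, which makes cloud-provider-azure provision a VNet-scoped frontend instead of a public IP. A sketch continuing the code above (same cs and ctx; the service name is illustrative):

svc := &corev1.Service{
	ObjectMeta: metav1.ObjectMeta{
		Name:      "web-ilb",
		Namespace: "default",
		// This annotation selects an internal (VNet-only) Azure load balancer.
		Annotations: map[string]string{
			"service.beta.kubernetes.io/azure-load-balancer-internal": "true",
		},
	},
	Spec: corev1.ServiceSpec{
		Type:     corev1.ServiceTypeLoadBalancer,
		Selector: map[string]string{"app": "web"},
		Ports:    []corev1.ServicePort{{Port: 80}},
	},
}
if _, err := cs.CoreV1().Services("default").Create(ctx, svc, metav1.CreateOptions{}); err != nil {
	panic(err)
}
// "Available" for a LoadBalancer service means the cloud controller has filled
// in an ingress IP: poll until svc.Status.LoadBalancer.Ingress is non-empty.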
STEP: connecting to the internal LB service from a curl pod
Nov 27 20:02:11.966: INFO: starting to create a curl to ilb job
STEP: waiting for job default/curl-to-ilb-job7npdz to be complete
Nov 27 20:02:12.086: INFO: waiting for job default/curl-to-ilb-job7npdz to be complete
Nov 27 20:02:22.311: INFO: job default/curl-to-ilb-job7npdz is complete, took 10.224975275s
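The curl job is a one-shot batch Job that hits the ILB's provisioned IP from inside the cluster, so completion proves in-cluster reachability of the internal frontend. A sketch reusing cs and ctx from above and additionally importing batchv1 "k8s.io/api/batch/v1"; the image and names are illustrative:

// Look up the ILB's frontend IP (assumes the availability poll above saw it).
ilbSvc, err := cs.CoreV1().Services("default").Get(ctx, "web-ilb", metav1.GetOptions{})
if err != nil || len(ilbSvc.Status.LoadBalancer.Ingress) == 0 {
	panic("internal LB ingress not provisioned yet")
}
ilbIP := ilbSvc.Status.LoadBalancer.Ingress[0].IP

job := &batchv1.Job{
	ObjectMeta: metav1.ObjectMeta{Name: "curl-to-ilb-job", Namespace: "default"},
	Spec: batchv1.JobSpec{
		Template: corev1.PodTemplateSpec{
			Spec: corev1.PodSpec{
				RestartPolicy: corev1.RestartPolicyOnFailure,
				Containers: []corev1.Container{{
					Name:    "curl",
					Image:   "curlimages/curl",
					Command: []string{"curl", "-sf", "http://" + ilbIP},
				}},
			},
		},
	},
}
if _, err := cs.BatchV1().Jobs("default").Create(ctx, job, metav1.CreateOptions{}); err != nil {
	panic(err)
}
// "Complete" means job.Status.Succeeded > 0, which the suite polls for.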
... skipping 6 lines ...
Nov 27 20:02:22.811: INFO: waiting for service default/web-windowsmhjod1-elb to be available
Nov 27 20:02:43.148: INFO: service default/web-windowsmhjod1-elb is available, took 20.336523447s
STEP: connecting to the external LB service from a curl pod
Nov 27 20:02:43.259: INFO: starting to create curl-to-elb job
STEP: waiting for job default/curl-to-elb-jobhhfysj8w64x to be complete
Nov 27 20:02:43.377: INFO: waiting for job default/curl-to-elb-jobhhfysj8w64x to be complete
E1127 20:02:52.029998   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp: lookup capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 27 20:02:53.602: INFO: job default/curl-to-elb-jobhhfysj8w64x is complete, took 10.224965792s
STEP: connecting directly to the external LB service
Nov 27 20:02:53.602: INFO: starting attempts to connect directly to the external LB service
2021/11/27 20:02:53 [DEBUG] GET http://20.86.222.79
2021/11/27 20:03:23 [ERR] GET http://20.86.222.79 request failed: Get "http://20.86.222.79": dial tcp 20.86.222.79:80: i/o timeout
2021/11/27 20:03:23 [DEBUG] GET http://20.86.222.79: retrying in 1s (4 left)
Nov 27 20:03:24.827: INFO: successfully connected to the external LB service
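The "[DEBUG] GET ... / [ERR] ... retrying in 1s (4 left)" lines above have the shape of hashicorp/go-retryablehttp output: the first direct GET to the freshly allocated public IP times out (the Azure LB data path often lags the IP allocation), and a retry succeeds. A minimal equivalent, assuming that library; the IP is the ELB address printed above:

package main

import (
	"fmt"

	"github.com/hashicorp/go-retryablehttp"
)

func main() {
	// NewClient's default logger produces exactly this "[DEBUG] GET ..." /
	// "[ERR] ... retrying in 1s (4 left)" style of output.
	client := retryablehttp.NewClient()
	client.RetryMax = 5

	resp, err := client.Get("http://20.86.222.79")
	if err != nil {
		panic(err) // all retries exhausted
	}
	defer resp.Body.Close()
	fmt.Println("connected to external LB, status:", resp.StatusCode)
}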
STEP: deleting the test resources
Nov 27 20:03:24.827: INFO: starting to delete external LB service web-windowsmhjod1-elb
Nov 27 20:03:25.196: INFO: starting to delete deployment web-windowsmhjod1
Nov 27 20:03:25.313: INFO: starting to delete job curl-to-elb-jobhhfysj8w64x
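Teardown mirrors creation: delete the Service, the Deployment, and the Job. A sketch continuing the earlier code (same cs and ctx; names illustrative), using background propagation so the pods owned by the Deployment and Job are garbage-collected too:

policy := metav1.DeletePropagationBackground
opts := metav1.DeleteOptions{PropagationPolicy: &policy}
_ = cs.CoreV1().Services("default").Delete(ctx, "web-elb", opts)
_ = cs.AppsV1().Deployments("default").Delete(ctx, "web", opts)
_ = cs.BatchV1().Jobs("default").Delete(ctx, "curl-to-elb-job", opts)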
STEP: Dumping logs from the "capz-e2e-d7evyn-win-ha" workload cluster
STEP: Dumping workload cluster capz-e2e-d7evyn/capz-e2e-d7evyn-win-ha logs
Nov 27 20:03:26.168: INFO: Collecting logs for node capz-e2e-d7evyn-win-ha-control-plane-f6flm in cluster capz-e2e-d7evyn-win-ha in namespace capz-e2e-d7evyn

E1127 20:03:37.506305   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp: lookup capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 27 20:03:39.109: INFO: Collecting boot logs for AzureMachine capz-e2e-d7evyn-win-ha-control-plane-f6flm

Nov 27 20:03:40.510: INFO: Collecting logs for node capz-e2e-d7evyn-win-ha-control-plane-w955c in cluster capz-e2e-d7evyn-win-ha in namespace capz-e2e-d7evyn

Nov 27 20:03:52.217: INFO: Collecting boot logs for AzureMachine capz-e2e-d7evyn-win-ha-control-plane-w955c

Nov 27 20:03:52.741: INFO: Collecting logs for node capz-e2e-d7evyn-win-ha-control-plane-hc5lw in cluster capz-e2e-d7evyn-win-ha in namespace capz-e2e-d7evyn

Nov 27 20:04:04.662: INFO: Collecting boot logs for AzureMachine capz-e2e-d7evyn-win-ha-control-plane-hc5lw

Nov 27 20:04:05.184: INFO: Collecting logs for node capz-e2e-d7evyn-win-ha-md-0-gxbx6 in cluster capz-e2e-d7evyn-win-ha in namespace capz-e2e-d7evyn

E1127 20:04:10.472860   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp: lookup capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 27 20:04:18.459: INFO: Collecting boot logs for AzureMachine capz-e2e-d7evyn-win-ha-md-0-gxbx6

Nov 27 20:04:18.911: INFO: Collecting logs for node 10.1.0.5 in cluster capz-e2e-d7evyn-win-ha in namespace capz-e2e-d7evyn

E1127 20:04:51.669938   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp: lookup capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 27 20:05:02.375: INFO: Collecting boot logs for AzureMachine capz-e2e-d7evyn-win-ha-md-win-6pzmr
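"Collecting boot logs" for an AzureMachine means pulling the VM's boot-diagnostics output from Azure. The following is a hypothetical sketch only, not the suite's actual collector: it assumes the track-2 Azure SDK for Go (armcompute) and its RetrieveBootDiagnosticsData call, with the resource group and VM name taken from the lines above:

package main

import (
	"context"
	"fmt"
	"os"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/compute/armcompute"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		panic(err)
	}
	vms, err := armcompute.NewVirtualMachinesClient(os.Getenv("AZURE_SUBSCRIPTION_ID"), cred, nil)
	if err != nil {
		panic(err)
	}
	// Ask Azure for short-lived SAS URIs pointing at the VM's boot-diagnostics
	// blobs (serial console log and console screenshot).
	res, err := vms.RetrieveBootDiagnosticsData(context.Background(),
		"capz-e2e-d7evyn-win-ha", "capz-e2e-d7evyn-win-ha-control-plane-f6flm", nil)
	if err != nil {
		panic(err)
	}
	fmt.Println("serial console log blob:", *res.SerialConsoleLogBlobURI)
}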

STEP: Dumping workload cluster capz-e2e-d7evyn/capz-e2e-d7evyn-win-ha kube-system pod logs
STEP: Fetching kube-system pod logs took 928.895921ms
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-d7evyn-win-ha-control-plane-hc5lw, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-p4fbr, container kube-flannel
... skipping 17 lines ...
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-d7evyn-win-ha-control-plane-f6flm, container etcd
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-n4x65, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-proxy-ll9f4, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-s9n4c, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-windows-amd64-hbqd8, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-d7evyn-win-ha-control-plane-w955c, container kube-controller-manager
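Each "log watcher" streams one container's logs with Follow=true and copies the stream to disk; when the connection to the workload cluster drops, the copy ends with errors like the "http2: client connection lost" lines later in this log. A sketch reusing cs and ctx from the earlier code, plus io, os, and fmt; the pod and container names come from the watcher lines above and the output path is illustrative:

req := cs.CoreV1().Pods("kube-system").GetLogs(
	"kube-apiserver-capz-e2e-d7evyn-win-ha-control-plane-hc5lw",
	&corev1.PodLogOptions{Container: "kube-apiserver", Follow: true})
stream, err := req.Stream(ctx)
if err != nil {
	panic(err)
}
defer stream.Close()

out, err := os.Create("kube-apiserver.log")
if err != nil {
	panic(err)
}
defer out.Close()

// Copy until the stream ends: either the pod goes away or the connection
// drops, which is what "http2: client connection lost" reports.
if _, err := io.Copy(out, stream); err != nil {
	fmt.Println("log stream ended:", err)
}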
E1127 20:05:27.382336   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp: lookup capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Got error while iterating over activity logs for resource group capz-e2e-d7evyn-win-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000231247s
STEP: Dumping all the Cluster API resources in the "capz-e2e-d7evyn" namespace
STEP: Deleting all clusters in the capz-e2e-d7evyn namespace
STEP: Deleting cluster capz-e2e-d7evyn-win-ha
INFO: Waiting for the Cluster capz-e2e-d7evyn/capz-e2e-d7evyn-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-d7evyn-win-ha to be deleted
E1127 20:06:16.038756   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp: lookup capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 7 lines ...
E1127 20:12:18.942490   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp: lookup capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-mccsf, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-hbqd8, container kube-flannel: http2: client connection lost
E1127 20:13:06.181708   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp: lookup capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 7 lines ...
E1127 20:19:06.554937   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp: lookup capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-d7evyn
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1127 20:19:46.437500   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp: lookup capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1127 20:20:17.323368   24215 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7ss9gc/events?resourceVersion=10407": dial tcp: lookup capz-e2e-7ss9gc-public-custom-vnet-77b10843.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 34m13s on Ginkgo node 1 of 3


• [SLOW TEST:2053.358 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 5 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Workload cluster creation Creating an AKS cluster [It] with a single control plane node and 1 node 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/aks.go:216

Ran 9 of 22 Specs in 5922.658 seconds
FAIL! -- 8 Passed | 1 Failed | 0 Pending | 13 Skipped


Ginkgo ran 1 suite in 1h40m3.511654561s
Test Suite Failed
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...