Result: FAILURE
Tests: 1 failed / 8 succeeded
Started: 2021-11-09 18:30
Elapsed: 2h8m
Revision: release-0.5

Test Failures


capz-e2e Workload cluster creation Creating an AKS cluster with a single control plane node and 1 node (58m28s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\san\sAKS\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454
Timed out after 1200.001s.
System machine pools not ready
Expected
    <bool>: false
to equal
    <bool>: true
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/aks.go:216
				
(full stdout/stderr in junit.e2e_suite.1.xml)
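
The timeout above has the standard shape of a failed Gomega Eventually poll: a boolean condition re-checked for 1200s (20 minutes) that never flipped to true. A minimal sketch of that pattern follows, assuming a hypothetical allSystemMachinePoolsReady helper and illustrative polling intervals; it is not the actual code at test/e2e/aks.go:216.

package e2e_sketch

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
)

// allSystemMachinePoolsReady is a hypothetical stand-in for the real readiness
// check: it would compare ready vs. desired replicas on every system
// MachinePool of the AKS workload cluster.
func allSystemMachinePoolsReady(ctx context.Context) bool {
	// ... list MachinePool objects and inspect their status here ...
	return false
}

// waitForSystemMachinePools shows the Eventually pattern: poll every 30s for
// up to 20 minutes (1200s); on timeout Gomega prints the description string
// followed by the final "Expected <bool>: false to equal <bool>: true" diff.
func waitForSystemMachinePools(ctx context.Context) {
	Eventually(func() bool {
		return allSystemMachinePoolsReady(ctx)
	}, 20*time.Minute, 30*time.Second).Should(Equal(true), "System machine pools not ready")
}

With this pattern, the "System machine pools not ready" line in the report comes from the Should description, and the boolean Expected/to-equal lines come from the Equal matcher on the last poll before the deadline.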



8 Passed Tests

13 Skipped Tests

Error lines from build-log.txt

... skipping 434 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:269

INFO: "With ipv6 worker node" started at Tue, 09 Nov 2021 18:37:30 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-z7tast" for hosting the cluster
Nov  9 18:37:30.884: INFO: starting to create namespace for hosting the "capz-e2e-z7tast" test spec
2021/11/09 18:37:30 failed trying to get namespace (capz-e2e-z7tast):namespaces "capz-e2e-z7tast" not found
INFO: Creating namespace capz-e2e-z7tast
INFO: Creating event watcher for namespace "capz-e2e-z7tast"
Nov  9 18:37:30.953: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-z7tast-ipv6
INFO: Creating the workload cluster with name "capz-e2e-z7tast-ipv6" using the "ipv6" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
STEP: Fetching activity logs took 565.163121ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-z7tast" namespace
STEP: Deleting all clusters in the capz-e2e-z7tast namespace
STEP: Deleting cluster capz-e2e-z7tast-ipv6
INFO: Waiting for the Cluster capz-e2e-z7tast/capz-e2e-z7tast-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-z7tast-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-58v8r, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-7szcd, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-z7tast-ipv6-control-plane-wh2sw, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-m5xch, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-z7tast-ipv6-control-plane-wh2sw, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-z7tast-ipv6-control-plane-949rl, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-z7tast-ipv6-control-plane-n9bjq, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-z7tast-ipv6-control-plane-n9bjq, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-z7tast-ipv6-control-plane-949rl, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-tlbcq, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-z7tast-ipv6-control-plane-n9bjq, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-z7tast-ipv6-control-plane-949rl, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-z7tast-ipv6-control-plane-949rl, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-dwmbd, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-n6qpk, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-bk6r9, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-spvcx, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-5kbql, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-z7tast-ipv6-control-plane-wh2sw, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-ffr7p, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-sxv2x, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-z7tast-ipv6-control-plane-wh2sw, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-z7tast-ipv6-control-plane-n9bjq, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-z7tast
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 17m52s on Ginkgo node 3 of 3

... skipping 10 lines ...
  with a single control plane node and an AzureMachinePool with 2 nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:315

INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" started at Tue, 09 Nov 2021 18:55:22 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-c6pygm" for hosting the cluster
Nov  9 18:55:22.512: INFO: starting to create namespace for hosting the "capz-e2e-c6pygm" test spec
2021/11/09 18:55:22 failed trying to get namespace (capz-e2e-c6pygm):namespaces "capz-e2e-c6pygm" not found
INFO: Creating namespace capz-e2e-c6pygm
INFO: Creating event watcher for namespace "capz-e2e-c6pygm"
Nov  9 18:55:22.543: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-c6pygm-vmss
INFO: Creating the workload cluster with name "capz-e2e-c6pygm-vmss" using the "machine-pool" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 128 lines ...
  With 3 control-plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:203

INFO: "With 3 control-plane nodes and 2 worker nodes" started at Tue, 09 Nov 2021 18:37:30 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-1kwip8" for hosting the cluster
Nov  9 18:37:30.880: INFO: starting to create namespace for hosting the "capz-e2e-1kwip8" test spec
2021/11/09 18:37:30 failed trying to get namespace (capz-e2e-1kwip8):namespaces "capz-e2e-1kwip8" not found
INFO: Creating namespace capz-e2e-1kwip8
INFO: Creating event watcher for namespace "capz-e2e-1kwip8"
Nov  9 18:37:30.955: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-1kwip8-ha
INFO: Creating the workload cluster with name "capz-e2e-1kwip8-ha" using the "(default)" template (Kubernetes v1.22.1, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 67 lines ...
Nov  9 18:48:46.135: INFO: starting to delete external LB service webnna16w-elb
Nov  9 18:48:46.282: INFO: starting to delete deployment webnna16w
Nov  9 18:48:46.396: INFO: starting to delete job curl-to-elb-job7atpu0cpq98
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Nov  9 18:48:46.552: INFO: starting to create dev deployment namespace
2021/11/09 18:48:46 failed trying to get namespace (development):namespaces "development" not found
2021/11/09 18:48:46 namespace development does not exist, creating...
STEP: Creating production namespace
Nov  9 18:48:46.775: INFO: starting to create prod deployment namespace
2021/11/09 18:48:46 failed trying to get namespace (production):namespaces "production" not found
2021/11/09 18:48:46 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
Nov  9 18:48:46.992: INFO: starting to create frontend-prod deployments
Nov  9 18:48:47.102: INFO: starting to create frontend-dev deployments
Nov  9 18:48:47.218: INFO: starting to create backend deployments
Nov  9 18:48:47.327: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Nov  9 18:49:14.000: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.66.132 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov  9 18:51:24.967: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
STEP: Applying a network policy to deny egress access in development namespace
Nov  9 18:51:25.437: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.66.132 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.66.132 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov  9 18:55:47.110: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Nov  9 18:55:47.494: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.125.3 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov  9 18:58:01.027: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
Nov  9 18:58:01.405: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.125.1 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.125.3 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov  9 19:02:25.219: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Nov  9 19:02:25.611: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.66.132 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov  9 19:04:37.544: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
Nov  9 19:04:37.923: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.66.132 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: Dumping logs from the "capz-e2e-1kwip8-ha" workload cluster
STEP: Dumping workload cluster capz-e2e-1kwip8/capz-e2e-1kwip8-ha logs
Nov  9 19:06:49.445: INFO: INFO: Collecting logs for node capz-e2e-1kwip8-ha-control-plane-wnc8s in cluster capz-e2e-1kwip8-ha in namespace capz-e2e-1kwip8

Nov  9 19:07:01.108: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-1kwip8-ha-control-plane-wnc8s
... skipping 39 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-g5p5n, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-1kwip8-ha-control-plane-wdqlv, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-4dh2w, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-1kwip8-ha-control-plane-4pmds, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-hzpj5, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-ddfvq, container calico-node
STEP: Got error while iterating over activity logs for resource group capz-e2e-1kwip8-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.00067655s
STEP: Dumping all the Cluster API resources in the "capz-e2e-1kwip8" namespace
STEP: Deleting all clusters in the capz-e2e-1kwip8 namespace
STEP: Deleting cluster capz-e2e-1kwip8-ha
INFO: Waiting for the Cluster capz-e2e-1kwip8/capz-e2e-1kwip8-ha to be deleted
STEP: Waiting for cluster capz-e2e-1kwip8-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-1kwip8-ha-control-plane-wnc8s, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-l7f66, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-tbgpn, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-gjvfz, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-1kwip8-ha-control-plane-wnc8s, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-g44mv, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-1kwip8-ha-control-plane-wdqlv, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-vclbq, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-rltfr, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5tvc4, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-1kwip8-ha-control-plane-wnc8s, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-1kwip8-ha-control-plane-wdqlv, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-1kwip8-ha-control-plane-wdqlv, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-hzpj5, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-4dh2w, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-1kwip8-ha-control-plane-wdqlv, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-hlskz, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-qx7b4, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-1kwip8-ha-control-plane-wnc8s, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-1kwip8
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 worker nodes" ran for 43m11s on Ginkgo node 2 of 3

... skipping 8 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:141

INFO: "Creates a public management cluster in the same vnet" started at Tue, 09 Nov 2021 18:37:30 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-131k0q" for hosting the cluster
Nov  9 18:37:30.852: INFO: starting to create namespace for hosting the "capz-e2e-131k0q" test spec
2021/11/09 18:37:30 failed trying to get namespace (capz-e2e-131k0q):namespaces "capz-e2e-131k0q" not found
INFO: Creating namespace capz-e2e-131k0q
INFO: Creating event watcher for namespace "capz-e2e-131k0q"
Nov  9 18:37:30.892: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-131k0q-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-131k0q-public-custom-vnet-control-plane-qdwpz, container kube-apiserver
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-2lt28, container coredns
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-6lcbr, container coredns
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-131k0q-public-custom-vnet-control-plane-qdwpz, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-131k0q-public-custom-vnet-control-plane-qdwpz, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-mmgj5, container kube-proxy
STEP: Got error while iterating over activity logs for resource group capz-e2e-131k0q-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001217144s
STEP: Dumping all the Cluster API resources in the "capz-e2e-131k0q" namespace
STEP: Deleting all clusters in the capz-e2e-131k0q namespace
STEP: Deleting cluster capz-e2e-131k0q-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-131k0q/capz-e2e-131k0q-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-131k0q-public-custom-vnet to be deleted
W1109 19:31:24.467455   24338 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I1109 19:31:55.936672   24338 trace.go:205] Trace[1438175949]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (09-Nov-2021 19:31:25.935) (total time: 30001ms):
Trace[1438175949]: [30.001009185s] [30.001009185s] END
E1109 19:31:55.936734   24338 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-131k0q/events?resourceVersion=8077": dial tcp 20.82.173.155:6443: i/o timeout
I1109 19:32:28.884294   24338 trace.go:205] Trace[913686809]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (09-Nov-2021 19:31:58.883) (total time: 30000ms):
Trace[913686809]: [30.000691604s] [30.000691604s] END
E1109 19:32:28.884356   24338 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-131k0q/events?resourceVersion=8077": dial tcp 20.82.173.155:6443: i/o timeout
I1109 19:33:04.354515   24338 trace.go:205] Trace[99002243]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (09-Nov-2021 19:32:34.353) (total time: 30001ms):
Trace[99002243]: [30.001113005s] [30.001113005s] END
E1109 19:33:04.354574   24338 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-131k0q/events?resourceVersion=8077": dial tcp 20.82.173.155:6443: i/o timeout
I1109 19:33:47.053835   24338 trace.go:205] Trace[839545664]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (09-Nov-2021 19:33:17.051) (total time: 30002ms):
Trace[839545664]: [30.002654171s] [30.002654171s] END
E1109 19:33:47.053894   24338 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-131k0q/events?resourceVersion=8077": dial tcp 20.82.173.155:6443: i/o timeout
I1109 19:34:40.936038   24338 trace.go:205] Trace[672521952]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (09-Nov-2021 19:34:10.934) (total time: 30001ms):
Trace[672521952]: [30.001574157s] [30.001574157s] END
E1109 19:34:40.936094   24338 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-131k0q/events?resourceVersion=8077": dial tcp 20.82.173.155:6443: i/o timeout
I1109 19:35:36.912297   24338 trace.go:205] Trace[1339958829]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (09-Nov-2021 19:35:06.910) (total time: 30001ms):
Trace[1339958829]: [30.001562559s] [30.001562559s] END
E1109 19:35:36.912354   24338 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-131k0q/events?resourceVersion=8077": dial tcp 20.82.173.155:6443: i/o timeout
E1109 19:36:19.477159   24338 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-131k0q/events?resourceVersion=8077": dial tcp: lookup capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-131k0q
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Nov  9 19:36:47.128: INFO: deleting an existing virtual network "custom-vnet"
Nov  9 19:36:58.309: INFO: deleting an existing route table "node-routetable"
Nov  9 19:37:09.171: INFO: deleting an existing network security group "node-nsg"
E1109 19:37:09.688723   24338 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-131k0q/events?resourceVersion=8077": dial tcp: lookup capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov  9 19:37:19.777: INFO: deleting an existing network security group "control-plane-nsg"
Nov  9 19:37:30.327: INFO: verifying the existing resource group "capz-e2e-131k0q-public-custom-vnet" is empty
Nov  9 19:37:30.554: INFO: deleting the existing resource group "capz-e2e-131k0q-public-custom-vnet"
E1109 19:37:46.250667   24338 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-131k0q/events?resourceVersion=8077": dial tcp: lookup capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1109 19:38:39.923094   24338 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-131k0q/events?resourceVersion=8077": dial tcp: lookup capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1109 19:39:24.344286   24338 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-131k0q/events?resourceVersion=8077": dial tcp: lookup capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1109 19:40:00.666181   24338 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-131k0q/events?resourceVersion=8077": dial tcp: lookup capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1109 19:40:31.007469   24338 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-131k0q/events?resourceVersion=8077": dial tcp: lookup capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 1h3m31s on Ginkgo node 1 of 3


• [SLOW TEST:3811.123 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a 1 control plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:419

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Tue, 09 Nov 2021 19:20:42 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-fj3vh5" for hosting the cluster
Nov  9 19:20:42.044: INFO: starting to create namespace for hosting the "capz-e2e-fj3vh5" test spec
2021/11/09 19:20:42 failed trying to get namespace (capz-e2e-fj3vh5):namespaces "capz-e2e-fj3vh5" not found
INFO: Creating namespace capz-e2e-fj3vh5
INFO: Creating event watcher for namespace "capz-e2e-fj3vh5"
Nov  9 19:20:42.080: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-fj3vh5-oot
INFO: Creating the workload cluster with name "capz-e2e-fj3vh5-oot" using the "external-cloud-provider" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 53 lines ...
STEP: waiting for job default/curl-to-elb-jobj1ztca82nml to be complete
Nov  9 19:28:57.263: INFO: waiting for job default/curl-to-elb-jobj1ztca82nml to be complete
Nov  9 19:29:07.471: INFO: job default/curl-to-elb-jobj1ztca82nml is complete, took 10.207905203s
STEP: connecting directly to the external LB service
Nov  9 19:29:07.471: INFO: starting attempts to connect directly to the external LB service
2021/11/09 19:29:07 [DEBUG] GET http://20.93.41.115
2021/11/09 19:29:37 [ERR] GET http://20.93.41.115 request failed: Get "http://20.93.41.115": dial tcp 20.93.41.115:80: i/o timeout
2021/11/09 19:29:37 [DEBUG] GET http://20.93.41.115: retrying in 1s (4 left)
Nov  9 19:29:38.674: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov  9 19:29:38.674: INFO: starting to delete external LB service webn5swvm-elb
Nov  9 19:29:38.808: INFO: starting to delete deployment webn5swvm
Nov  9 19:29:38.913: INFO: starting to delete job curl-to-elb-jobj1ztca82nml
... skipping 34 lines ...
STEP: Fetching activity logs took 614.287909ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-fj3vh5" namespace
STEP: Deleting all clusters in the capz-e2e-fj3vh5 namespace
STEP: Deleting cluster capz-e2e-fj3vh5-oot
INFO: Waiting for the Cluster capz-e2e-fj3vh5/capz-e2e-fj3vh5-oot to be deleted
STEP: Waiting for cluster capz-e2e-fj3vh5-oot to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-fj3vh5-oot-control-plane-zwgq4, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-mlghm, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-fj3vh5-oot-control-plane-zwgq4, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-fj3vh5-oot-control-plane-zwgq4, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-controller-manager, container cloud-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-rdczt, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-nvtpn, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-ccm7w, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-kznjd, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-ppl48, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-fj3vh5-oot-control-plane-zwgq4, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-fj3vh5
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 23m12s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:377

INFO: "with a single control plane node and 1 node" started at Tue, 09 Nov 2021 19:14:24 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-njwdb9" for hosting the cluster
Nov  9 19:14:24.017: INFO: starting to create namespace for hosting the "capz-e2e-njwdb9" test spec
2021/11/09 19:14:24 failed trying to get namespace (capz-e2e-njwdb9):namespaces "capz-e2e-njwdb9" not found
INFO: Creating namespace capz-e2e-njwdb9
INFO: Creating event watcher for namespace "capz-e2e-njwdb9"
Nov  9 19:14:24.058: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-njwdb9-gpu
INFO: Creating the workload cluster with name "capz-e2e-njwdb9-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 80 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Tue, 09 Nov 2021 19:43:53 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-oyez67" for hosting the cluster
Nov  9 19:43:53.965: INFO: starting to create namespace for hosting the "capz-e2e-oyez67" test spec
2021/11/09 19:43:53 failed trying to get namespace (capz-e2e-oyez67):namespaces "capz-e2e-oyez67" not found
INFO: Creating namespace capz-e2e-oyez67
INFO: Creating event watcher for namespace "capz-e2e-oyez67"
Nov  9 19:43:54.036: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-oyez67-win-ha
INFO: Creating the workload cluster with name "capz-e2e-oyez67-win-ha" using the "windows" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 91 lines ...
STEP: waiting for job default/curl-to-elb-jobavmbowc3bcv to be complete
Nov  9 19:57:33.995: INFO: waiting for job default/curl-to-elb-jobavmbowc3bcv to be complete
Nov  9 19:57:44.204: INFO: job default/curl-to-elb-jobavmbowc3bcv is complete, took 10.209184904s
STEP: connecting directly to the external LB service
Nov  9 19:57:44.204: INFO: starting attempts to connect directly to the external LB service
2021/11/09 19:57:44 [DEBUG] GET http://20.67.183.235
2021/11/09 19:58:14 [ERR] GET http://20.67.183.235 request failed: Get "http://20.67.183.235": dial tcp 20.67.183.235:80: i/o timeout
2021/11/09 19:58:14 [DEBUG] GET http://20.67.183.235: retrying in 1s (4 left)
Nov  9 19:58:15.413: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov  9 19:58:15.413: INFO: starting to delete external LB service web-windowsk7g24c-elb
Nov  9 19:58:15.570: INFO: starting to delete deployment web-windowsk7g24c
Nov  9 19:58:15.679: INFO: starting to delete job curl-to-elb-jobavmbowc3bcv
... skipping 43 lines ...
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-oyez67-win-ha-control-plane-6jdfv, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-mrdzl, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-oyez67-win-ha-control-plane-xkcjb, container kube-scheduler
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-oyez67-win-ha-control-plane-xkcjb, container etcd
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-oyez67-win-ha-control-plane-pllnt, container etcd
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-w9t7k, container kube-flannel
STEP: Got error while iterating over activity logs for resource group capz-e2e-oyez67-win-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001413323s
STEP: Dumping all the Cluster API resources in the "capz-e2e-oyez67" namespace
STEP: Deleting all clusters in the capz-e2e-oyez67 namespace
STEP: Deleting cluster capz-e2e-oyez67-win-ha
INFO: Waiting for the Cluster capz-e2e-oyez67/capz-e2e-oyez67-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-oyez67-win-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-bw2gg, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-b8sqf, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-oyez67-win-ha-control-plane-pllnt, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-oyez67-win-ha-control-plane-6jdfv, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-z82dq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-oyez67-win-ha-control-plane-pllnt, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-wth96, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-dvr9k, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-oyez67-win-ha-control-plane-pllnt, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-n2s49, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-mrdzl, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-s2x6h, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-oyez67-win-ha-control-plane-6jdfv, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-oyez67-win-ha-control-plane-pllnt, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-oyez67-win-ha-control-plane-6jdfv, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-oyez67-win-ha-control-plane-6jdfv, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-oyez67
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 36m35s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543

INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Tue, 09 Nov 2021 19:44:19 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-x154wx" for hosting the cluster
Nov  9 19:44:19.381: INFO: starting to create namespace for hosting the "capz-e2e-x154wx" test spec
2021/11/09 19:44:19 failed trying to get namespace (capz-e2e-x154wx):namespaces "capz-e2e-x154wx" not found
INFO: Creating namespace capz-e2e-x154wx
INFO: Creating event watcher for namespace "capz-e2e-x154wx"
Nov  9 19:44:19.420: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-x154wx-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-x154wx-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 53 lines ...
STEP: waiting for job default/curl-to-elb-jobhjxgyap9pj5 to be complete
Nov  9 19:54:55.981: INFO: waiting for job default/curl-to-elb-jobhjxgyap9pj5 to be complete
Nov  9 19:55:06.187: INFO: job default/curl-to-elb-jobhjxgyap9pj5 is complete, took 10.20627941s
STEP: connecting directly to the external LB service
Nov  9 19:55:06.187: INFO: starting attempts to connect directly to the external LB service
2021/11/09 19:55:06 [DEBUG] GET http://52.155.178.119
2021/11/09 19:55:36 [ERR] GET http://52.155.178.119 request failed: Get "http://52.155.178.119": dial tcp 52.155.178.119:80: i/o timeout
2021/11/09 19:55:36 [DEBUG] GET http://52.155.178.119: retrying in 1s (4 left)
Nov  9 19:55:52.748: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov  9 19:55:52.748: INFO: starting to delete external LB service web6e1dsg-elb
Nov  9 19:55:52.870: INFO: starting to delete deployment web6e1dsg
Nov  9 19:55:52.973: INFO: starting to delete job curl-to-elb-jobhjxgyap9pj5
... skipping 25 lines ...
STEP: waiting for job default/curl-to-elb-jobb5m4jv05a2f to be complete
Nov  9 20:03:39.734: INFO: waiting for job default/curl-to-elb-jobb5m4jv05a2f to be complete
Nov  9 20:03:49.940: INFO: job default/curl-to-elb-jobb5m4jv05a2f is complete, took 10.206095132s
STEP: connecting directly to the external LB service
Nov  9 20:03:49.940: INFO: starting attempts to connect directly to the external LB service
2021/11/09 20:03:49 [DEBUG] GET http://20.67.182.88
2021/11/09 20:04:19 [ERR] GET http://20.67.182.88 request failed: Get "http://20.67.182.88": dial tcp 20.67.182.88:80: i/o timeout
2021/11/09 20:04:19 [DEBUG] GET http://20.67.182.88: retrying in 1s (4 left)
2021/11/09 20:04:50 [ERR] GET http://20.67.182.88 request failed: Get "http://20.67.182.88": dial tcp 20.67.182.88:80: i/o timeout
2021/11/09 20:04:50 [DEBUG] GET http://20.67.182.88: retrying in 2s (3 left)
Nov  9 20:04:53.150: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov  9 20:04:53.150: INFO: starting to delete external LB service web-windowsjn15fe-elb
Nov  9 20:04:53.284: INFO: starting to delete deployment web-windowsjn15fe
Nov  9 20:04:53.389: INFO: starting to delete job curl-to-elb-jobb5m4jv05a2f
... skipping 23 lines ...
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-pp5zm, container kube-flannel
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-k47kk, container coredns
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-x154wx-win-vmss-control-plane-6vpt9, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-windows-amd64-6p5g9, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-proxy-79dnw, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-x154wx-win-vmss-control-plane-6vpt9, container kube-scheduler
STEP: Got error while iterating over activity logs for resource group capz-e2e-x154wx-win-vmss: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000932502s
STEP: Dumping all the Cluster API resources in the "capz-e2e-x154wx" namespace
STEP: Deleting all clusters in the capz-e2e-x154wx namespace
STEP: Deleting cluster capz-e2e-x154wx-win-vmss
INFO: Waiting for the Cluster capz-e2e-x154wx/capz-e2e-x154wx-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-x154wx-win-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-xr7qv, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-6p5g9, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-pp5zm, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-79dnw, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-x154wx
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 41m5s on Ginkgo node 3 of 3

... skipping 3 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating a Windows enabled VMSS cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:542
    with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543
------------------------------
{"component":"entrypoint","file":"prow/entrypoint/run.go:165","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","severity":"error","time":"2021-11-09T20:30:36Z"}
++ early_exit_handler
++ '[' -n 153 ']'
++ kill -TERM 153
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 19 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454

INFO: "with a single control plane node and 1 node" started at Tue, 09 Nov 2021 19:41:01 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-xvnp2k" for hosting the cluster
Nov  9 19:41:01.983: INFO: starting to create namespace for hosting the "capz-e2e-xvnp2k" test spec
2021/11/09 19:41:01 failed trying to get namespace (capz-e2e-xvnp2k):namespaces "capz-e2e-xvnp2k" not found
INFO: Creating namespace capz-e2e-xvnp2k
INFO: Creating event watcher for namespace "capz-e2e-xvnp2k"
Nov  9 19:41:02.033: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-xvnp2k-aks
INFO: Creating the workload cluster with name "capz-e2e-xvnp2k-aks" using the "aks-multi-tenancy" template (Kubernetes v1.19.13, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 7 lines ...
machinepool.cluster.x-k8s.io/agentpool1 created
azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1109 19:41:03.810190   24338 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-131k0q/events?resourceVersion=8077": dial tcp: lookup capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1109 19:41:45.299230   24338 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-131k0q/events?resourceVersion=8077": dial tcp: lookup capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1109 19:42:27.528054   24338 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-131k0q/events?resourceVersion=8077": dial tcp: lookup capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1109 19:43:14.517301   24338 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-131k0q/events?resourceVersion=8077": dial tcp: lookup capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1109 19:44:02.582601   24338 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-131k0q/events?resourceVersion=8077": dial tcp: lookup capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1109 19:44:51.491391   24338 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-131k0q/events?resourceVersion=8077": dial tcp: lookup capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1109 19:45:27.492903   24338 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-131k0q/events?resourceVersion=8077": dial tcp: lookup capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
Nov  9 19:45:45.613: INFO: Waiting for the first control plane machine managed by capz-e2e-xvnp2k/capz-e2e-xvnp2k-aks to be provisioned
STEP: Waiting for atleast one control plane node to exist
E1109 19:46:22.834945   24338 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-131k0q/events?resourceVersion=8077": dial tcp: lookup capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1109 19:47:04.882264   24338 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-131k0q/events?resourceVersion=8077": dial tcp: lookup capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1109 19:47:51.321982   24338 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-131k0q/events?resourceVersion=8077": dial tcp: lookup capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 22 lines ...
E1109 20:05:40.008354   24338 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-131k0q/events?resourceVersion=8077": dial tcp: lookup capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Dumping logs from the "capz-e2e-xvnp2k-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-xvnp2k/capz-e2e-xvnp2k-aks logs
Nov  9 20:05:45.708: INFO: INFO: Collecting logs for node aks-agentpool1-17736427-vmss000000 in cluster capz-e2e-xvnp2k-aks in namespace capz-e2e-xvnp2k

E1109 20:06:19.841909   24338 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-131k0q/events?resourceVersion=8077": dial tcp: lookup capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1109 20:07:03.112616   24338 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-131k0q/events?resourceVersion=8077": dial tcp: lookup capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1109 20:07:38.384367   24338 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-131k0q/events?resourceVersion=8077": dial tcp: lookup capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov  9 20:07:55.494: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool1, cluster capz-e2e-xvnp2k/capz-e2e-xvnp2k-aks: [dialing public load balancer at capz-e2e-xvnp2k-aks-8f88ee50.hcp.northeurope.azmk8s.io: dial tcp 20.54.34.138:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
STEP: Dumping workload cluster capz-e2e-xvnp2k/capz-e2e-xvnp2k-aks kube-system pod logs
STEP: Fetching kube-system pod logs took 968.656117ms
STEP: Creating log watcher for controller kube-system/calico-typha-deployment-76cb9744d8-4h4df, container calico-typha
STEP: Creating log watcher for controller kube-system/kube-proxy-2pqtc, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-nzxp9, container kube-proxy
STEP: Creating log watcher for controller kube-system/metrics-server-569f6547dd-bqbpc, container metrics-server
... skipping 8 lines ...
STEP: Fetching activity logs took 600.055624ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-xvnp2k" namespace
STEP: Deleting all clusters in the capz-e2e-xvnp2k namespace
STEP: Deleting cluster capz-e2e-xvnp2k-aks
INFO: Waiting for the Cluster capz-e2e-xvnp2k/capz-e2e-xvnp2k-aks to be deleted
STEP: Waiting for cluster capz-e2e-xvnp2k-aks to be deleted
E1109 20:08:24.560400   24338 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-131k0q/events?resourceVersion=8077": dial tcp: lookup capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 29 lines ...
E1109 20:30:31.342903   24338 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-131k0q/events?resourceVersion=8077": dial tcp: lookup capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
W1109 20:30:36.657236   24338 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: watch of *v1.Event ended with: very short watch: pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Unexpected watch close - watch lasted less than a second and no items received
E1109 20:30:37.745610   24338 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://127.0.0.1:41877/api/v1/namespaces/capz-e2e-xvnp2k/events?resourceVersion=60884": dial tcp 127.0.0.1:41877: connect: connection refused
E1109 20:30:40.393431   24338 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://127.0.0.1:41877/api/v1/namespaces/capz-e2e-xvnp2k/events?resourceVersion=60884": dial tcp 127.0.0.1:41877: connect: connection refused
... skipping 20 lines ...
E1109 20:37:52.442324   24338 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-131k0q/events?resourceVersion=8077": dial tcp: lookup capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Redacting sensitive information from logs
E1109 20:38:46.885994   24338 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-131k0q/events?resourceVersion=8077": dial tcp: lookup capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1109 20:39:24.646789   24338 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-131k0q/events?resourceVersion=8077": dial tcp: lookup capz-e2e-131k0q-public-custom-vnet-d82b8c09.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host


• Failure [3508.278 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating an AKS cluster
... skipping 50 lines ...
    testing.tRunner(0xc0007da180, 0x2316318)
    	/usr/local/go/src/testing/testing.go:1193 +0xef
    created by testing.(*T).Run
    	/usr/local/go/src/testing/testing.go:1238 +0x2b3
------------------------------
STEP: Tearing down the management cluster
INFO: Deleting the kind cluster "capz-e2e" failed. You may need to remove this by hand.



Summarizing 1 Failure:

[Fail] Workload cluster creation Creating an AKS cluster [It] with a single control plane node and 1 node 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/aks.go:216

Ran 9 of 22 Specs in 7435.530 seconds
FAIL! -- 8 Passed | 1 Failed | 0 Pending | 13 Skipped


Ginkgo ran 1 suite in 2h5m20.548055755s
Test Suite Failed
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
{"component":"entrypoint","file":"prow/entrypoint/run.go:252","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process gracefully exited before 15m0s grace period","severity":"error","time":"2021-11-09T20:39:30Z"}