Result: FAILURE
Tests: 1 failed / 8 succeeded
Started: 2021-11-28 18:37
Elapsed: 1h46m
Revision: release-0.5

Test Failures


capz-e2e Workload cluster creation Creating an AKS cluster with a single control plane node and 1 node 31m40s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\san\sAKS\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454
Timed out after 1200.001s.
System machine pools not ready
Expected
    <bool>: false
to equal
    <bool>: true
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/aks.go:216
				
Full stdout/stderr is available in junit.e2e_suite.1.xml



8 Passed Tests (collapsed)

13 Skipped Tests (collapsed)

Error lines from build-log.txt

... skipping 433 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:269

INFO: "With ipv6 worker node" started at Sun, 28 Nov 2021 18:44:19 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-ok1v8a" for hosting the cluster
Nov 28 18:44:19.654: INFO: starting to create namespace for hosting the "capz-e2e-ok1v8a" test spec
2021/11/28 18:44:19 failed trying to get namespace (capz-e2e-ok1v8a):namespaces "capz-e2e-ok1v8a" not found
INFO: Creating namespace capz-e2e-ok1v8a
INFO: Creating event watcher for namespace "capz-e2e-ok1v8a"
Nov 28 18:44:19.735: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-ok1v8a-ipv6
INFO: Creating the workload cluster with name "capz-e2e-ok1v8a-ipv6" using the "ipv6" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
STEP: Fetching activity logs took 580.180754ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-ok1v8a" namespace
STEP: Deleting all clusters in the capz-e2e-ok1v8a namespace
STEP: Deleting cluster capz-e2e-ok1v8a-ipv6
INFO: Waiting for the Cluster capz-e2e-ok1v8a/capz-e2e-ok1v8a-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-ok1v8a-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-ok1v8a-ipv6-control-plane-4rj8g, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-ok1v8a-ipv6-control-plane-7kwh4, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-prx84, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-24jkv, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-2qc8v, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-wppqr, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vrwhb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-ok1v8a-ipv6-control-plane-4rj8g, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-zmz5s, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-ok1v8a-ipv6-control-plane-7kwh4, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-7xpgq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-ok1v8a-ipv6-control-plane-cw5r2, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-ok1v8a-ipv6-control-plane-cw5r2, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-46x57, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-ok1v8a-ipv6-control-plane-4rj8g, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-lmljs, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-6vs4d, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-ok1v8a-ipv6-control-plane-cw5r2, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-scb22, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-ok1v8a-ipv6-control-plane-4rj8g, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-ok1v8a-ipv6-control-plane-cw5r2, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-ok1v8a-ipv6-control-plane-7kwh4, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-ok1v8a-ipv6-control-plane-7kwh4, container etcd: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-ok1v8a
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 18m32s on Ginkgo node 3 of 3

... skipping 10 lines ...
  with a single control plane node and an AzureMachinePool with 2 nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:315

INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" started at Sun, 28 Nov 2021 19:02:51 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-rixhuw" for hosting the cluster
Nov 28 19:02:51.827: INFO: starting to create namespace for hosting the "capz-e2e-rixhuw" test spec
2021/11/28 19:02:51 failed trying to get namespace (capz-e2e-rixhuw):namespaces "capz-e2e-rixhuw" not found
INFO: Creating namespace capz-e2e-rixhuw
INFO: Creating event watcher for namespace "capz-e2e-rixhuw"
Nov 28 19:02:51.883: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-rixhuw-vmss
INFO: Creating the workload cluster with name "capz-e2e-rixhuw-vmss" using the "machine-pool" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 106 lines ...
STEP: Fetching activity logs took 558.731751ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-rixhuw" namespace
STEP: Deleting all clusters in the capz-e2e-rixhuw namespace
STEP: Deleting cluster capz-e2e-rixhuw-vmss
INFO: Waiting for the Cluster capz-e2e-rixhuw/capz-e2e-rixhuw-vmss to be deleted
STEP: Waiting for cluster capz-e2e-rixhuw-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-4g2j4, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-6ssbw, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-rixhuw-vmss-control-plane-7nsds, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-rixhuw-vmss-control-plane-7nsds, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-rixhuw-vmss-control-plane-7nsds, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-gpng4, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-lvhcj, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-6lp88, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-sfcnx, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-8s28k, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-rixhuw-vmss-control-plane-7nsds, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-kjb6g, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-xkdvl, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-rixhuw
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" ran for 18m49s on Ginkgo node 3 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:203

INFO: "With 3 control-plane nodes and 2 worker nodes" started at Sun, 28 Nov 2021 18:44:19 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-stjqfj" for hosting the cluster
Nov 28 18:44:19.652: INFO: starting to create namespace for hosting the "capz-e2e-stjqfj" test spec
2021/11/28 18:44:19 failed trying to get namespace (capz-e2e-stjqfj):namespaces "capz-e2e-stjqfj" not found
INFO: Creating namespace capz-e2e-stjqfj
INFO: Creating event watcher for namespace "capz-e2e-stjqfj"
Nov 28 18:44:19.734: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-stjqfj-ha
INFO: Creating the workload cluster with name "capz-e2e-stjqfj-ha" using the "(default)" template (Kubernetes v1.22.1, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 59 lines ...
STEP: waiting for job default/curl-to-elb-jobkg6p1aeftef to be complete
Nov 28 18:54:07.383: INFO: waiting for job default/curl-to-elb-jobkg6p1aeftef to be complete
Nov 28 18:54:17.606: INFO: job default/curl-to-elb-jobkg6p1aeftef is complete, took 10.222942856s
STEP: connecting directly to the external LB service
Nov 28 18:54:17.606: INFO: starting attempts to connect directly to the external LB service
2021/11/28 18:54:17 [DEBUG] GET http://20.73.120.5
2021/11/28 18:54:47 [ERR] GET http://20.73.120.5 request failed: Get "http://20.73.120.5": dial tcp 20.73.120.5:80: i/o timeout
2021/11/28 18:54:47 [DEBUG] GET http://20.73.120.5: retrying in 1s (4 left)
Nov 28 18:54:48.829: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 28 18:54:48.830: INFO: starting to delete external LB service webdilehw-elb
Nov 28 18:54:48.979: INFO: starting to delete deployment webdilehw
Nov 28 18:54:49.094: INFO: starting to delete job curl-to-elb-jobkg6p1aeftef
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Nov 28 18:54:49.247: INFO: starting to create dev deployment namespace
2021/11/28 18:54:49 failed trying to get namespace (development):namespaces "development" not found
2021/11/28 18:54:49 namespace development does not exist, creating...
STEP: Creating production namespace
Nov 28 18:54:49.483: INFO: starting to create prod deployment namespace
2021/11/28 18:54:49 failed trying to get namespace (production):namespaces "production" not found
2021/11/28 18:54:49 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
Nov 28 18:54:49.712: INFO: starting to create frontend-prod deployments
Nov 28 18:54:49.829: INFO: starting to create frontend-dev deployments
Nov 28 18:54:49.946: INFO: starting to create backend deployments
Nov 28 18:54:50.063: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Nov 28 18:55:17.484: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.127.3 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 28 18:57:27.750: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
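The deny-ingress behavior verified above (curl to the backend pod times out after the policy is applied) corresponds to a NetworkPolicy shaped like the following sketch. The name and selector come from the log lines; the exact manifest the suite applies is an assumption:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-deny-ingress
  namespace: development
spec:
  podSelector:
    matchLabels:
      app: webapp
      role: backend
  # Listing Ingress with no ingress rules denies all ingress
  # traffic to the selected pods.
  policyTypes:
    - Ingress
```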
STEP: Applying a network policy to deny egress access in development namespace
Nov 28 18:57:28.164: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.127.3 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.127.3 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 28 19:01:49.891: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Nov 28 19:01:50.305: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.0.4 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 28 19:04:02.853: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
Nov 28 19:04:03.257: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.0.2 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.0.4 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 28 19:08:27.045: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Nov 28 19:08:27.447: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.127.3 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 28 19:10:40.324: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
Nov 28 19:10:40.738: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.127.3 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: Dumping logs from the "capz-e2e-stjqfj-ha" workload cluster
STEP: Dumping workload cluster capz-e2e-stjqfj/capz-e2e-stjqfj-ha logs
Nov 28 19:12:52.263: INFO: INFO: Collecting logs for node capz-e2e-stjqfj-ha-control-plane-klsdn in cluster capz-e2e-stjqfj-ha in namespace capz-e2e-stjqfj

Nov 28 19:13:05.096: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-stjqfj-ha-control-plane-klsdn
... skipping 39 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-kvxm6, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-lghx5, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-vpww6, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-xsxh7, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-stjqfj-ha-control-plane-2zttm, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-stjqfj-ha-control-plane-klsdn, container kube-scheduler
STEP: Got error while iterating over activity logs for resource group capz-e2e-stjqfj-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000947082s
STEP: Dumping all the Cluster API resources in the "capz-e2e-stjqfj" namespace
STEP: Deleting all clusters in the capz-e2e-stjqfj namespace
STEP: Deleting cluster capz-e2e-stjqfj-ha
INFO: Waiting for the Cluster capz-e2e-stjqfj/capz-e2e-stjqfj-ha to be deleted
STEP: Waiting for cluster capz-e2e-stjqfj-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-8kp4t, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-29sgj, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-95q4d, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-stjqfj-ha-control-plane-2zttm, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-lghx5, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-stjqfj-ha-control-plane-2zttm, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-stjqfj-ha-control-plane-2zttm, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-stjqfj-ha-control-plane-2zttm, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-stjqfj
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 worker nodes" ran for 45m34s on Ginkgo node 2 of 3

... skipping 8 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:141

INFO: "Creates a public management cluster in the same vnet" started at Sun, 28 Nov 2021 18:44:19 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-mhsyqw" for hosting the cluster
Nov 28 18:44:19.650: INFO: starting to create namespace for hosting the "capz-e2e-mhsyqw" test spec
2021/11/28 18:44:19 failed trying to get namespace (capz-e2e-mhsyqw):namespaces "capz-e2e-mhsyqw" not found
INFO: Creating namespace capz-e2e-mhsyqw
INFO: Creating event watcher for namespace "capz-e2e-mhsyqw"
Nov 28 18:44:19.697: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-mhsyqw-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Dumping workload cluster capz-e2e-mhsyqw/capz-e2e-mhsyqw-public-custom-vnet Azure activity log
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-mhsyqw-public-custom-vnet-control-plane-fn8zg, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-kplqv, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-nd2zx, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-4h9cw, container coredns
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-x5p9r, container coredns
STEP: Got error while iterating over activity logs for resource group capz-e2e-mhsyqw-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001089436s
STEP: Dumping all the Cluster API resources in the "capz-e2e-mhsyqw" namespace
STEP: Deleting all clusters in the capz-e2e-mhsyqw namespace
STEP: Deleting cluster capz-e2e-mhsyqw-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-mhsyqw/capz-e2e-mhsyqw-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-mhsyqw-public-custom-vnet to be deleted
W1128 19:31:54.056688   24160 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I1128 19:32:24.976210   24160 trace.go:205] Trace[1098026538]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (28-Nov-2021 19:31:54.975) (total time: 30000ms):
Trace[1098026538]: [30.000937221s] [30.000937221s] END
E1128 19:32:24.976272   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-mhsyqw-public-custom-vnet-3f46fc15.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-mhsyqw/events?resourceVersion=8746": dial tcp 20.103.216.233:6443: i/o timeout
I1128 19:32:57.260369   24160 trace.go:205] Trace[705753655]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (28-Nov-2021 19:32:27.259) (total time: 30001ms):
Trace[705753655]: [30.001140752s] [30.001140752s] END
E1128 19:32:57.260440   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-mhsyqw-public-custom-vnet-3f46fc15.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-mhsyqw/events?resourceVersion=8746": dial tcp 20.103.216.233:6443: i/o timeout
I1128 19:33:31.062273   24160 trace.go:205] Trace[722260732]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (28-Nov-2021 19:33:01.061) (total time: 30000ms):
Trace[722260732]: [30.000900136s] [30.000900136s] END
E1128 19:33:31.062362   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-mhsyqw-public-custom-vnet-3f46fc15.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-mhsyqw/events?resourceVersion=8746": dial tcp 20.103.216.233:6443: i/o timeout
I1128 19:34:08.349755   24160 trace.go:205] Trace[1275104171]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (28-Nov-2021 19:33:38.348) (total time: 30001ms):
Trace[1275104171]: [30.001482554s] [30.001482554s] END
E1128 19:34:08.349805   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-mhsyqw-public-custom-vnet-3f46fc15.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-mhsyqw/events?resourceVersion=8746": dial tcp 20.103.216.233:6443: i/o timeout
I1128 19:34:59.999139   24160 trace.go:205] Trace[276076152]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (28-Nov-2021 19:34:29.997) (total time: 30001ms):
Trace[276076152]: [30.001426918s] [30.001426918s] END
E1128 19:34:59.999219   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-mhsyqw-public-custom-vnet-3f46fc15.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-mhsyqw/events?resourceVersion=8746": dial tcp 20.103.216.233:6443: i/o timeout
I1128 19:35:55.907901   24160 trace.go:205] Trace[1074917353]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (28-Nov-2021 19:35:25.907) (total time: 30000ms):
Trace[1074917353]: [30.000779531s] [30.000779531s] END
E1128 19:35:55.907960   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-mhsyqw-public-custom-vnet-3f46fc15.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-mhsyqw/events?resourceVersion=8746": dial tcp 20.103.216.233:6443: i/o timeout
I1128 19:36:57.254448   24160 trace.go:205] Trace[689806933]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (28-Nov-2021 19:36:27.253) (total time: 30000ms):
Trace[689806933]: [30.000877673s] [30.000877673s] END
E1128 19:36:57.254509   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-mhsyqw-public-custom-vnet-3f46fc15.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-mhsyqw/events?resourceVersion=8746": dial tcp 20.103.216.233:6443: i/o timeout
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-mhsyqw
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Nov 28 19:37:06.826: INFO: deleting an existing virtual network "custom-vnet"
Nov 28 19:37:17.673: INFO: deleting an existing route table "node-routetable"
Nov 28 19:37:28.294: INFO: deleting an existing network security group "node-nsg"
Nov 28 19:37:38.941: INFO: deleting an existing network security group "control-plane-nsg"
E1128 19:37:46.517099   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-mhsyqw-public-custom-vnet-3f46fc15.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-mhsyqw/events?resourceVersion=8746": dial tcp: lookup capz-e2e-mhsyqw-public-custom-vnet-3f46fc15.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 28 19:37:49.878: INFO: verifying the existing resource group "capz-e2e-mhsyqw-public-custom-vnet" is empty
Nov 28 19:37:50.134: INFO: deleting the existing resource group "capz-e2e-mhsyqw-public-custom-vnet"
E1128 19:38:17.622970   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-mhsyqw-public-custom-vnet-3f46fc15.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-mhsyqw/events?resourceVersion=8746": dial tcp: lookup capz-e2e-mhsyqw-public-custom-vnet-3f46fc15.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1128 19:38:49.403585   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-mhsyqw-public-custom-vnet-3f46fc15.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-mhsyqw/events?resourceVersion=8746": dial tcp: lookup capz-e2e-mhsyqw-public-custom-vnet-3f46fc15.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1128 19:39:24.244917   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-mhsyqw-public-custom-vnet-3f46fc15.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-mhsyqw/events?resourceVersion=8746": dial tcp: lookup capz-e2e-mhsyqw-public-custom-vnet-3f46fc15.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1128 19:40:06.211607   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-mhsyqw-public-custom-vnet-3f46fc15.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-mhsyqw/events?resourceVersion=8746": dial tcp: lookup capz-e2e-mhsyqw-public-custom-vnet-3f46fc15.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 56m2s on Ginkgo node 1 of 3


• [SLOW TEST:3362.084 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:377

INFO: "with a single control plane node and 1 node" started at Sun, 28 Nov 2021 19:21:40 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-3wh4j4" for hosting the cluster
Nov 28 19:21:40.836: INFO: starting to create namespace for hosting the "capz-e2e-3wh4j4" test spec
2021/11/28 19:21:40 failed trying to get namespace (capz-e2e-3wh4j4):namespaces "capz-e2e-3wh4j4" not found
INFO: Creating namespace capz-e2e-3wh4j4
INFO: Creating event watcher for namespace "capz-e2e-3wh4j4"
Nov 28 19:21:40.871: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-3wh4j4-gpu
INFO: Creating the workload cluster with name "capz-e2e-3wh4j4-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 58 lines ...
STEP: Fetching activity logs took 529.650487ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-3wh4j4" namespace
STEP: Deleting all clusters in the capz-e2e-3wh4j4 namespace
STEP: Deleting cluster capz-e2e-3wh4j4-gpu
INFO: Waiting for the Cluster capz-e2e-3wh4j4/capz-e2e-3wh4j4-gpu to be deleted
STEP: Waiting for cluster capz-e2e-3wh4j4-gpu to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-tl55k, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-6cnck, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-v9r7q, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-fmbnv, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-c4bjn, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-3wh4j4-gpu-control-plane-qzsq7, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-3wh4j4-gpu-control-plane-qzsq7, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-4sgx7, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-3wh4j4-gpu-control-plane-qzsq7, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-3wh4j4-gpu-control-plane-qzsq7, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-hd6cl, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-3wh4j4
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and 1 node" ran for 22m23s on Ginkgo node 3 of 3

... skipping 10 lines ...
  with 1 control plane node and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:419

INFO: "with 1 control plane node and 2 worker nodes" started at Sun, 28 Nov 2021 19:29:54 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-cgev5g" for hosting the cluster
Nov 28 19:29:54.065: INFO: starting to create namespace for hosting the "capz-e2e-cgev5g" test spec
2021/11/28 19:29:54 failed trying to get namespace (capz-e2e-cgev5g): namespaces "capz-e2e-cgev5g" not found
INFO: Creating namespace capz-e2e-cgev5g
INFO: Creating event watcher for namespace "capz-e2e-cgev5g"
Nov 28 19:29:54.100: INFO: Creating cluster identity secret "cluster-identity-secret"
INFO: Cluster name is capz-e2e-cgev5g-oot
INFO: Creating the workload cluster with name "capz-e2e-cgev5g-oot" using the "external-cloud-provider" template (Kubernetes v1.22.1, 1 control-plane machine, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 92 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-xkgst, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-mpssx, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-hnmqv, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-mcdnx, container calico-node
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-cgev5g-oot-control-plane-96fm4, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-wdz5p, container kube-proxy
STEP: Error fetching activity logs for resource group capz-e2e-cgev5g-oot: insights.ActivityLogsClient#List: Failure sending request: StatusCode=429 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001161046s
STEP: Dumping all the Cluster API resources in the "capz-e2e-cgev5g" namespace
STEP: Deleting all clusters in the capz-e2e-cgev5g namespace
STEP: Deleting cluster capz-e2e-cgev5g-oot
INFO: Waiting for the Cluster capz-e2e-cgev5g/capz-e2e-cgev5g-oot to be deleted
STEP: Waiting for cluster capz-e2e-cgev5g-oot to be deleted
STEP: Got error while streaming logs for pod kube-system/cloud-controller-manager, container cloud-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-cgev5g-oot-control-plane-96fm4, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-kt4pk, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-cgev5g-oot-control-plane-96fm4, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-5kgz7, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-hnmqv, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-xkgst, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-cgev5g-oot-control-plane-96fm4, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-7t6qx, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-mpssx, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-cgev5g-oot-control-plane-96fm4, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-cgev5g
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with 1 control plane node and 2 worker nodes" ran for 21m44s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454

INFO: "with a single control plane node and 1 node" started at Sun, 28 Nov 2021 19:40:21 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-n670v3" for hosting the cluster
Nov 28 19:40:21.738: INFO: starting to create namespace for hosting the "capz-e2e-n670v3" test spec
2021/11/28 19:40:21 failed trying to get namespace (capz-e2e-n670v3): namespaces "capz-e2e-n670v3" not found
INFO: Creating namespace capz-e2e-n670v3
INFO: Creating event watcher for namespace "capz-e2e-n670v3"
Nov 28 19:40:21.784: INFO: Creating cluster identity secret "cluster-identity-secret"
INFO: Cluster name is capz-e2e-n670v3-aks
INFO: Creating the workload cluster with name "capz-e2e-n670v3-aks" using the "aks-multi-tenancy" template (Kubernetes v1.19.13, 1 control-plane machine, 1 worker machine)
INFO: Getting the cluster template yaml
... skipping 7 lines ...
machinepool.cluster.x-k8s.io/agentpool1 created
azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1128 19:40:44.833059   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-mhsyqw-public-custom-vnet-3f46fc15.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-mhsyqw/events?resourceVersion=8746": dial tcp: lookup capz-e2e-mhsyqw-public-custom-vnet-3f46fc15.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 3 lines ...
INFO: Waiting for control plane to be initialized
Nov 28 19:44:03.735: INFO: Waiting for the first control plane machine managed by capz-e2e-n670v3/capz-e2e-n670v3-aks to be provisioned
STEP: Waiting for at least one control plane node to exist
E1128 19:44:05.353308   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-mhsyqw-public-custom-vnet-3f46fc15.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-mhsyqw/events?resourceVersion=8746": dial tcp: lookup capz-e2e-mhsyqw-public-custom-vnet-3f46fc15.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 26 lines ...
STEP: Dumping logs from the "capz-e2e-n670v3-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-n670v3/capz-e2e-n670v3-aks logs
STEP: Dumping workload cluster capz-e2e-n670v3/capz-e2e-n670v3-aks kube-system pod logs
STEP: Fetching kube-system pod logs took 1.086107782s
STEP: Dumping workload cluster capz-e2e-n670v3/capz-e2e-n670v3-aks Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-84d976c568-77t4h, container coredns
... skipping 10 lines ...
STEP: Fetching activity logs took 775.157046ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-n670v3" namespace
STEP: Deleting all clusters in the capz-e2e-n670v3 namespace
STEP: Deleting cluster capz-e2e-n670v3-aks
INFO: Waiting for the Cluster capz-e2e-n670v3/capz-e2e-n670v3-aks to be deleted
STEP: Waiting for cluster capz-e2e-n670v3-aks to be deleted
E1128 20:04:17.203553   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-mhsyqw-public-custom-vnet-3f46fc15.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-mhsyqw/events?resourceVersion=8746": dial tcp: lookup capz-e2e-mhsyqw-public-custom-vnet-3f46fc15.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 7 lines ...
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-n670v3
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1128 20:10:53.288717   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-mhsyqw-public-custom-vnet-3f46fc15.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-mhsyqw/events?resourceVersion=8746": dial tcp: lookup capz-e2e-mhsyqw-public-custom-vnet-3f46fc15.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1128 20:11:45.020887   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-mhsyqw-public-custom-vnet-3f46fc15.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-mhsyqw/events?resourceVersion=8746": dial tcp: lookup capz-e2e-mhsyqw-public-custom-vnet-3f46fc15.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "with a single control plane node and 1 node" ran for 31m40s on Ginkgo node 1 of 3


• Failure [1900.383 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 59 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Sun, 28 Nov 2021 19:44:03 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-z6lq1k" for hosting the cluster
Nov 28 19:44:03.557: INFO: starting to create namespace for hosting the "capz-e2e-z6lq1k" test spec
2021/11/28 19:44:03 failed trying to get namespace (capz-e2e-z6lq1k): namespaces "capz-e2e-z6lq1k" not found
INFO: Creating namespace capz-e2e-z6lq1k
INFO: Creating event watcher for namespace "capz-e2e-z6lq1k"
Nov 28 19:44:03.596: INFO: Creating cluster identity secret "cluster-identity-secret"
INFO: Cluster name is capz-e2e-z6lq1k-win-ha
INFO: Creating the workload cluster with name "capz-e2e-z6lq1k-win-ha" using the "windows" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machine)
INFO: Getting the cluster template yaml
... skipping 145 lines ...
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-trrwg, container coredns
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-z6lq1k-win-ha-control-plane-mlqd5, container kube-scheduler
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-z6lq1k-win-ha-control-plane-mlqd5, container etcd
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-nbmrg, container coredns
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-7pfcm, container kube-flannel
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-z6lq1k-win-ha-control-plane-47s9k, container etcd
STEP: Got error while iterating over activity logs for resource group capz-e2e-z6lq1k-win-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000771705s
STEP: Dumping all the Cluster API resources in the "capz-e2e-z6lq1k" namespace
STEP: Deleting all clusters in the capz-e2e-z6lq1k namespace
STEP: Deleting cluster capz-e2e-z6lq1k-win-ha
INFO: Waiting for the Cluster capz-e2e-z6lq1k/capz-e2e-z6lq1k-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-z6lq1k-win-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-bmh7j, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-btm6t, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-z6lq1k
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 36m47s on Ginkgo node 3 of 3

... skipping 10 lines ...
  with a single control plane node, a Linux AzureMachinePool with 1 node, and a Windows AzureMachinePool with 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543

INFO: "with a single control plane node, a Linux AzureMachinePool with 1 node, and a Windows AzureMachinePool with 1 node" started at Sun, 28 Nov 2021 19:51:37 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-73vww0" for hosting the cluster
Nov 28 19:51:37.584: INFO: starting to create namespace for hosting the "capz-e2e-73vww0" test spec
2021/11/28 19:51:37 failed trying to get namespace (capz-e2e-73vww0): namespaces "capz-e2e-73vww0" not found
INFO: Creating namespace capz-e2e-73vww0
INFO: Creating event watcher for namespace "capz-e2e-73vww0"
Nov 28 19:51:37.627: INFO: Creating cluster identity secret "cluster-identity-secret"
INFO: Cluster name is capz-e2e-73vww0-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-73vww0-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.1, 1 control-plane machine, 1 worker machine)
INFO: Getting the cluster template yaml
... skipping 129 lines ...
STEP: Fetching activity logs took 1.153470599s
STEP: Dumping all the Cluster API resources in the "capz-e2e-73vww0" namespace
STEP: Deleting all clusters in the capz-e2e-73vww0 namespace
STEP: Deleting cluster capz-e2e-73vww0-win-vmss
INFO: Waiting for the Cluster capz-e2e-73vww0/capz-e2e-73vww0-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-73vww0-win-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-z5g5m, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-qm42n, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-nlxpp, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-73vww0-win-vmss-control-plane-zqq62, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-xjh8k, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-fwqrk, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-gsstf, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-73vww0-win-vmss-control-plane-zqq62, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-73vww0-win-vmss-control-plane-zqq62, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-jgxwx, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-nk5pf, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-73vww0-win-vmss-control-plane-zqq62, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-73vww0
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 30m44s on Ginkgo node 2 of 3

... skipping 3 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating a Windows enabled VMSS cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:542
    with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543
------------------------------
E1128 20:12:16.248900   24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-mhsyqw-public-custom-vnet-3f46fc15.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-mhsyqw/events?resourceVersion=8746": dial tcp: lookup capz-e2e-mhsyqw-public-custom-vnet-3f46fc15.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 13 lines: the same reflector "no such host" error repeated at roughly 30-60s intervals through 20:22:16 ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Workload cluster creation Creating an AKS cluster [It] with a single control plane node and 1 node 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/aks.go:216

Ran 9 of 22 Specs in 5996.153 seconds
FAIL! -- 8 Passed | 1 Failed | 0 Pending | 13 Skipped


Ginkgo ran 1 suite in 1h41m21.36955988s
Test Suite Failed
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...