Result: FAILURE
Tests: 1 failed / 1 succeeded
Started: 2021-11-22 18:36
Elapsed: 2h15m
Revision: main

Test Failures


capz-e2e Workload cluster creation Creating an AKS cluster with a single control plane node and 1 node 24m28s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\san\sAKS\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:490
Timed out after 1200.000s.
Expected
    <string>: Provisioning
to equal
    <string>: Provisioned
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.1/framework/cluster_helpers.go:134
				
stdout/stderr: junit.e2e_suite.3.xml



Passed tests: 1

Skipped tests: 15

Error lines from build-log.txt

... skipping 433 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:288

INFO: "With ipv6 worker node" started at Mon, 22 Nov 2021 18:46:24 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-alupdo" for hosting the cluster
Nov 22 18:46:24.058: INFO: starting to create namespace for hosting the "capz-e2e-alupdo" test spec
2021/11/22 18:46:24 failed trying to get namespace (capz-e2e-alupdo):namespaces "capz-e2e-alupdo" not found
INFO: Creating namespace capz-e2e-alupdo
INFO: Creating event watcher for namespace "capz-e2e-alupdo"
Nov 22 18:46:24.093: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-alupdo-ipv6
INFO: Creating the workload cluster with name "capz-e2e-alupdo-ipv6" using the "ipv6" template (Kubernetes v1.22.4, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
STEP: Fetching activity logs took 538.076786ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-alupdo" namespace
STEP: Deleting all clusters in the capz-e2e-alupdo namespace
STEP: Deleting cluster capz-e2e-alupdo-ipv6
INFO: Waiting for the Cluster capz-e2e-alupdo/capz-e2e-alupdo-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-alupdo-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-alupdo-ipv6-control-plane-gzcp7, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-mnp68, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-alupdo-ipv6-control-plane-54nsz, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-nvwg5, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-456hl, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-alupdo-ipv6-control-plane-72sgg, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-z4cx2, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-m825h, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-5dtnv, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-m52d9, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-alupdo-ipv6-control-plane-72sgg, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-alupdo-ipv6-control-plane-gzcp7, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-alupdo-ipv6-control-plane-gzcp7, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-mdpnz, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-alupdo-ipv6-control-plane-54nsz, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-alupdo-ipv6-control-plane-72sgg, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-alupdo-ipv6-control-plane-54nsz, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-44p2k, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-alupdo-ipv6-control-plane-54nsz, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-alupdo-ipv6-control-plane-72sgg, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-alupdo-ipv6-control-plane-gzcp7, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-h4w87, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-dtcsl, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-alupdo
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 20m19s on Ginkgo node 1 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:205

INFO: "With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes" started at Mon, 22 Nov 2021 18:46:19 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-yvvg82" for hosting the cluster
Nov 22 18:46:19.941: INFO: starting to create namespace for hosting the "capz-e2e-yvvg82" test spec
2021/11/22 18:46:19 failed trying to get namespace (capz-e2e-yvvg82):namespaces "capz-e2e-yvvg82" not found
INFO: Creating namespace capz-e2e-yvvg82
INFO: Creating event watcher for namespace "capz-e2e-yvvg82"
Nov 22 18:46:19.994: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-yvvg82-ha
INFO: Creating the workload cluster with name "capz-e2e-yvvg82-ha" using the "(default)" template (Kubernetes v1.22.4, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 67 lines ...
STEP: waiting for job default/curl-to-elb-jobo2va3c2a3wy to be complete
Nov 22 18:57:25.479: INFO: waiting for job default/curl-to-elb-jobo2va3c2a3wy to be complete
Nov 22 18:57:35.548: INFO: job default/curl-to-elb-jobo2va3c2a3wy is complete, took 10.069856595s
STEP: connecting directly to the external LB service
Nov 22 18:57:35.549: INFO: starting attempts to connect directly to the external LB service
2021/11/22 18:57:35 [DEBUG] GET http://20.120.50.156
2021/11/22 18:58:05 [ERR] GET http://20.120.50.156 request failed: Get "http://20.120.50.156": dial tcp 20.120.50.156:80: i/o timeout
2021/11/22 18:58:05 [DEBUG] GET http://20.120.50.156: retrying in 1s (4 left)
Nov 22 18:58:06.609: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 22 18:58:06.609: INFO: starting to delete external LB service webffwz8q-elb
Nov 22 18:58:06.684: INFO: starting to delete deployment webffwz8q
Nov 22 18:58:06.720: INFO: starting to delete job curl-to-elb-jobo2va3c2a3wy
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Nov 22 18:58:06.798: INFO: starting to create dev deployment namespace
2021/11/22 18:58:06 failed trying to get namespace (development):namespaces "development" not found
2021/11/22 18:58:06 namespace development does not exist, creating...
STEP: Creating production namespace
Nov 22 18:58:06.969: INFO: starting to create prod deployment namespace
2021/11/22 18:58:07 failed trying to get namespace (production):namespaces "production" not found
2021/11/22 18:58:07 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
Nov 22 18:58:07.065: INFO: starting to create frontend-prod deployments
Nov 22 18:58:07.106: INFO: starting to create frontend-dev deployments
Nov 22 18:58:07.147: INFO: starting to create backend deployments
Nov 22 18:58:07.196: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Nov 22 18:58:30.030: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.43.66 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 22 19:00:39.996: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
STEP: Applying a network policy to deny egress access in development namespace
Nov 22 19:00:40.150: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.43.66 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.43.66 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 22 19:05:02.275: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Nov 22 19:05:02.749: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.43.67 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 22 19:07:13.215: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
Nov 22 19:07:13.365: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.52.3 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.43.67 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 22 19:11:35.356: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Nov 22 19:11:35.554: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.43.66 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 22 19:13:46.431: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
Nov 22 19:13:46.589: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.43.66 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
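The network-policy checks above repeatedly verify that pods with matching `app`/`role` labels keep access while non-matching pods see `curl: (7) ... Connection timed out`. The core rule being exercised is `matchLabels` selection: a policy admits a peer only when every key/value pair in its selector is present on the peer. A minimal conceptual sketch of that matching rule (illustrative only, not the actual Kubernetes or Calico implementation, which also handles `matchExpressions` and namespace selectors):

```go
package main

import "fmt"

// selectorMatches reports whether every key/value pair in sel is present
// in labels -- the rule behind matchLabels in a NetworkPolicy podSelector.
func selectorMatches(sel, labels map[string]string) bool {
	for k, v := range sel {
		if labels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	backend := map[string]string{"app": "webapp", "role": "backend"}
	frontend := map[string]string{"app": "webapp", "role": "frontend"}
	// Ingress allowed only from pods carrying the same labels as backend.
	allowFrom := map[string]string{"app": "webapp", "role": "backend"}

	fmt.Println(selectorMatches(allowFrom, backend))  // true: matching pods keep access
	fmt.Println(selectorMatches(allowFrom, frontend)) // false: curl times out, as in the log
}
```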
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web-windows0v3t3t to be available
Nov 22 19:15:58.205: INFO: starting to wait for deployment to become available
Nov 22 19:16:48.424: INFO: Deployment default/web-windows0v3t3t is now available, took 50.218939185s
... skipping 20 lines ...
STEP: waiting for job default/curl-to-elb-jobe31iis60g9w to be complete
Nov 22 19:19:59.647: INFO: waiting for job default/curl-to-elb-jobe31iis60g9w to be complete
Nov 22 19:20:09.734: INFO: job default/curl-to-elb-jobe31iis60g9w is complete, took 10.086865154s
STEP: connecting directly to the external LB service
Nov 22 19:20:09.734: INFO: starting attempts to connect directly to the external LB service
2021/11/22 19:20:09 [DEBUG] GET http://20.102.25.201
2021/11/22 19:20:39 [ERR] GET http://20.102.25.201 request failed: Get "http://20.102.25.201": dial tcp 20.102.25.201:80: i/o timeout
2021/11/22 19:20:39 [DEBUG] GET http://20.102.25.201: retrying in 1s (4 left)
Nov 22 19:20:56.117: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 22 19:20:56.117: INFO: starting to delete external LB service web-windows0v3t3t-elb
Nov 22 19:20:56.210: INFO: starting to delete deployment web-windows0v3t3t
Nov 22 19:20:56.250: INFO: starting to delete job curl-to-elb-jobe31iis60g9w
... skipping 20 lines ...
Nov 22 19:21:54.476: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-yvvg82-ha-md-0-qldtn

Nov 22 19:21:54.760: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster capz-e2e-yvvg82-ha in namespace capz-e2e-yvvg82

Nov 22 19:22:16.919: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-yvvg82-ha-md-win-9c7cm

Failed to get logs for machine capz-e2e-yvvg82-ha-md-win-6fd58788c5-jlv48, cluster capz-e2e-yvvg82/capz-e2e-yvvg82-ha: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov 22 19:22:17.279: INFO: INFO: Collecting logs for node 10.1.0.7 in cluster capz-e2e-yvvg82-ha in namespace capz-e2e-yvvg82

Nov 22 19:22:45.150: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-yvvg82-ha-md-win-hbtmz

Failed to get logs for machine capz-e2e-yvvg82-ha-md-win-6fd58788c5-ql7gx, cluster capz-e2e-yvvg82/capz-e2e-yvvg82-ha: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster capz-e2e-yvvg82/capz-e2e-yvvg82-ha kube-system pod logs
STEP: Creating log watcher for controller kube-system/calico-node-b75f7, container calico-node
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-yvvg82-ha-control-plane-vnczs, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-rnh9w, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-yvvg82-ha-control-plane-xhr2c, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-g7j9b, container calico-kube-controllers
... skipping 22 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-gmdls, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-kq8rw, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-yvvg82-ha-control-plane-vnczs, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-q7k4t, container kube-proxy
STEP: Dumping workload cluster capz-e2e-yvvg82/capz-e2e-yvvg82-ha Azure activity log
STEP: Creating log watcher for controller kube-system/kube-proxy-mg25x, container kube-proxy
STEP: Got error while iterating over activity logs for resource group capz-e2e-yvvg82-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001176803s
STEP: Dumping all the Cluster API resources in the "capz-e2e-yvvg82" namespace
STEP: Deleting all clusters in the capz-e2e-yvvg82 namespace
STEP: Deleting cluster capz-e2e-yvvg82-ha
INFO: Waiting for the Cluster capz-e2e-yvvg82/capz-e2e-yvvg82-ha to be deleted
STEP: Waiting for cluster capz-e2e-yvvg82-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-yvvg82-ha-control-plane-xhr2c, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-yvvg82-ha-control-plane-vnczs, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-r26gr, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-lts52, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-q7k4t, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-yvvg82-ha-control-plane-xhr2c, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-brbdv, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-rnh9w, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-yvvg82-ha-control-plane-vnczs, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-yvvg82-ha-control-plane-xhr2c, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-hd6h7, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-lts52, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-yvvg82-ha-control-plane-xhr2c, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-bvkvz, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-gmdls, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-yvvg82-ha-control-plane-qnx2l, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-xzw47, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-yvvg82-ha-control-plane-qnx2l, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-2ffmt, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-zjffp, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-yvvg82-ha-control-plane-qnx2l, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-g7j9b, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-yvvg82-ha-control-plane-vnczs, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-r5lqn, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-yvvg82-ha-control-plane-vnczs, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-r5lqn, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-b75f7, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-mg25x, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-t2dsl, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-kq8rw, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-yvvg82-ha-control-plane-qnx2l, container etcd: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-yvvg82
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes" ran for 46m22s on Ginkgo node 2 of 3

... skipping 8 lines ...
  with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:334

INFO: "with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes" started at Mon, 22 Nov 2021 19:06:43 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-c8rd3w" for hosting the cluster
Nov 22 19:06:43.413: INFO: starting to create namespace for hosting the "capz-e2e-c8rd3w" test spec
2021/11/22 19:06:43 failed trying to get namespace (capz-e2e-c8rd3w):namespaces "capz-e2e-c8rd3w" not found
INFO: Creating namespace capz-e2e-c8rd3w
INFO: Creating event watcher for namespace "capz-e2e-c8rd3w"
Nov 22 19:06:43.512: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-c8rd3w-vmss
INFO: Creating the workload cluster with name "capz-e2e-c8rd3w-vmss" using the "machine-pool" template (Kubernetes v1.22.4, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 142 lines ...
Nov 22 19:27:18.506: INFO: INFO: Collecting boot logs for VMSS instance 1 of scale set capz-e2e-c8rd3w-vmss-mp-0

Nov 22 19:27:18.780: INFO: INFO: Collecting logs for node win-p-win000000 in cluster capz-e2e-c8rd3w-vmss in namespace capz-e2e-c8rd3w

Nov 22 19:27:31.333: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-e2e-c8rd3w-vmss-mp-0

Failed to get logs for machine pool capz-e2e-c8rd3w-vmss-mp-0, cluster capz-e2e-c8rd3w/capz-e2e-c8rd3w-vmss: [[running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1], [running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1]]
Nov 22 19:27:31.614: INFO: INFO: Collecting logs for node win-p-win000000 in cluster capz-e2e-c8rd3w-vmss in namespace capz-e2e-c8rd3w

Nov 22 19:27:54.707: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set win-p-win

Nov 22 19:27:54.964: INFO: INFO: Collecting logs for node win-p-win000001 in cluster capz-e2e-c8rd3w-vmss in namespace capz-e2e-c8rd3w

Nov 22 19:28:25.327: INFO: INFO: Collecting boot logs for VMSS instance 1 of scale set win-p-win

Failed to get logs for machine pool capz-e2e-c8rd3w-vmss-mp-win, cluster capz-e2e-c8rd3w/capz-e2e-c8rd3w-vmss: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster capz-e2e-c8rd3w/capz-e2e-c8rd3w-vmss kube-system pod logs
STEP: Fetching kube-system pod logs took 436.55558ms
STEP: Creating log watcher for controller kube-system/kube-proxy-9pbfz, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-qxg4m, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-windows-ddqkz, container calico-node-felix
STEP: Creating log watcher for controller kube-system/calico-node-windows-ddqkz, container calico-node-startup
... skipping 10 lines ...
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-c8rd3w-vmss-control-plane-vrrrm, container kube-scheduler
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-w6js8, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-4ff7v, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-9pzgx, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-gslxf, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-ghm56, container kube-proxy
STEP: Got error while iterating over activity logs for resource group capz-e2e-c8rd3w-vmss: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.00127132s
STEP: Dumping all the Cluster API resources in the "capz-e2e-c8rd3w" namespace
STEP: Deleting all clusters in the capz-e2e-c8rd3w namespace
STEP: Deleting cluster capz-e2e-c8rd3w-vmss
INFO: Waiting for the Cluster capz-e2e-c8rd3w/capz-e2e-c8rd3w-vmss to be deleted
STEP: Waiting for cluster capz-e2e-c8rd3w-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-992f4, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-zpn22, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-9pzgx, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-ddqkz, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-c8rd3w-vmss-control-plane-vrrrm, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-9pbfz, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-rz6md, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-c8rd3w-vmss-control-plane-vrrrm, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-c8rd3w-vmss-control-plane-vrrrm, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-c8rd3w-vmss-control-plane-vrrrm, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-qxg4m, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-ddqkz, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-ffx84, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-w6js8, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-ghm56, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-rz6md, container calico-node-felix: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-c8rd3w
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes" ran for 29m28s on Ginkgo node 1 of 3

... skipping 10 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:144

INFO: "Creates a public management cluster in the same vnet" started at Mon, 22 Nov 2021 18:46:19 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-7lfbfa" for hosting the cluster
Nov 22 18:46:19.552: INFO: starting to create namespace for hosting the "capz-e2e-7lfbfa" test spec
2021/11/22 18:46:19 failed trying to get namespace (capz-e2e-7lfbfa):namespaces "capz-e2e-7lfbfa" not found
INFO: Creating namespace capz-e2e-7lfbfa
INFO: Creating event watcher for namespace "capz-e2e-7lfbfa"
Nov 22 18:46:19.615: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-7lfbfa-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-hqd6g, container coredns
STEP: Dumping workload cluster capz-e2e-7lfbfa/capz-e2e-7lfbfa-public-custom-vnet Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-7lctc, container calico-node
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-7lfbfa-public-custom-vnet-control-plane-4k4vd, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-vc7gg, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-7lfbfa-public-custom-vnet-control-plane-4k4vd, container kube-scheduler
STEP: Got error while iterating over activity logs for resource group capz-e2e-7lfbfa-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000559043s
STEP: Dumping all the Cluster API resources in the "capz-e2e-7lfbfa" namespace
STEP: Deleting all clusters in the capz-e2e-7lfbfa namespace
STEP: Deleting cluster capz-e2e-7lfbfa-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-7lfbfa/capz-e2e-7lfbfa-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-7lfbfa-public-custom-vnet to be deleted
W1122 19:36:50.052528   24540 reflector.go:441] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I1122 19:37:20.905621   24540 trace.go:205] Trace[849802812]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (22-Nov-2021 19:36:50.904) (total time: 30001ms):
Trace[849802812]: [30.001193617s] [30.001193617s] END
E1122 19:37:20.905694   24540 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7lfbfa-public-custom-vnet-68fefbd7.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7lfbfa/events?resourceVersion=11359": dial tcp 20.88.175.180:6443: i/o timeout
I1122 19:37:53.150215   24540 trace.go:205] Trace[1626805101]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (22-Nov-2021 19:37:23.149) (total time: 30000ms):
Trace[1626805101]: [30.000731299s] [30.000731299s] END
E1122 19:37:53.150303   24540 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7lfbfa-public-custom-vnet-68fefbd7.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7lfbfa/events?resourceVersion=11359": dial tcp 20.88.175.180:6443: i/o timeout
I1122 19:38:27.352354   24540 trace.go:205] Trace[960346142]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (22-Nov-2021 19:37:57.350) (total time: 30001ms):
Trace[960346142]: [30.001662896s] [30.001662896s] END
E1122 19:38:27.352430   24540 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7lfbfa-public-custom-vnet-68fefbd7.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7lfbfa/events?resourceVersion=11359": dial tcp 20.88.175.180:6443: i/o timeout
I1122 19:39:05.348814   24540 trace.go:205] Trace[2128895252]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (22-Nov-2021 19:38:35.347) (total time: 30001ms):
Trace[2128895252]: [30.001549733s] [30.001549733s] END
E1122 19:39:05.348885   24540 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7lfbfa-public-custom-vnet-68fefbd7.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7lfbfa/events?resourceVersion=11359": dial tcp 20.88.175.180:6443: i/o timeout
I1122 19:39:55.652402   24540 trace.go:205] Trace[953471097]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (22-Nov-2021 19:39:25.651) (total time: 30000ms):
Trace[953471097]: [30.000732868s] [30.000732868s] END
E1122 19:39:55.652469   24540 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7lfbfa-public-custom-vnet-68fefbd7.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7lfbfa/events?resourceVersion=11359": dial tcp 20.88.175.180:6443: i/o timeout
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-7lfbfa
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Nov 22 19:40:14.838: INFO: deleting an existing virtual network "custom-vnet"
Nov 22 19:40:25.310: INFO: deleting an existing route table "node-routetable"
E1122 19:40:29.801281   24540 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7lfbfa-public-custom-vnet-68fefbd7.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7lfbfa/events?resourceVersion=11359": dial tcp: lookup capz-e2e-7lfbfa-public-custom-vnet-68fefbd7.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 22 19:40:35.839: INFO: deleting an existing network security group "node-nsg"
Nov 22 19:40:46.376: INFO: deleting an existing network security group "control-plane-nsg"
Nov 22 19:40:56.764: INFO: verifying the existing resource group "capz-e2e-7lfbfa-public-custom-vnet" is empty
Nov 22 19:40:57.100: INFO: failed GETing resource "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-e2e-7lfbfa-public-custom-vnet/providers/Microsoft.Network/privateDnsZones/capz-e2e-jbfdky-private.capz.io/virtualNetworkLinks/custom-vnet-link" with resources.Client#GetByID: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="NoRegisteredProviderFound" Message="No registered resource provider found for location 'global' and API version '2021-02-01' for type 'privateDnsZones/virtualNetworkLinks'. The supported api-versions are '2018-09-01, 2020-01-01, 2020-06-01'. The supported locations are ', global'."
... skipping 33 lines ...
E1122 19:45:38.789081   24540 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7lfbfa-public-custom-vnet-68fefbd7.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7lfbfa/events?resourceVersion=11359": dial tcp: lookup capz-e2e-7lfbfa-public-custom-vnet-68fefbd7.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
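Editor's note: the repeated NoRegisteredProviderFound failures above are a request/provider API-version mismatch: the client asks for '2021-02-01' while the error itself lists the versions the provider supports. The error message carries enough information to recover programmatically; a hedged sketch (the parsing helper `latestSupportedVersion` is hypothetical, not part of the SDK):

```go
package main

import (
	"fmt"
	"regexp"
	"sort"
	"strings"
)

// latestSupportedVersion extracts the quoted api-versions list from a
// NoRegisteredProviderFound message and returns the newest version.
// Date-stamped versions like 2020-06-01 sort correctly as strings.
func latestSupportedVersion(msg string) string {
	re := regexp.MustCompile(`supported api-versions are '([^']+)'`)
	m := re.FindStringSubmatch(msg)
	if m == nil {
		return ""
	}
	versions := strings.Split(m[1], ", ")
	sort.Strings(versions)
	return versions[len(versions)-1]
}

func main() {
	msg := `The supported api-versions are '2018-09-01, 2020-01-01, 2020-06-01'. The supported locations are ', global'.`
	fmt.Println(latestSupportedVersion(msg)) // 2020-06-01
}
```

A caller could retry the GET with the returned version instead of looping on the same failing one, as the cleanup code above does.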
Nov 22 19:45:44.154: INFO: deleting the existing resource group "capz-e2e-7lfbfa-public-custom-vnet"
E1122 19:46:20.484027   24540 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7lfbfa-public-custom-vnet-68fefbd7.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7lfbfa/events?resourceVersion=11359": dial tcp: lookup capz-e2e-7lfbfa-public-custom-vnet-68fefbd7.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 19:46:53.404948   24540 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7lfbfa-public-custom-vnet-68fefbd7.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7lfbfa/events?resourceVersion=11359": dial tcp: lookup capz-e2e-7lfbfa-public-custom-vnet-68fefbd7.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1122 19:47:38.944167   24540 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7lfbfa-public-custom-vnet-68fefbd7.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7lfbfa/events?resourceVersion=11359": dial tcp: lookup capz-e2e-7lfbfa-public-custom-vnet-68fefbd7.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 1h1m31s on Ginkgo node 3 of 3


• [SLOW TEST:3690.553 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a 1 control plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:455

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Mon, 22 Nov 2021 19:36:11 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-zyk5wm" for hosting the cluster
Nov 22 19:36:11.714: INFO: starting to create namespace for hosting the "capz-e2e-zyk5wm" test spec
2021/11/22 19:36:11 failed trying to get namespace (capz-e2e-zyk5wm):namespaces "capz-e2e-zyk5wm" not found
INFO: Creating namespace capz-e2e-zyk5wm
INFO: Creating event watcher for namespace "capz-e2e-zyk5wm"
Nov 22 19:36:11.743: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-zyk5wm-oot
INFO: Creating the workload cluster with name "capz-e2e-zyk5wm-oot" using the "external-cloud-provider" template (Kubernetes v1.22.4, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 98 lines ...
STEP: Fetching activity logs took 595.598512ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-zyk5wm" namespace
STEP: Deleting all clusters in the capz-e2e-zyk5wm namespace
STEP: Deleting cluster capz-e2e-zyk5wm-oot
INFO: Waiting for the Cluster capz-e2e-zyk5wm/capz-e2e-zyk5wm-oot to be deleted
STEP: Waiting for cluster capz-e2e-zyk5wm-oot to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-dqwnr, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-zyk5wm-oot-control-plane-b8m29, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-controller-manager, container cloud-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-zyk5wm-oot-control-plane-b8m29, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-zyk5wm-oot-control-plane-b8m29, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-vpt4c, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-d9hm8, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-xv57v, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-c52kg, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-wz5rl, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-zyk5wm-oot-control-plane-b8m29, container etcd: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-zyk5wm
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 16m21s on Ginkgo node 1 of 3

... skipping 10 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:413

INFO: "with a single control plane node and 1 node" started at Mon, 22 Nov 2021 19:32:41 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-74esgx" for hosting the cluster
Nov 22 19:32:41.815: INFO: starting to create namespace for hosting the "capz-e2e-74esgx" test spec
2021/11/22 19:32:41 failed trying to get namespace (capz-e2e-74esgx):namespaces "capz-e2e-74esgx" not found
INFO: Creating namespace capz-e2e-74esgx
INFO: Creating event watcher for namespace "capz-e2e-74esgx"
Nov 22 19:32:41.857: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-74esgx-gpu
INFO: Creating the workload cluster with name "capz-e2e-74esgx-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.4, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 122 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:490

INFO: "with a single control plane node and 1 node" started at Mon, 22 Nov 2021 19:47:50 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-e0kc0o" for hosting the cluster
Nov 22 19:47:50.107: INFO: starting to create namespace for hosting the "capz-e2e-e0kc0o" test spec
2021/11/22 19:47:50 failed trying to get namespace (capz-e2e-e0kc0o):namespaces "capz-e2e-e0kc0o" not found
INFO: Creating namespace capz-e2e-e0kc0o
INFO: Creating event watcher for namespace "capz-e2e-e0kc0o"
Nov 22 19:47:50.143: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-e0kc0o-aks
INFO: Creating the workload cluster with name "capz-e2e-e0kc0o-aks" using the "aks-multi-tenancy" template (Kubernetes v1.19.13, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 7 lines ...
machinepool.cluster.x-k8s.io/agentpool1 created
azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1122 19:48:20.157994   24540 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7lfbfa-public-custom-vnet-68fefbd7.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7lfbfa/events?resourceVersion=11359": dial tcp: lookup capz-e2e-7lfbfa-public-custom-vnet-68fefbd7.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 24 lines ...
E1122 20:07:37.067412   24540 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7lfbfa-public-custom-vnet-68fefbd7.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7lfbfa/events?resourceVersion=11359": dial tcp: lookup capz-e2e-7lfbfa-public-custom-vnet-68fefbd7.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Unable to dump workload cluster logs as the cluster is nil
STEP: Dumping all the Cluster API resources in the "capz-e2e-e0kc0o" namespace
STEP: Deleting all clusters in the capz-e2e-e0kc0o namespace
STEP: Deleting cluster capz-e2e-e0kc0o-aks
INFO: Waiting for the Cluster capz-e2e-e0kc0o/capz-e2e-e0kc0o-aks to be deleted
STEP: Waiting for cluster capz-e2e-e0kc0o-aks to be deleted
E1122 20:08:17.460950   24540 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7lfbfa-public-custom-vnet-68fefbd7.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7lfbfa/events?resourceVersion=11359": dial tcp: lookup capz-e2e-7lfbfa-public-custom-vnet-68fefbd7.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 20:09:07.424916   24540 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7lfbfa-public-custom-vnet-68fefbd7.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7lfbfa/events?resourceVersion=11359": dial tcp: lookup capz-e2e-7lfbfa-public-custom-vnet-68fefbd7.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 20:09:45.206714   24540 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7lfbfa-public-custom-vnet-68fefbd7.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7lfbfa/events?resourceVersion=11359": dial tcp: lookup capz-e2e-7lfbfa-public-custom-vnet-68fefbd7.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 20:10:24.844426   24540 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7lfbfa-public-custom-vnet-68fefbd7.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7lfbfa/events?resourceVersion=11359": dial tcp: lookup capz-e2e-7lfbfa-public-custom-vnet-68fefbd7.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-e0kc0o
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1122 20:11:10.104146   24540 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7lfbfa-public-custom-vnet-68fefbd7.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7lfbfa/events?resourceVersion=11359": dial tcp: lookup capz-e2e-7lfbfa-public-custom-vnet-68fefbd7.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1122 20:12:03.349372   24540 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-7lfbfa-public-custom-vnet-68fefbd7.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-7lfbfa/events?resourceVersion=11359": dial tcp: lookup capz-e2e-7lfbfa-public-custom-vnet-68fefbd7.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "with a single control plane node and 1 node" ran for 24m28s on Ginkgo node 3 of 3


• Failure [1468.219 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 54 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:532

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Mon, 22 Nov 2021 19:52:32 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-z87yk2" for hosting the cluster
Nov 22 19:52:32.276: INFO: starting to create namespace for hosting the "capz-e2e-z87yk2" test spec
2021/11/22 19:52:32 failed trying to get namespace (capz-e2e-z87yk2):namespaces "capz-e2e-z87yk2" not found
INFO: Creating namespace capz-e2e-z87yk2
INFO: Creating event watcher for namespace "capz-e2e-z87yk2"
Nov 22 19:52:32.307: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-z87yk2-win-ha
INFO: Creating the workload cluster with name "capz-e2e-z87yk2-win-ha" using the "windows" template (Kubernetes v1.22.4, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 151 lines ...
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-z87yk2-win-ha-control-plane-gfn9s, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-z87yk2-win-ha-control-plane-gfn9s, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-z87yk2-win-ha-control-plane-v8xxm, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-z87yk2-win-ha-control-plane-v8xxm, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-dbkj2, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-kpr55, container kube-flannel
STEP: Got error while iterating over activity logs for resource group capz-e2e-z87yk2-win-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000683948s
STEP: Dumping all the Cluster API resources in the "capz-e2e-z87yk2" namespace
STEP: Deleting all clusters in the capz-e2e-z87yk2 namespace
STEP: Deleting cluster capz-e2e-z87yk2-win-ha
INFO: Waiting for the Cluster capz-e2e-z87yk2/capz-e2e-z87yk2-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-z87yk2-win-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-rfxft, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-kf7lx, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-dbkj2, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-kpr55, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-z87yk2-win-ha-control-plane-gfn9s, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-z87yk2-win-ha-control-plane-v8xxm, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-nxbmd, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-z87yk2-win-ha-control-plane-gfn9s, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-jbq4h, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-z87yk2-win-ha-control-plane-v8xxm, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-z87yk2-win-ha-control-plane-v8xxm, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-8ld7m, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-z87yk2-win-ha-control-plane-v8xxm, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-z87yk2-win-ha-control-plane-gfn9s, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-ksjlc, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-z87yk2-win-ha-control-plane-gfn9s, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-6md74, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-2rlpg, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-z87yk2
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 27m42s on Ginkgo node 1 of 3

... skipping 3 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating a Windows Enabled cluster with dockershim
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:530
    With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:532
------------------------------
{"component":"entrypoint","file":"prow/entrypoint/run.go:165","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","severity":"error","time":"2021-11-22T20:36:56Z"}
++ early_exit_handler
++ '[' -n 162 ']'
++ kill -TERM 162
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 12 lines ...
Cleaning up after docker
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Stopping Docker: dockerProgram process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.
All sensitive variables are redacted
{"component":"entrypoint","file":"prow/entrypoint/run.go:255","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15m0s grace period","severity":"error","time":"2021-11-22T20:51:56Z"}
{"component":"entrypoint","error":"os: process already finished","file":"prow/entrypoint/run.go:257","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2021-11-22T20:51:56Z"}