Result: FAILURE
Tests: 1 failed / 8 succeeded
Started: 2021-11-14 06:33
Elapsed: 2h1m
Revision: main

Test Failures


capz-e2e Workload cluster creation Creating a Windows enabled VMSS cluster with dockershim with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node (42m53s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\sWindows\senabled\sVMSS\scluster\swith\sdockershim\swith\sa\ssingle\scontrol\splane\snode\sand\san\sLinux\sAzureMachinePool\swith\s1\snodes\sand\sWindows\sAzureMachinePool\swith\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:579
Timed out after 900.000s.
Expected
    <int>: 0
to equal
    <int>: 1
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/framework/machinepool_helpers.go:85
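
The failure above is a Gomega polling timeout raised from the Cluster API test framework's machine pool helpers: after 900 seconds the observed value (0) never reached the expected value (1), i.e. the machine pool never reported the expected number of ready instances. A minimal sketch of an assertion of this shape, with hypothetical helper names rather than the framework's actual code:

package e2e

import (
	"time"

	. "github.com/onsi/gomega"
)

// waitForMachinePoolReplicas polls a hypothetical getReadyReplicas helper until it
// returns the expected count. If the count never matches within the timeout, Gomega
// fails with a message of the form seen above:
// "Timed out after 900.000s. Expected <int>: 0 to equal <int>: 1".
func waitForMachinePoolReplicas(getReadyReplicas func() int, expected int) {
	Eventually(getReadyReplicas, 15*time.Minute, 10*time.Second).Should(Equal(expected))
}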
				
Stdout/stderr: junit.e2e_suite.1.xml



8 Passed Tests

14 Skipped Tests

Error lines from build-log.txt

... skipping 426 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:288

INFO: "With ipv6 worker node" started at Sun, 14 Nov 2021 06:39:51 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-nmar1k" for hosting the cluster
Nov 14 06:39:51.765: INFO: starting to create namespace for hosting the "capz-e2e-nmar1k" test spec
2021/11/14 06:39:51 failed trying to get namespace (capz-e2e-nmar1k):namespaces "capz-e2e-nmar1k" not found
INFO: Creating namespace capz-e2e-nmar1k
INFO: Creating event watcher for namespace "capz-e2e-nmar1k"
Nov 14 06:39:51.901: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-nmar1k-ipv6
INFO: Creating the workload cluster with name "capz-e2e-nmar1k-ipv6" using the "ipv6" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
STEP: Fetching activity logs took 601.854913ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-nmar1k" namespace
STEP: Deleting all clusters in the capz-e2e-nmar1k namespace
STEP: Deleting cluster capz-e2e-nmar1k-ipv6
INFO: Waiting for the Cluster capz-e2e-nmar1k/capz-e2e-nmar1k-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-nmar1k-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-ggl8z, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-tbvkt, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-4ctdg, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-nmar1k-ipv6-control-plane-sx48s, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-4fhsc, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-nmar1k-ipv6-control-plane-sx48s, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-9rcqk, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-nmar1k-ipv6-control-plane-sx48s, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-nmar1k-ipv6-control-plane-w5z49, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-xphdb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-nmar1k-ipv6-control-plane-w5z49, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-nmar1k-ipv6-control-plane-w5z49, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-nmar1k-ipv6-control-plane-w5z49, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-spdbl, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-nmar1k-ipv6-control-plane-sx48s, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-kbnct, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-wbpmz, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-nmar1k
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 15m47s on Ginkgo node 1 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:205

INFO: "With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes" started at Sun, 14 Nov 2021 06:39:51 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-yb9oql" for hosting the cluster
Nov 14 06:39:51.765: INFO: starting to create namespace for hosting the "capz-e2e-yb9oql" test spec
2021/11/14 06:39:51 failed trying to get namespace (capz-e2e-yb9oql):namespaces "capz-e2e-yb9oql" not found
INFO: Creating namespace capz-e2e-yb9oql
INFO: Creating event watcher for namespace "capz-e2e-yb9oql"
Nov 14 06:39:51.895: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-yb9oql-ha
INFO: Creating the workload cluster with name "capz-e2e-yb9oql-ha" using the "(default)" template (Kubernetes v1.22.1, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 67 lines ...
STEP: waiting for job default/curl-to-elb-jobdevxccozsxs to be complete
Nov 14 06:49:54.829: INFO: waiting for job default/curl-to-elb-jobdevxccozsxs to be complete
Nov 14 06:50:04.987: INFO: job default/curl-to-elb-jobdevxccozsxs is complete, took 10.157500055s
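
The curl-to-elb job referenced above is a one-shot workload the test creates to prove the external load balancer answers from inside the cluster. A sketch of a Job of that shape using the upstream batch/v1 and core/v1 Go types; the image, command, and function name are illustrative assumptions, not the test's actual manifest:

package e2e

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// curlToELBJob builds a one-shot pod that curls the external load balancer IP,
// retrying (via RestartPolicyOnFailure) until the request succeeds.
func curlToELBJob(name, elbIP string) *batchv1.Job {
	return &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: "default"},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:    "curl",
						Image:   "curlimages/curl", // illustrative image choice
						Command: []string{"curl", "--fail", "http://" + elbIP},
					}},
				},
			},
		},
	}
}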
STEP: connecting directly to the external LB service
Nov 14 06:50:04.987: INFO: starting attempts to connect directly to the external LB service
2021/11/14 06:50:04 [DEBUG] GET http://20.99.206.89
2021/11/14 06:50:34 [ERR] GET http://20.99.206.89 request failed: Get "http://20.99.206.89": dial tcp 20.99.206.89:80: i/o timeout
2021/11/14 06:50:34 [DEBUG] GET http://20.99.206.89: retrying in 1s (4 left)
2021/11/14 06:51:05 [ERR] GET http://20.99.206.89 request failed: Get "http://20.99.206.89": dial tcp 20.99.206.89:80: i/o timeout
2021/11/14 06:51:05 [DEBUG] GET http://20.99.206.89: retrying in 2s (3 left)
Nov 14 06:51:08.100: INFO: successfully connected to the external LB service
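
The connection attempts above show a bounded retry loop: each GET times out after roughly 30 seconds and is retried with a short backoff until it succeeds. The "[DEBUG] GET ... retrying in 1s (4 left)" format matches hashicorp/go-retryablehttp's default logger, so here is a minimal sketch assuming that library (the URL and retry budget are illustrative):

package main

import (
	"fmt"
	"log"

	retryablehttp "github.com/hashicorp/go-retryablehttp"
)

func main() {
	client := retryablehttp.NewClient()
	client.RetryMax = 4 // assumed budget; the "(4 left)" countdown suggests a limit of this order

	// Get retries transparently on connection errors, logging each attempt,
	// and only returns an error once the retry budget is exhausted.
	resp, err := client.Get("http://20.99.206.89")
	if err != nil {
		log.Fatalf("external LB never became reachable: %v", err)
	}
	defer resp.Body.Close()
	fmt.Println("connected to the external LB service:", resp.Status)
}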
STEP: deleting the test resources
Nov 14 06:51:08.100: INFO: starting to delete external LB service webga5d6a-elb
Nov 14 06:51:08.234: INFO: starting to delete deployment webga5d6a
Nov 14 06:51:08.300: INFO: starting to delete job curl-to-elb-jobdevxccozsxs
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Nov 14 06:51:08.400: INFO: starting to create dev deployment namespace
2021/11/14 06:51:08 failed trying to get namespace (development):namespaces "development" not found
2021/11/14 06:51:08 namespace development does not exist, creating...
STEP: Creating production namespace
Nov 14 06:51:08.536: INFO: starting to create prod deployment namespace
2021/11/14 06:51:08 failed trying to get namespace (production):namespaces "production" not found
2021/11/14 06:51:08 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
Nov 14 06:51:08.689: INFO: starting to create frontend-prod deployments
Nov 14 06:51:08.767: INFO: starting to create frontend-dev deployments
Nov 14 06:51:08.880: INFO: starting to create backend deployments
Nov 14 06:51:08.957: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Nov 14 06:51:33.237: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.233.197 port 80: Connection timed out
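
The backend-deny-ingress policy applied above selects the app: webapp, role: backend pods in the development namespace and lists Ingress in policyTypes with no ingress rules, which denies all inbound traffic and is why the curl from the network-policy pod times out as expected. A sketch of an equivalent object built with the upstream networking/v1 Go types (field values are inferred from the log, not taken from the test's manifests):

package e2e

import (
	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// backendDenyIngressPolicy builds a deny-all-ingress policy for the backend pods.
func backendDenyIngressPolicy() *networkingv1.NetworkPolicy {
	return &networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "backend-deny-ingress",
			Namespace: "development",
		},
		Spec: networkingv1.NetworkPolicySpec{
			// Select the backend pods the test targets with curl.
			PodSelector: metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "webapp", "role": "backend"},
			},
			// Ingress is listed but no ingress rules are given, so all
			// inbound traffic to the selected pods is denied.
			PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
		},
	}
}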

STEP: Cleaning up after ourselves
Nov 14 06:53:43.090: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
STEP: Applying a network policy to deny egress access in development namespace
Nov 14 06:53:43.328: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.233.197 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.233.197 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 14 06:58:05.070: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Nov 14 06:58:05.315: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.245.66 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 14 07:00:16.142: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
Nov 14 07:00:16.368: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.233.195 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.245.66 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 14 07:04:38.286: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Nov 14 07:04:38.518: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.233.197 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 14 07:06:49.527: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
Nov 14 07:06:49.764: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.233.197 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web-windowsjde6qy to be available
Nov 14 07:09:01.331: INFO: starting to wait for deployment to become available
Nov 14 07:09:51.712: INFO: Deployment default/web-windowsjde6qy is now available, took 50.381016284s
... skipping 20 lines ...
STEP: waiting for job default/curl-to-elb-jobikbwtkme5jk to be complete
Nov 14 07:10:42.768: INFO: waiting for job default/curl-to-elb-jobikbwtkme5jk to be complete
Nov 14 07:10:52.883: INFO: job default/curl-to-elb-jobikbwtkme5jk is complete, took 10.115771674s
STEP: connecting directly to the external LB service
Nov 14 07:10:52.883: INFO: starting attempts to connect directly to the external LB service
2021/11/14 07:10:52 [DEBUG] GET http://52.149.17.182
2021/11/14 07:11:22 [ERR] GET http://52.149.17.182 request failed: Get "http://52.149.17.182": dial tcp 52.149.17.182:80: i/o timeout
2021/11/14 07:11:22 [DEBUG] GET http://52.149.17.182: retrying in 1s (4 left)
2021/11/14 07:11:53 [ERR] GET http://52.149.17.182 request failed: Get "http://52.149.17.182": dial tcp 52.149.17.182:80: i/o timeout
2021/11/14 07:11:53 [DEBUG] GET http://52.149.17.182: retrying in 2s (3 left)
Nov 14 07:11:56.001: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 14 07:11:56.001: INFO: starting to delete external LB service web-windowsjde6qy-elb
Nov 14 07:11:56.093: INFO: starting to delete deployment web-windowsjde6qy
Nov 14 07:11:56.165: INFO: starting to delete job curl-to-elb-jobikbwtkme5jk
... skipping 20 lines ...
Nov 14 07:13:02.570: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-yb9oql-ha-md-0-lxd77

Nov 14 07:13:02.952: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster capz-e2e-yb9oql-ha in namespace capz-e2e-yb9oql

Nov 14 07:13:38.457: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-yb9oql-ha-md-win-c2s22

Failed to get logs for machine capz-e2e-yb9oql-ha-md-win-6d56646f88-jctmv, cluster capz-e2e-yb9oql/capz-e2e-yb9oql-ha: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov 14 07:13:38.765: INFO: INFO: Collecting logs for node 10.1.0.7 in cluster capz-e2e-yb9oql-ha in namespace capz-e2e-yb9oql

Nov 14 07:14:11.431: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-yb9oql-ha-md-win-9xgp2

Failed to get logs for machine capz-e2e-yb9oql-ha-md-win-6d56646f88-w2csh, cluster capz-e2e-yb9oql/capz-e2e-yb9oql-ha: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster capz-e2e-yb9oql/capz-e2e-yb9oql-ha kube-system pod logs
STEP: Fetching kube-system pod logs took 529.318997ms
STEP: Dumping workload cluster capz-e2e-yb9oql/capz-e2e-yb9oql-ha Azure activity log
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-yb9oql-ha-control-plane-vf98g, container etcd
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-96qdf, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-proxy-fwkcl, container kube-proxy
... skipping 22 lines ...
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-yb9oql-ha-control-plane-96spj, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-node-z2ccn, container calico-node
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-yb9oql-ha-control-plane-96spj, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-yb9oql-ha-control-plane-vf98g, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-yb9oql-ha-control-plane-vf98g, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-94j9j, container kube-proxy
STEP: Got error while iterating over activity logs for resource group capz-e2e-yb9oql-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.00031157s
STEP: Dumping all the Cluster API resources in the "capz-e2e-yb9oql" namespace
STEP: Deleting all clusters in the capz-e2e-yb9oql namespace
STEP: Deleting cluster capz-e2e-yb9oql-ha
INFO: Waiting for the Cluster capz-e2e-yb9oql/capz-e2e-yb9oql-ha to be deleted
STEP: Waiting for cluster capz-e2e-yb9oql-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-gkf66, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-6kcts, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-gl57d, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-yb9oql-ha-control-plane-96spj, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-qsmqg, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-yb9oql-ha-control-plane-5h4xk, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-yb9oql-ha-control-plane-96spj, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-76xrn, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-llwn8, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-x6smb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-96qdf, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-6kcts, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-yb9oql-ha-control-plane-5h4xk, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-yb9oql-ha-control-plane-5h4xk, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-yb9oql-ha-control-plane-5h4xk, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-yb9oql-ha-control-plane-vf98g, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-hpqfv, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-nmdkb, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-qsmqg, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-yb9oql-ha-control-plane-vf98g, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-jwwqz, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-yb9oql-ha-control-plane-96spj, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-8w9nx, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-4x9vw, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-yb9oql-ha-control-plane-vf98g, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-ds8sc, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-yb9oql-ha-control-plane-vf98g, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-yb9oql-ha-control-plane-96spj, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-94j9j, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-yb9oql
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes" ran for 44m43s on Ginkgo node 2 of 3

... skipping 8 lines ...
  with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:334

INFO: "with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes" started at Sun, 14 Nov 2021 06:55:38 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-pl6rgd" for hosting the cluster
Nov 14 06:55:38.783: INFO: starting to create namespace for hosting the "capz-e2e-pl6rgd" test spec
2021/11/14 06:55:38 failed trying to get namespace (capz-e2e-pl6rgd):namespaces "capz-e2e-pl6rgd" not found
INFO: Creating namespace capz-e2e-pl6rgd
INFO: Creating event watcher for namespace "capz-e2e-pl6rgd"
Nov 14 06:55:38.812: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-pl6rgd-vmss
INFO: Creating the workload cluster with name "capz-e2e-pl6rgd-vmss" using the "machine-pool" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 60 lines ...
STEP: waiting for job default/curl-to-elb-jobteyc8f2lmtk to be complete
Nov 14 07:08:22.152: INFO: waiting for job default/curl-to-elb-jobteyc8f2lmtk to be complete
Nov 14 07:08:32.270: INFO: job default/curl-to-elb-jobteyc8f2lmtk is complete, took 10.117952855s
STEP: connecting directly to the external LB service
Nov 14 07:08:32.270: INFO: starting attempts to connect directly to the external LB service
2021/11/14 07:08:32 [DEBUG] GET http://52.148.150.179
2021/11/14 07:09:02 [ERR] GET http://52.148.150.179 request failed: Get "http://52.148.150.179": dial tcp 52.148.150.179:80: i/o timeout
2021/11/14 07:09:02 [DEBUG] GET http://52.148.150.179: retrying in 1s (4 left)
Nov 14 07:09:18.797: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 14 07:09:18.797: INFO: starting to delete external LB service webwur9qw-elb
Nov 14 07:09:18.897: INFO: starting to delete deployment webwur9qw
Nov 14 07:09:18.971: INFO: starting to delete job curl-to-elb-jobteyc8f2lmtk
... skipping 25 lines ...
STEP: waiting for job default/curl-to-elb-jobwrxxo1rr41w to be complete
Nov 14 07:13:11.426: INFO: waiting for job default/curl-to-elb-jobwrxxo1rr41w to be complete
Nov 14 07:13:21.545: INFO: job default/curl-to-elb-jobwrxxo1rr41w is complete, took 10.11911579s
STEP: connecting directly to the external LB service
Nov 14 07:13:21.545: INFO: starting attempts to connect directly to the external LB service
2021/11/14 07:13:21 [DEBUG] GET http://52.148.146.62
2021/11/14 07:13:51 [ERR] GET http://52.148.146.62 request failed: Get "http://52.148.146.62": dial tcp 52.148.146.62:80: i/o timeout
2021/11/14 07:13:51 [DEBUG] GET http://52.148.146.62: retrying in 1s (4 left)
Nov 14 07:14:08.118: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 14 07:14:08.118: INFO: starting to delete external LB service web-windowscqn65n-elb
Nov 14 07:14:08.209: INFO: starting to delete deployment web-windowscqn65n
Nov 14 07:14:08.268: INFO: starting to delete job curl-to-elb-jobwrxxo1rr41w
... skipping 33 lines ...
Nov 14 07:21:00.744: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-e2e-pl6rgd-vmss-mp-0

Nov 14 07:21:01.123: INFO: INFO: Collecting logs for node win-p-win000001 in cluster capz-e2e-pl6rgd-vmss in namespace capz-e2e-pl6rgd

Nov 14 07:21:19.528: INFO: INFO: Collecting boot logs for VMSS instance 1 of scale set capz-e2e-pl6rgd-vmss-mp-0

Failed to get logs for machine pool capz-e2e-pl6rgd-vmss-mp-0, cluster capz-e2e-pl6rgd/capz-e2e-pl6rgd-vmss: [[running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1], [running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1]]
Nov 14 07:21:19.848: INFO: INFO: Collecting logs for node win-p-win000000 in cluster capz-e2e-pl6rgd-vmss in namespace capz-e2e-pl6rgd

Nov 14 07:22:03.706: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set win-p-win

Nov 14 07:22:04.051: INFO: INFO: Collecting logs for node win-p-win000001 in cluster capz-e2e-pl6rgd-vmss in namespace capz-e2e-pl6rgd

Nov 14 07:22:49.754: INFO: INFO: Collecting boot logs for VMSS instance 1 of scale set win-p-win

Failed to get logs for machine pool capz-e2e-pl6rgd-vmss-mp-win, cluster capz-e2e-pl6rgd/capz-e2e-pl6rgd-vmss: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster capz-e2e-pl6rgd/capz-e2e-pl6rgd-vmss kube-system pod logs
STEP: Fetching kube-system pod logs took 622.474807ms
STEP: Creating log watcher for controller kube-system/calico-node-windows-w2sss, container calico-node-felix
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-pl6rgd-vmss-control-plane-gwt8v, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-node-7w2k4, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-4dkzk, container coredns
... skipping 10 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-76z6v, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-8j8bt, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-windows-b87db, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-pl6rgd-vmss-control-plane-gwt8v, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-f77b7, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-bt7f7, container kube-proxy
STEP: Got error while iterating over activity logs for resource group capz-e2e-pl6rgd-vmss: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000874626s
STEP: Dumping all the Cluster API resources in the "capz-e2e-pl6rgd" namespace
STEP: Deleting all clusters in the capz-e2e-pl6rgd namespace
STEP: Deleting cluster capz-e2e-pl6rgd-vmss
INFO: Waiting for the Cluster capz-e2e-pl6rgd/capz-e2e-pl6rgd-vmss to be deleted
STEP: Waiting for cluster capz-e2e-pl6rgd-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-4dkzk, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-b87db, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vh6bl, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-7w2k4, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-pl6rgd-vmss-control-plane-gwt8v, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-b87db, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-w2sss, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-rmkpq, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-lr2r8, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-zw477, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-pl6rgd-vmss-control-plane-gwt8v, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-pl6rgd-vmss-control-plane-gwt8v, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-w2sss, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-2r4cv, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-76z6v, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-pl6rgd-vmss-control-plane-gwt8v, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-f77b7, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-8j8bt, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-pl6rgd
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes" ran for 38m6s on Ginkgo node 1 of 3

... skipping 10 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:144

INFO: "Creates a public management cluster in the same vnet" started at Sun, 14 Nov 2021 06:39:51 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-yk3fw7" for hosting the cluster
Nov 14 06:39:51.764: INFO: starting to create namespace for hosting the "capz-e2e-yk3fw7" test spec
2021/11/14 06:39:51 failed trying to get namespace (capz-e2e-yk3fw7):namespaces "capz-e2e-yk3fw7" not found
INFO: Creating namespace capz-e2e-yk3fw7
INFO: Creating event watcher for namespace "capz-e2e-yk3fw7"
Nov 14 06:39:51.887: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-yk3fw7-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-cjsv8, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-yk3fw7-public-custom-vnet-control-plane-fj5ff, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-z8zh7, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-9p5jz, container coredns
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-lfn95, container coredns
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-yk3fw7-public-custom-vnet-control-plane-fj5ff, container kube-controller-manager
STEP: Got error while iterating over activity logs for resource group capz-e2e-yk3fw7-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000662764s
STEP: Dumping all the Cluster API resources in the "capz-e2e-yk3fw7" namespace
STEP: Deleting all clusters in the capz-e2e-yk3fw7 namespace
STEP: Deleting cluster capz-e2e-yk3fw7-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-yk3fw7/capz-e2e-yk3fw7-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-yk3fw7-public-custom-vnet to be deleted
W1114 07:32:01.781610   24505 reflector.go:441] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I1114 07:32:33.292207   24505 trace.go:205] Trace[483674372]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (14-Nov-2021 07:32:03.290) (total time: 30001ms):
Trace[483674372]: [30.001305216s] [30.001305216s] END
E1114 07:32:33.292266   24505 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-yk3fw7/events?resourceVersion=11775": dial tcp 20.99.206.143:6443: i/o timeout
I1114 07:33:05.265106   24505 trace.go:205] Trace[1317283130]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (14-Nov-2021 07:32:35.264) (total time: 30000ms):
Trace[1317283130]: [30.00067355s] [30.00067355s] END
E1114 07:33:05.265148   24505 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-yk3fw7/events?resourceVersion=11775": dial tcp 20.99.206.143:6443: i/o timeout
I1114 07:33:40.397190   24505 trace.go:205] Trace[1920223362]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (14-Nov-2021 07:33:10.396) (total time: 30000ms):
Trace[1920223362]: [30.000869153s] [30.000869153s] END
E1114 07:33:40.397253   24505 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-yk3fw7/events?resourceVersion=11775": dial tcp 20.99.206.143:6443: i/o timeout
I1114 07:34:19.358669   24505 trace.go:205] Trace[995782285]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (14-Nov-2021 07:33:49.357) (total time: 30000ms):
Trace[995782285]: [30.000681447s] [30.000681447s] END
E1114 07:34:19.358733   24505 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-yk3fw7/events?resourceVersion=11775": dial tcp 20.99.206.143:6443: i/o timeout
E1114 07:34:32.387666   24505 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-yk3fw7/events?resourceVersion=11775": dial tcp: lookup capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-yk3fw7
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Nov 14 07:34:50.468: INFO: deleting an existing virtual network "custom-vnet"
E1114 07:34:59.579861   24505 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-yk3fw7/events?resourceVersion=11775": dial tcp: lookup capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 14 07:35:01.456: INFO: deleting an existing route table "node-routetable"
Nov 14 07:35:12.106: INFO: deleting an existing network security group "node-nsg"
Nov 14 07:35:22.609: INFO: deleting an existing network security group "control-plane-nsg"
Nov 14 07:35:33.100: INFO: verifying the existing resource group "capz-e2e-yk3fw7-public-custom-vnet" is empty
Nov 14 07:35:35.640: INFO: deleting the existing resource group "capz-e2e-yk3fw7-public-custom-vnet"
E1114 07:35:37.639995   24505 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-yk3fw7/events?resourceVersion=11775": dial tcp: lookup capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1114 07:36:24.937624   24505 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-yk3fw7/events?resourceVersion=11775": dial tcp: lookup capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 57m13s on Ginkgo node 3 of 3


• [SLOW TEST:3432.855 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:413

INFO: "with a single control plane node and 1 node" started at Sun, 14 Nov 2021 07:24:34 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-7gt4zt" for hosting the cluster
Nov 14 07:24:34.320: INFO: starting to create namespace for hosting the "capz-e2e-7gt4zt" test spec
2021/11/14 07:24:34 failed trying to get namespace (capz-e2e-7gt4zt):namespaces "capz-e2e-7gt4zt" not found
INFO: Creating namespace capz-e2e-7gt4zt
INFO: Creating event watcher for namespace "capz-e2e-7gt4zt"
Nov 14 07:24:34.351: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-7gt4zt-gpu
INFO: Creating the workload cluster with name "capz-e2e-7gt4zt-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 58 lines ...
STEP: Fetching activity logs took 501.639072ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-7gt4zt" namespace
STEP: Deleting all clusters in the capz-e2e-7gt4zt namespace
STEP: Deleting cluster capz-e2e-7gt4zt-gpu
INFO: Waiting for the Cluster capz-e2e-7gt4zt/capz-e2e-7gt4zt-gpu to be deleted
STEP: Waiting for cluster capz-e2e-7gt4zt-gpu to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-t4fbw, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-zvrrn, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-7gt4zt
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and 1 node" ran for 19m20s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a 1 control plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:455

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Sun, 14 Nov 2021 07:33:44 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-72pm2r" for hosting the cluster
Nov 14 07:33:44.377: INFO: starting to create namespace for hosting the "capz-e2e-72pm2r" test spec
2021/11/14 07:33:44 failed trying to get namespace (capz-e2e-72pm2r):namespaces "capz-e2e-72pm2r" not found
INFO: Creating namespace capz-e2e-72pm2r
INFO: Creating event watcher for namespace "capz-e2e-72pm2r"
Nov 14 07:33:44.419: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-72pm2r-oot
INFO: Creating the workload cluster with name "capz-e2e-72pm2r-oot" using the "external-cloud-provider" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 98 lines ...
STEP: Fetching activity logs took 565.949824ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-72pm2r" namespace
STEP: Deleting all clusters in the capz-e2e-72pm2r namespace
STEP: Deleting cluster capz-e2e-72pm2r-oot
INFO: Waiting for the Cluster capz-e2e-72pm2r/capz-e2e-72pm2r-oot to be deleted
STEP: Waiting for cluster capz-e2e-72pm2r-oot to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-5v65v, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-mgqbj, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-mfgps, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-mgv7l, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-dwdn5, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-controller-manager, container cloud-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-qh889, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-72pm2r-oot-control-plane-gx4wf, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-fwl6n, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-skb9c, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-gcskr, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-llsv5, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-wwm6n, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-72pm2r-oot-control-plane-gx4wf, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-sbq65, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-72pm2r-oot-control-plane-gx4wf, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-72pm2r-oot-control-plane-gx4wf, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-72pm2r
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 16m38s on Ginkgo node 1 of 3

... skipping 10 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:490

INFO: "with a single control plane node and 1 node" started at Sun, 14 Nov 2021 07:37:04 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-i175t4" for hosting the cluster
Nov 14 07:37:04.621: INFO: starting to create namespace for hosting the "capz-e2e-i175t4" test spec
2021/11/14 07:37:04 failed trying to get namespace (capz-e2e-i175t4):namespaces "capz-e2e-i175t4" not found
INFO: Creating namespace capz-e2e-i175t4
INFO: Creating event watcher for namespace "capz-e2e-i175t4"
Nov 14 07:37:04.668: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-i175t4-aks
INFO: Creating the workload cluster with name "capz-e2e-i175t4-aks" using the "aks-multi-tenancy" template (Kubernetes v1.19.13, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 7 lines ...
machinepool.cluster.x-k8s.io/agentpool1 created
azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1114 07:37:15.636806   24505 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-yk3fw7/events?resourceVersion=11775": dial tcp: lookup capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 07:37:59.472317   24505 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-yk3fw7/events?resourceVersion=11775": dial tcp: lookup capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 07:38:42.528687   24505 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-yk3fw7/events?resourceVersion=11775": dial tcp: lookup capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 07:39:31.197668   24505 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-yk3fw7/events?resourceVersion=11775": dial tcp: lookup capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 07:40:21.442758   24505 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-yk3fw7/events?resourceVersion=11775": dial tcp: lookup capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 07:41:03.685471   24505 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-yk3fw7/events?resourceVersion=11775": dial tcp: lookup capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
Nov 14 07:41:06.054: INFO: Waiting for the first control plane machine managed by capz-e2e-i175t4/capz-e2e-i175t4-aks to be provisioned
STEP: Waiting for atleast one control plane node to exist
INFO: Waiting for control plane to be ready
Nov 14 07:41:16.085: INFO: Waiting for the first control plane machine managed by capz-e2e-i175t4/capz-e2e-i175t4-aks to be provisioned
STEP: Waiting for all control plane nodes to exist
... skipping 13 lines ...
STEP: time sync OK for host aks-agentpool1-14945919-vmss000000
STEP: time sync OK for host aks-agentpool1-14945919-vmss000000
STEP: Dumping logs from the "capz-e2e-i175t4-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-i175t4/capz-e2e-i175t4-aks logs
Nov 14 07:41:23.668: INFO: INFO: Collecting logs for node aks-agentpool1-14945919-vmss000000 in cluster capz-e2e-i175t4-aks in namespace capz-e2e-i175t4

E1114 07:41:37.975819   24505 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-yk3fw7/events?resourceVersion=11775": dial tcp: lookup capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 07:42:26.766056   24505 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-yk3fw7/events?resourceVersion=11775": dial tcp: lookup capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 07:43:03.891232   24505 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-yk3fw7/events?resourceVersion=11775": dial tcp: lookup capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 14 07:43:34.110: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool0, cluster capz-e2e-i175t4/capz-e2e-i175t4-aks: [dialing public load balancer at capz-e2e-i175t4-aks-4ea816fe.hcp.westus2.azmk8s.io: dial tcp 20.59.2.125:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
Nov 14 07:43:34.610: INFO: INFO: Collecting logs for node aks-agentpool1-14945919-vmss000000 in cluster capz-e2e-i175t4-aks in namespace capz-e2e-i175t4

E1114 07:43:39.282231   24505 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-yk3fw7/events?resourceVersion=11775": dial tcp: lookup capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 07:44:14.677189   24505 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-yk3fw7/events?resourceVersion=11775": dial tcp: lookup capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 07:45:10.886756   24505 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-yk3fw7/events?resourceVersion=11775": dial tcp: lookup capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 14 07:45:45.182: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool1, cluster capz-e2e-i175t4/capz-e2e-i175t4-aks: [dialing public load balancer at capz-e2e-i175t4-aks-4ea816fe.hcp.westus2.azmk8s.io: dial tcp 20.59.2.125:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
STEP: Dumping workload cluster capz-e2e-i175t4/capz-e2e-i175t4-aks kube-system pod logs
STEP: Fetching kube-system pod logs took 553.897873ms
STEP: Dumping workload cluster capz-e2e-i175t4/capz-e2e-i175t4-aks Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-84d976c568-bfqpp, container coredns
STEP: Creating log watcher for controller kube-system/calico-typha-deployment-76cb9744d8-p8skt, container calico-typha
STEP: Creating log watcher for controller kube-system/kube-proxy-9t5gz, container kube-proxy
... skipping 8 lines ...
STEP: Fetching activity logs took 526.219418ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-i175t4" namespace
STEP: Deleting all clusters in the capz-e2e-i175t4 namespace
STEP: Deleting cluster capz-e2e-i175t4-aks
INFO: Waiting for the Cluster capz-e2e-i175t4/capz-e2e-i175t4-aks to be deleted
STEP: Waiting for cluster capz-e2e-i175t4-aks to be deleted
E1114 07:46:00.408197   24505 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-yk3fw7/events?resourceVersion=11775": dial tcp: lookup capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 07:46:33.388437   24505 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-yk3fw7/events?resourceVersion=11775": dial tcp: lookup capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 07:47:32.327305   24505 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-yk3fw7/events?resourceVersion=11775": dial tcp: lookup capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 07:48:14.943705   24505 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-yk3fw7/events?resourceVersion=11775": dial tcp: lookup capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 07:49:10.697179   24505 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-yk3fw7/events?resourceVersion=11775": dial tcp: lookup capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 07:49:49.765672   24505 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-yk3fw7/events?resourceVersion=11775": dial tcp: lookup capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-i175t4
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1114 07:50:29.319211   24505 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-yk3fw7/events?resourceVersion=11775": dial tcp: lookup capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E1114 07:51:27.266700   24505 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-yk3fw7/events?resourceVersion=11775": dial tcp: lookup capz-e2e-yk3fw7-public-custom-vnet-1ebe9d1d.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "with a single control plane node and 1 node" ran for 14m38s on Ginkgo node 3 of 3


• [SLOW TEST:877.693 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:532

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Sun, 14 Nov 2021 07:43:54 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-3kz00c" for hosting the cluster
Nov 14 07:43:54.275: INFO: starting to create namespace for hosting the "capz-e2e-3kz00c" test spec
2021/11/14 07:43:54 failed trying to get namespace (capz-e2e-3kz00c):namespaces "capz-e2e-3kz00c" not found
INFO: Creating namespace capz-e2e-3kz00c
INFO: Creating event watcher for namespace "capz-e2e-3kz00c"
Nov 14 07:43:54.319: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-3kz00c-win-ha
INFO: Creating the workload cluster with name "capz-e2e-3kz00c-win-ha" using the "windows" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 157 lines ...
STEP: Fetching activity logs took 1.363935921s
STEP: Dumping all the Cluster API resources in the "capz-e2e-3kz00c" namespace
STEP: Deleting all clusters in the capz-e2e-3kz00c namespace
STEP: Deleting cluster capz-e2e-3kz00c-win-ha
INFO: Waiting for the Cluster capz-e2e-3kz00c/capz-e2e-3kz00c-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-3kz00c-win-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-fn4x5, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-3kz00c-win-ha-control-plane-ndwnc, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-3kz00c-win-ha-control-plane-l8hxh, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-z4wrj, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-fjrxq, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-3kz00c-win-ha-control-plane-ndwnc, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-7fqcm, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-rjzm8, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-3kz00c-win-ha-control-plane-ndwnc, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-3kz00c-win-ha-control-plane-l8hxh, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-r7zhq, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-x8vph, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-24j7b, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-p8hbq, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-3kz00c-win-ha-control-plane-l8hxh, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-4czgp, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-3kz00c-win-ha-control-plane-l8hxh, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-3kz00c-win-ha-control-plane-ndwnc, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-3kz00c
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 33m58s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:579

INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Sun, 14 Nov 2021 07:50:22 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-9eol44" for hosting the cluster
Nov 14 07:50:22.407: INFO: starting to create namespace for hosting the "capz-e2e-9eol44" test spec
2021/11/14 07:50:22 failed trying to get namespace (capz-e2e-9eol44):namespaces "capz-e2e-9eol44" not found
INFO: Creating namespace capz-e2e-9eol44
INFO: Creating event watcher for namespace "capz-e2e-9eol44"
Nov 14 07:50:22.439: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-9eol44-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-9eol44-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 48 lines ...
STEP: Fetching activity logs took 1.046858204s
STEP: Dumping all the Cluster API resources in the "capz-e2e-9eol44" namespace
STEP: Deleting all clusters in the capz-e2e-9eol44 namespace
STEP: Deleting cluster capz-e2e-9eol44-win-vmss
INFO: Waiting for the Cluster capz-e2e-9eol44/capz-e2e-9eol44-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-9eol44-win-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-86wrh, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-ftxn5, container kube-flannel: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-9eol44
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 42m53s on Ginkgo node 1 of 3

... skipping 55 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Workload cluster creation Creating a Windows enabled VMSS cluster with dockershim [It] with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/framework/machinepool_helpers.go:85
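
The failure summarized above points at cluster-api's machinepool_helpers.go, which polls a MachinePool until the expected number of ready replicas appears and fails the spec when the count never gets there within the timeout. Below is a minimal sketch of that style of wait, assuming a controller-runtime client and Gomega registered under Ginkgo; the names are illustrative and this is not the upstream helper code:

package e2e

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	expv1 "sigs.k8s.io/cluster-api/exp/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// waitForMachinePoolReplicas polls the MachinePool's ready-replica count and
// fails the spec (via Gomega's fail handler) if it never reaches want before
// the timeout expires, producing an "Expected <int> to equal <int>" message.
func waitForMachinePoolReplicas(ctx context.Context, c client.Client, key client.ObjectKey, want int) {
	Eventually(func() int {
		mp := &expv1.MachinePool{}
		if err := c.Get(ctx, key, mp); err != nil {
			return 0 // treat transient lookup errors as "not ready yet"
		}
		return int(mp.Status.ReadyReplicas)
	}, 15*time.Minute, 10*time.Second).Should(Equal(want))
}
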

Ran 9 of 23 Specs in 6922.454 seconds
FAIL! -- 8 Passed | 1 Failed | 0 Pending | 14 Skipped


Ginkgo ran 1 suite in 1h56m40.39107326s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc
make[1]: *** [Makefile:176: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:184: test-e2e] Error 2
================ REDACTING LOGS ================
{"component":"entrypoint","file":"prow/entrypoint/run.go:165","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","severity":"error","time":"2021-11-14T08:33:21Z"}
++ early_exit_handler
++ '[' -n 160 ']'
++ kill -TERM 160
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 10 lines ...
Cleaning up after docker
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Stopping Docker: dockerProgram process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.
All sensitive variables are redacted
{"component":"entrypoint","file":"prow/entrypoint/run.go:252","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process gracefully exited before 15m0s grace period","severity":"error","time":"2021-11-14T08:34:46Z"}