PR: k8s-infra-cherrypick-robot: [release-1.5] support ccm to read config from secret
Result: FAILURE
Tests: 1 failed / 3 succeeded
Started: 2022-09-28 01:50
Elapsed: 1h31m
Revision: 9b8e7f971c47e01f1b3a0cf05b96cbd4948c67ac
Refs: 2676

Test Failures


capz-e2e Workload cluster creation Creating a highly available cluster [REQUIRED] With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes (1h18m)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\shighly\savailable\scluster\s\[REQUIRED\]\sWith\s3\scontrol\-plane\snodes\sand\s2\sLinux\sand\s2\sWindows\sworker\snodes$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:115
Timed out after 1800.008s.
Expected
    <bool>: false
to be true
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.1/framework/cluster_helpers.go:175
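
The failure shape above ("Expected <bool>: false to be true" after roughly 1800s) is what a Gomega Eventually assertion produces when a polled condition never becomes true before its timeout. A minimal sketch of that pattern, assuming a hypothetical allMachinesReady condition and illustrative timeout/poll values (not the actual helper in cluster-api's cluster_helpers.go):

```go
// Hedged sketch only: a Gomega Eventually wait in the style that yields the
// timeout above. The condition, timeout, and poll interval are illustrative,
// not the framework's actual values.
package e2e

import (
	"testing"
	"time"

	"github.com/onsi/gomega"
)

// allMachinesReady is a hypothetical stand-in for the condition the framework
// polls (for example, every control-plane and worker machine reporting Ready).
func allMachinesReady() bool {
	// ...query the management cluster here...
	return false
}

func TestWaitForMachines(t *testing.T) {
	g := gomega.NewWithT(t)

	// If the condition never becomes true within the timeout, Gomega fails with:
	//   Timed out after 1800.xxxs.
	//   Expected
	//       <bool>: false
	//   to be true
	g.Eventually(allMachinesReady, 30*time.Minute, 10*time.Second).Should(gomega.BeTrue())
}
```

The sketch only illustrates the assertion style behind the error text; the condition actually being polled in this run lives at cluster_helpers.go:175 in the cluster-api test framework.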
				
Click to see stdout/stderr from junit.e2e_suite.9.xml




Error lines from build-log.txt

... skipping 551 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:287

INFO: "With ipv6 worker node" started at Wed, 28 Sep 2022 02:01:32 UTC on Ginkgo node 3 of 10
STEP: Creating namespace "capz-e2e-nh7ois" for hosting the cluster
Sep 28 02:01:32.775: INFO: starting to create namespace for hosting the "capz-e2e-nh7ois" test spec
2022/09/28 02:01:32 failed trying to get namespace (capz-e2e-nh7ois):namespaces "capz-e2e-nh7ois" not found
INFO: Creating namespace capz-e2e-nh7ois
INFO: Creating event watcher for namespace "capz-e2e-nh7ois"
Sep 28 02:01:32.850: INFO: Creating cluster identity secret "cluster-identity-secret"
INFO: Cluster name is capz-e2e-nh7ois-ipv6
INFO: Creating the workload cluster with name "capz-e2e-nh7ois-ipv6" using the "ipv6" template (Kubernetes v1.23.12, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 164 lines ...
  with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:331

INFO: "with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes" started at Wed, 28 Sep 2022 02:01:32 UTC on Ginkgo node 4 of 10
STEP: Creating namespace "capz-e2e-j68y5i" for hosting the cluster
Sep 28 02:01:32.776: INFO: starting to create namespace for hosting the "capz-e2e-j68y5i" test spec
2022/09/28 02:01:32 failed trying to get namespace (capz-e2e-j68y5i):namespaces "capz-e2e-j68y5i" not found
INFO: Creating namespace capz-e2e-j68y5i
INFO: Creating event watcher for namespace "capz-e2e-j68y5i"
Sep 28 02:01:32.862: INFO: Creating cluster identity secret "cluster-identity-secret"
INFO: Cluster name is capz-e2e-j68y5i-vmss
INFO: Creating the workload cluster with name "capz-e2e-j68y5i-vmss" using the "machine-pool" template (Kubernetes v1.23.12, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 49 lines ...
Output of "kubescape scan --enable-host-scan --exclude-namespaces kube-system,kube-public":

Logs for pod kubescape-scan-hjcb9:
{"level":"info","ts":"2022-09-28T02:10:09Z","msg":"ARMO security scanner starting"}
{"level":"warn","ts":"2022-09-28T02:10:10Z","msg":"current version 'v2.0.167' is not updated to the latest release: 'v2.0.171'"}
{"level":"info","ts":"2022-09-28T02:10:10Z","msg":"Installing host scanner"}
{"level":"error","ts":"2022-09-28T02:11:50Z","msg":"failed to validate host-sensor pods status","error":"host-sensor pods number (3) differ than nodes number (5) after deadline exceeded. Kubescape will take data only from the pods below: map[host-scanner-5lk5z:capz-e2e-j68y5i-vmss-control-plane-4glbv host-scanner-gj22z:capz-e2e-j68y5i-vmss-mp-0000000 host-scanner-z6nvv:capz-e2e-j68y5i-vmss-mp-0000001]"}
{"level":"info","ts":"2022-09-28T02:11:56Z","msg":"Downloading/Loading policy definitions"}
{"level":"info","ts":"2022-09-28T02:11:56Z","msg":"Downloaded/Loaded policy"}
{"level":"info","ts":"2022-09-28T02:11:56Z","msg":"Accessing Kubernetes objects"}
{"level":"warn","ts":"2022-09-28T02:11:57Z","msg":"failed to collect image vulnerabilities","error":"credentials are not configured for any registry adaptor"}
{"level":"info","ts":"2022-09-28T02:12:00Z","msg":"Accessed to Kubernetes objects"}
{"level":"info","ts":"2022-09-28T02:12:00Z","msg":"Scanning","Cluster":""}
{"level":"info","ts":"2022-09-28T02:12:02Z","msg":"Done scanning","Cluster":""}

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Controls: 49 (Failed: 37, Excluded: 0, Skipped: 0)

+----------+---------------------------------------------+------------------+--------------------+---------------+--------------+
| SEVERITY |                CONTROL NAME                 | FAILED RESOURCES | EXCLUDED RESOURCES | ALL RESOURCES | % RISK-SCORE |
+----------+---------------------------------------------+------------------+--------------------+---------------+--------------+
| High     | Forbidden Container Registries              |        1         |         0          |       4       |     19%      |
| High     | HostNetwork access                          |        2         |         0          |       4       |     62%      |
| High     | HostPath mount                              |        2         |         0          |       4       |     62%      |
| High     | List Kubernetes secrets                     |        12        |         0          |      59       |     20%      |
| High     | Privileged container                        |        1         |         0          |       4       |     19%      |
... skipping 73 lines ...
STEP: waiting for job default/curl-to-elb-job0jdbc0vm5z9 to be complete
Sep 28 02:14:49.538: INFO: waiting for job default/curl-to-elb-job0jdbc0vm5z9 to be complete
Sep 28 02:14:59.648: INFO: job default/curl-to-elb-job0jdbc0vm5z9 is complete, took 10.109669177s
STEP: connecting directly to the external LB service
Sep 28 02:14:59.648: INFO: starting attempts to connect directly to the external LB service
2022/09/28 02:14:59 [DEBUG] GET http://51.143.59.240
2022/09/28 02:15:29 [ERR] GET http://51.143.59.240 request failed: Get "http://51.143.59.240": dial tcp 51.143.59.240:80: i/o timeout
2022/09/28 02:15:29 [DEBUG] GET http://51.143.59.240: retrying in 1s (4 left)
Sep 28 02:15:30.769: INFO: successfully connected to the external LB service
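
The [DEBUG]/[ERR]/"retrying in 1s (4 left)" lines above have the shape of hashicorp/go-retryablehttp's default logging, so the connectivity check tolerates the transient i/o timeout and still succeeds. A minimal sketch of such a retrying GET, with an assumed retry budget and timeouts (not taken from the suite):

```go
// Hedged sketch of a retrying GET against the external LB, in the style of
// hashicorp/go-retryablehttp (whose default logger emits [DEBUG]/[ERR] lines
// like those above). Retry budget and timeouts here are assumptions.
package e2e

import (
	"fmt"
	"time"

	retryablehttp "github.com/hashicorp/go-retryablehttp"
)

func curlExternalLB(url string) error {
	client := retryablehttp.NewClient()
	client.RetryMax = 5                          // "(4 left)" suggests a small fixed budget
	client.RetryWaitMin = 1 * time.Second        // matches "retrying in 1s"
	client.HTTPClient.Timeout = 30 * time.Second // each attempt can hit an i/o timeout, as logged

	// Get retries transparently on connection errors such as "dial tcp ...: i/o timeout".
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("GET %s failed after retries: %w", url, err)
	}
	defer resp.Body.Close()
	return nil
}
```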
STEP: deleting the test resources
Sep 28 02:15:30.769: INFO: starting to delete external LB service web92hcca-elb
Sep 28 02:15:30.820: INFO: waiting for the external LB service to be deleted: web92hcca-elb
Sep 28 02:16:19.677: INFO: starting to delete deployment web92hcca
... skipping 60 lines ...

Sep 28 02:25:43.956: INFO: Collecting logs for Windows node win-p-win000001 in cluster capz-e2e-j68y5i-vmss in namespace capz-e2e-j68y5i

Sep 28 02:29:43.169: INFO: Attempting to copy file /c:/crashdumps.tar on node win-p-win000001 to /logs/artifacts/clusters/capz-e2e-j68y5i-vmss/machine-pools/capz-e2e-j68y5i-vmss-mp-win/win-p-win000001/crashdumps.tar
Sep 28 02:29:44.879: INFO: Collecting boot logs for VMSS instance 1 of scale set win-p-win

Failed to get logs for machine pool capz-e2e-j68y5i-vmss-mp-win, cluster capz-e2e-j68y5i/capz-e2e-j68y5i-vmss: [running command "ls 'c:\localdumps' -Recurse": Process exited with status 1, getting a new sftp client connection: ssh: subsystem request failed]
STEP: Dumping workload cluster capz-e2e-j68y5i/capz-e2e-j68y5i-vmss kube-system pod logs
STEP: Collecting events for Pod kube-system/calico-node-252mw
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-j68y5i-vmss-control-plane-4glbv, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-windows-9t2pr, container calico-node-felix
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-vc6hp
STEP: Collecting events for Pod kube-system/calico-node-tp7v6
... skipping 14 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-tp7v6, container calico-node
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-j68y5i-vmss-control-plane-4glbv, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-fnc99, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-windows-2ndl9, container calico-node-felix
STEP: Collecting events for Pod kube-system/etcd-capz-e2e-j68y5i-vmss-control-plane-4glbv
STEP: Collecting events for Pod kube-system/kube-proxy-fnc99
STEP: failed to find events of Pod "etcd-capz-e2e-j68y5i-vmss-control-plane-4glbv"
STEP: Creating log watcher for controller kube-system/kube-proxy-n6fzr, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-j68y5i-vmss-control-plane-4glbv, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-proxy-n6fzr
STEP: Collecting events for Pod kube-system/kube-apiserver-capz-e2e-j68y5i-vmss-control-plane-4glbv
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-8c868, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-windows-8c868
... skipping 29 lines ...
  Creates a public management cluster in a custom vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:153

INFO: "Creates a public management cluster in a custom vnet" started at Wed, 28 Sep 2022 02:01:32 UTC on Ginkgo node 8 of 10
STEP: Creating namespace "capz-e2e-1pjck0" for hosting the cluster
Sep 28 02:01:32.774: INFO: starting to create namespace for hosting the "capz-e2e-1pjck0" test spec
2022/09/28 02:01:32 failed trying to get namespace (capz-e2e-1pjck0):namespaces "capz-e2e-1pjck0" not found
INFO: Creating namespace capz-e2e-1pjck0
INFO: Creating event watcher for namespace "capz-e2e-1pjck0"
Sep 28 02:01:32.868: INFO: Creating cluster identity secret "cluster-identity-secret"
INFO: Cluster name is capz-e2e-1pjck0-public-custom-vnet
STEP: Creating a custom virtual network
STEP: creating Azure clients with the workload cluster's subscription
... skipping 105 lines ...

STEP: Dumping workload cluster capz-e2e-1pjck0/capz-e2e-1pjck0-public-custom-vnet kube-system pod logs
STEP: Fetching kube-system pod logs took 359.549624ms
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-kqjtc, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-proxy-5zvhb, container kube-proxy
STEP: Collecting events for Pod kube-system/etcd-capz-e2e-1pjck0-public-custom-vnet-control-plane-jml78
STEP: failed to find events of Pod "etcd-capz-e2e-1pjck0-public-custom-vnet-control-plane-jml78"
STEP: Collecting events for Pod kube-system/kube-proxy-zdtds
STEP: Collecting events for Pod kube-system/kube-proxy-5zvhb
STEP: Creating log watcher for controller kube-system/kube-proxy-zdtds, container kube-proxy
STEP: Dumping workload cluster capz-e2e-1pjck0/capz-e2e-1pjck0-public-custom-vnet Azure activity log
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-1pjck0-public-custom-vnet-control-plane-jml78, container kube-scheduler
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-1pjck0-public-custom-vnet-control-plane-jml78, container etcd
STEP: Creating log watcher for controller kube-system/calico-node-5n6mb, container calico-node
STEP: Collecting events for Pod kube-system/kube-scheduler-capz-e2e-1pjck0-public-custom-vnet-control-plane-jml78
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-kqjtc
STEP: failed to find events of Pod "kube-scheduler-capz-e2e-1pjck0-public-custom-vnet-control-plane-jml78"
STEP: Creating log watcher for controller kube-system/calico-node-znwc7, container calico-node
STEP: Collecting events for Pod kube-system/calico-node-znwc7
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-1pjck0-public-custom-vnet-control-plane-jml78, container kube-apiserver
STEP: Creating log watcher for controller kube-system/coredns-64897985d-5h6m2, container coredns
STEP: Collecting events for Pod kube-system/calico-node-5n6mb
STEP: Collecting events for Pod kube-system/coredns-64897985d-5h6m2
STEP: Collecting events for Pod kube-system/kube-apiserver-capz-e2e-1pjck0-public-custom-vnet-control-plane-jml78
STEP: failed to find events of Pod "kube-apiserver-capz-e2e-1pjck0-public-custom-vnet-control-plane-jml78"
STEP: Creating log watcher for controller kube-system/coredns-64897985d-t4gkk, container coredns
STEP: Collecting events for Pod kube-system/kube-controller-manager-capz-e2e-1pjck0-public-custom-vnet-control-plane-jml78
STEP: failed to find events of Pod "kube-controller-manager-capz-e2e-1pjck0-public-custom-vnet-control-plane-jml78"
STEP: Collecting events for Pod kube-system/coredns-64897985d-t4gkk
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-1pjck0-public-custom-vnet-control-plane-jml78, container kube-controller-manager
STEP: Fetching activity logs took 4.944411236s
Sep 28 02:38:05.067: INFO: Dumping all the Cluster API resources in the "capz-e2e-1pjck0" namespace
Sep 28 02:38:05.405: INFO: Deleting all clusters in the capz-e2e-1pjck0 namespace
STEP: Deleting cluster capz-e2e-1pjck0-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-1pjck0/capz-e2e-1pjck0-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-1pjck0-public-custom-vnet to be deleted
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-6c76c59d6b-glfkh, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-74b6b6b77f-q7mt4, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-7df9bc44b4-6v9zz, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-6864497789-fqwvx, container manager: http2: client connection lost
Sep 28 02:47:15.790: INFO: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-1pjck0
Sep 28 02:47:15.807: INFO: Running additional cleanup for the "create-workload-cluster" test spec
Sep 28 02:47:15.807: INFO: deleting an existing virtual network "custom-vnet"
Sep 28 02:47:26.590: INFO: deleting an existing route table "node-routetable"
Sep 28 02:47:29.174: INFO: deleting an existing network security group "node-nsg"
... skipping 17 lines ...
  With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:208

INFO: "With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes" started at Wed, 28 Sep 2022 02:01:32 UTC on Ginkgo node 9 of 10
STEP: Creating namespace "capz-e2e-vt77vf" for hosting the cluster
Sep 28 02:01:32.774: INFO: starting to create namespace for hosting the "capz-e2e-vt77vf" test spec
2022/09/28 02:01:32 failed trying to get namespace (capz-e2e-vt77vf):namespaces "capz-e2e-vt77vf" not found
INFO: Creating namespace capz-e2e-vt77vf
INFO: Creating event watcher for namespace "capz-e2e-vt77vf"
Sep 28 02:01:32.868: INFO: Creating cluster identity secret "cluster-identity-secret"
INFO: Cluster name is capz-e2e-vt77vf-ha
INFO: Creating the workload cluster with name "capz-e2e-vt77vf-ha" using the "(default)" template (Kubernetes v1.23.12, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 77 lines ...
STEP: waiting for job default/curl-to-elb-jobtjjxcjmig2z to be complete
Sep 28 02:16:47.149: INFO: waiting for job default/curl-to-elb-jobtjjxcjmig2z to be complete
Sep 28 02:16:57.261: INFO: job default/curl-to-elb-jobtjjxcjmig2z is complete, took 10.112436839s
STEP: connecting directly to the external LB service
Sep 28 02:16:57.261: INFO: starting attempts to connect directly to the external LB service
2022/09/28 02:16:57 [DEBUG] GET http://52.137.92.167
2022/09/28 02:17:27 [ERR] GET http://52.137.92.167 request failed: Get "http://52.137.92.167": dial tcp 52.137.92.167:80: i/o timeout
2022/09/28 02:17:27 [DEBUG] GET http://52.137.92.167: retrying in 1s (4 left)
Sep 28 02:17:43.739: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Sep 28 02:17:43.739: INFO: starting to delete external LB service webwjpmwr-elb
Sep 28 02:17:43.801: INFO: waiting for the external LB service to be deleted: webwjpmwr-elb
Sep 28 02:18:20.513: INFO: starting to delete deployment webwjpmwr
Sep 28 02:18:20.572: INFO: starting to delete job curl-to-elb-jobtjjxcjmig2z
STEP: Validating network policies
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Sep 28 02:18:20.675: INFO: starting to create dev deployment namespace
2022/09/28 02:18:20 failed trying to get namespace (development):namespaces "development" not found
2022/09/28 02:18:20 namespace development does not exist, creating...
STEP: Creating production namespace
Sep 28 02:18:20.790: INFO: starting to create prod deployment namespace
2022/09/28 02:18:20 failed trying to get namespace (production):namespaces "production" not found
2022/09/28 02:18:20 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
Sep 28 02:18:20.903: INFO: starting to create frontend-prod deployments
Sep 28 02:18:20.961: INFO: starting to create frontend-dev deployments
Sep 28 02:18:21.020: INFO: starting to create backend deployments
Sep 28 02:18:21.087: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Sep 28 02:18:44.985: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.198.5 port 80: Connection timed out

STEP: Cleaning up after ourselves
Sep 28 02:20:55.285: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
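
For reference, the development/backend-deny-ingress policy applied and cleaned up above selects the app: webapp, role: backend pods and allows no ingress, which is why the curl to the backend pod times out. A hedged client-go sketch of creating such a policy (not the e2e suite's actual helper; the clientset wiring is assumed):

```go
// Hedged sketch (not the e2e suite's code) of a deny-all-ingress policy for
// the app=webapp,role=backend pods in the development namespace.
package e2e

import (
	"context"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func applyBackendDenyIngress(ctx context.Context, cs kubernetes.Interface) error {
	policy := &networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "backend-deny-ingress",
			Namespace: "development",
		},
		Spec: networkingv1.NetworkPolicySpec{
			// Select the backend pods of the webapp.
			PodSelector: metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "webapp", "role": "backend"},
			},
			// Declare the Ingress policy type with no rules: all ingress to the
			// selected pods is denied, so connections to them time out.
			PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
		},
	}
	_, err := cs.NetworkingV1().NetworkPolicies("development").Create(ctx, policy, metav1.CreateOptions{})
	return err
}
```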
STEP: Applying a network policy to deny egress access in development namespace
Sep 28 02:20:55.537: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.198.5 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.198.5 port 80: Connection timed out

STEP: Cleaning up after ourselves
Sep 28 02:25:17.420: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Sep 28 02:25:17.664: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.198.7 port 80: Connection timed out

STEP: Cleaning up after ourselves
Sep 28 02:27:30.539: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
Sep 28 02:27:30.802: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.198.6 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.198.7 port 80: Connection timed out

STEP: Cleaning up after ourselves
Sep 28 02:31:52.689: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Sep 28 02:31:52.935: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.198.5 port 80: Connection timed out

STEP: Cleaning up after ourselves
Sep 28 02:34:03.764: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
Sep 28 02:34:04.016: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.198.5 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: Creating an accessible load balancer for windows
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web-windowsji5mth to be available
Sep 28 02:36:15.709: INFO: starting to wait for deployment to become available
... skipping 57 lines ...

Sep 28 02:44:07.169: INFO: Collecting logs for Windows node capz-e2e-s876z in cluster capz-e2e-vt77vf-ha in namespace capz-e2e-vt77vf

Sep 28 02:45:55.256: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-e2e-s876z to /logs/artifacts/clusters/capz-e2e-vt77vf-ha/machines/capz-e2e-vt77vf-ha-md-win-867484546c-42tmm/crashdumps.tar
Sep 28 02:45:57.680: INFO: Collecting boot logs for AzureMachine capz-e2e-vt77vf-ha-md-win-s876z

Failed to get logs for machine capz-e2e-vt77vf-ha-md-win-867484546c-42tmm, cluster capz-e2e-vt77vf/capz-e2e-vt77vf-ha: running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1
Sep 28 02:45:58.633: INFO: Collecting logs for Windows node capz-e2e-z579n in cluster capz-e2e-vt77vf-ha in namespace capz-e2e-vt77vf

Sep 28 02:47:56.419: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-e2e-z579n to /logs/artifacts/clusters/capz-e2e-vt77vf-ha/machines/capz-e2e-vt77vf-ha-md-win-867484546c-l6sfb/crashdumps.tar
Sep 28 02:47:58.796: INFO: Collecting boot logs for AzureMachine capz-e2e-vt77vf-ha-md-win-z579n

Failed to get logs for machine capz-e2e-vt77vf-ha-md-win-867484546c-l6sfb, cluster capz-e2e-vt77vf/capz-e2e-vt77vf-ha: running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1
STEP: Dumping workload cluster capz-e2e-vt77vf/capz-e2e-vt77vf-ha kube-system pod logs
STEP: Collecting events for Pod kube-system/calico-node-cjc6h
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-ll5pg, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/calico-node-98828, container calico-node
STEP: Fetching kube-system pod logs took 573.399689ms
STEP: Dumping workload cluster capz-e2e-vt77vf/capz-e2e-vt77vf-ha Azure activity log
... skipping 129 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Workload cluster creation [AfterEach] Creating a highly available cluster [REQUIRED] With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.1/framework/cluster_helpers.go:175

Ran 4 of 26 Specs in 4864.966 seconds
FAIL! -- 3 Passed | 1 Failed | 0 Pending | 22 Skipped


Ginkgo ran 1 suite in 1h22m57.436015837s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc
make[1]: *** [Makefile:654: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:662: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...