Result: FAILURE
Tests: 1 failed / 8 succeeded
Started: 2021-11-21 06:36
Elapsed: 1h47m
Revision: main

Test Failures


capz-e2e Workload cluster creation Creating a GPU-enabled cluster with a single control plane node and 1 node 37m7s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\sGPU\-enabled\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:413
Timed out after 1200.000s.
Expected
    <bool>: false
to be true
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:76
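The failure at azure_gpu.go:76 has the shape of Gomega's standard polling assertion: a func() bool condition is retried until a timeout, and on expiry the matcher reports the last observed value (<bool>: false) against the expectation (true), matching "Timed out after 1200.000s" for a 20-minute budget. A minimal sketch of that pattern, assuming a hypothetical jobSucceeded check (not the actual GPU test body):

package e2e_test

import (
	"time"

	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
)

// jobSucceeded is a stand-in for the real check (e.g. "did the GPU job complete").
func jobSucceeded() bool { return false }

var _ = It("waits for the GPU job to finish", func() {
	// Poll every 10s; after 20 minutes (1200s) Gomega fails with
	// "Timed out after 1200.000s. Expected <bool>: false to be true".
	Eventually(jobSucceeded, 20*time.Minute, 10*time.Second).Should(BeTrue())
})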
				
stdout/stderr captured in junit.e2e_suite.1.xml



Passed tests: 8 (output not shown)

Skipped tests: 15 (output not shown)

Error lines from build-log.txt

... skipping 434 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:288

INFO: "With ipv6 worker node" started at Sun, 21 Nov 2021 06:44:54 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-xe556s" for hosting the cluster
Nov 21 06:44:54.950: INFO: starting to create namespace for hosting the "capz-e2e-xe556s" test spec
2021/11/21 06:44:54 failed trying to get namespace (capz-e2e-xe556s): namespaces "capz-e2e-xe556s" not found
INFO: Creating namespace capz-e2e-xe556s
INFO: Creating event watcher for namespace "capz-e2e-xe556s"
Nov 21 06:44:54.991: INFO: Creating cluster identity secret "cluster-identity-secret"
INFO: Cluster name is capz-e2e-xe556s-ipv6
INFO: Creating the workload cluster with name "capz-e2e-xe556s-ipv6" using the "ipv6" template (Kubernetes v1.22.4, 3 control-plane machines, 1 worker machine)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
STEP: Fetching activity logs took 483.5554ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-xe556s" namespace
STEP: Deleting all clusters in the capz-e2e-xe556s namespace
STEP: Deleting cluster capz-e2e-xe556s-ipv6
INFO: Waiting for the Cluster capz-e2e-xe556s/capz-e2e-xe556s-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-xe556s-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-tn7dz, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-xe556s-ipv6-control-plane-6kmw6, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-xe556s-ipv6-control-plane-2xpc9, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-xe556s-ipv6-control-plane-2xpc9, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-p6cnk, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-xe556s-ipv6-control-plane-skcvj, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-jdkj9, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-b44vg, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-xe556s-ipv6-control-plane-skcvj, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-xe556s-ipv6-control-plane-6kmw6, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-xe556s-ipv6-control-plane-skcvj, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-xe556s-ipv6-control-plane-2xpc9, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-49hhq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-h8zm8, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-xe556s-ipv6-control-plane-skcvj, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-clddx, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-xe556s-ipv6-control-plane-6kmw6, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-lm29z, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-cbdd7, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-xe556s-ipv6-control-plane-2xpc9, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-szqmj, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-xe556s-ipv6-control-plane-6kmw6, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-52f4r, container calico-node: http2: client connection lost
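The run of "http2: client connection lost" errors above is expected teardown noise rather than a test failure: the suite keeps follow-mode log streams open against the workload cluster while that cluster's control plane is being deleted, so every open stream dies at once. A hedged sketch of the underlying client-go call (function and variable names are illustrative):

package e2e

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// streamPodLogs follows one container's logs; when the workload cluster's
// apiserver goes away mid-stream, the copy fails with
// "http2: client connection lost", as logged above.
func streamPodLogs(ctx context.Context, cs kubernetes.Interface, ns, pod, container string) error {
	req := cs.CoreV1().Pods(ns).GetLogs(pod, &corev1.PodLogOptions{
		Container: container,
		Follow:    true,
	})
	stream, err := req.Stream(ctx)
	if err != nil {
		return err
	}
	defer stream.Close()
	_, err = io.Copy(os.Stdout, stream)
	return err
}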
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-xe556s
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 17m7s on Ginkgo node 1 of 3

... skipping 10 lines ...
  with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:334

INFO: "with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes" started at Sun, 21 Nov 2021 07:02:02 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-4dw7uc" for hosting the cluster
Nov 21 07:02:02.405: INFO: starting to create namespace for hosting the "capz-e2e-4dw7uc" test spec
2021/11/21 07:02:02 failed trying to get namespace (capz-e2e-4dw7uc): namespaces "capz-e2e-4dw7uc" not found
INFO: Creating namespace capz-e2e-4dw7uc
INFO: Creating event watcher for namespace "capz-e2e-4dw7uc"
Nov 21 07:02:02.446: INFO: Creating cluster identity secret "cluster-identity-secret"
INFO: Cluster name is capz-e2e-4dw7uc-vmss
INFO: Creating the workload cluster with name "capz-e2e-4dw7uc-vmss" using the "machine-pool" template (Kubernetes v1.22.4, 1 control-plane machine, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 142 lines ...
Nov 21 07:19:45.096: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-e2e-4dw7uc-vmss-mp-0

Nov 21 07:19:45.439: INFO: Collecting logs for node win-p-win000001 in cluster capz-e2e-4dw7uc-vmss in namespace capz-e2e-4dw7uc

Nov 21 07:20:00.629: INFO: Collecting boot logs for VMSS instance 1 of scale set capz-e2e-4dw7uc-vmss-mp-0

Failed to get logs for machine pool capz-e2e-4dw7uc-vmss-mp-0, cluster capz-e2e-4dw7uc/capz-e2e-4dw7uc-vmss: [[running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1], [running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1]]
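Each "running command ... Process exited with status 1" entry above corresponds to one log-collection command executed over SSH on a node that could not serve it; the collector aggregates the per-command failures instead of aborting. A rough sketch of that collection loop, assuming key-based SSH access (address, config, and command list are illustrative):

package e2e

import (
	"fmt"

	"golang.org/x/crypto/ssh"
)

// collectNodeLogs runs each command in its own SSH session; a non-zero exit
// code surfaces as the "Process exited with status 1" errors seen above.
func collectNodeLogs(addr string, cfg *ssh.ClientConfig) []error {
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return []error{err}
	}
	defer client.Close()

	cmds := []string{
		"cat /var/log/cloud-init.log",
		"journalctl --no-pager --output=short-precise -u kubelet.service",
	}
	var errs []error
	for _, cmd := range cmds {
		sess, err := client.NewSession()
		if err != nil {
			errs = append(errs, err)
			continue
		}
		out, err := sess.CombinedOutput(cmd)
		sess.Close()
		if err != nil {
			errs = append(errs, fmt.Errorf("running command %q: %w", cmd, err))
			continue
		}
		fmt.Printf("%s:\n%s\n", cmd, out)
	}
	return errs
}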
Nov 21 07:20:00.918: INFO: Collecting logs for node win-p-win000000 in cluster capz-e2e-4dw7uc-vmss in namespace capz-e2e-4dw7uc

Nov 21 07:20:23.182: INFO: Collecting boot logs for VMSS instance 0 of scale set win-p-win

Nov 21 07:20:23.462: INFO: Collecting logs for node win-p-win000001 in cluster capz-e2e-4dw7uc-vmss in namespace capz-e2e-4dw7uc

Nov 21 07:20:46.501: INFO: Collecting boot logs for VMSS instance 1 of scale set win-p-win

Failed to get logs for machine pool capz-e2e-4dw7uc-vmss-mp-win, cluster capz-e2e-4dw7uc/capz-e2e-4dw7uc-vmss: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster capz-e2e-4dw7uc/capz-e2e-4dw7uc-vmss kube-system pod logs
STEP: Fetching kube-system pod logs took 321.100001ms
STEP: Dumping workload cluster capz-e2e-4dw7uc/capz-e2e-4dw7uc-vmss Azure activity log
STEP: Creating log watcher for controller kube-system/kube-proxy-8shlg, container kube-proxy
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-4dw7uc-vmss-control-plane-k882f, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-krpbg, container kube-proxy
... skipping 10 lines ...
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-4dw7uc-vmss-control-plane-k882f, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-node-nvnrx, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-windows-nsf45, container calico-node-felix
STEP: Creating log watcher for controller kube-system/calico-node-windows-wmsnr, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-node-windows-nsf45, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-node-windows-wmsnr, container calico-node-felix
STEP: Got error while iterating over activity logs for resource group capz-e2e-4dw7uc-vmss: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.002399845s
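The 30-second "Fetching activity logs" duration right above is the giveaway: the dump runs under a fixed context deadline, and when Azure's paging endpoint is slow, the next-page request dies with "context deadline exceeded" (surfaced as the StatusCode=500 listNextResults error) and the suite moves on. A sketch of that pattern using the legacy insights client (filter and field access are illustrative):

package e2e

import (
	"context"
	"fmt"
	"time"

	"github.com/Azure/azure-sdk-for-go/services/preview/monitor/mgmt/2019-06-01/insights"
)

// dumpActivityLogs pages through activity-log events under a hard 30s budget;
// a next-page call that outlives the context fails exactly like the
// listNextResults error above.
func dumpActivityLogs(client insights.ActivityLogsClient, filter string) error {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	page, err := client.List(ctx, filter, "")
	if err != nil {
		return err
	}
	for page.NotDone() {
		for _, ev := range page.Values() {
			if ev.OperationName != nil && ev.OperationName.LocalizedValue != nil {
				fmt.Println(*ev.OperationName.LocalizedValue)
			}
		}
		if err := page.NextWithContext(ctx); err != nil {
			return err // e.g. context deadline exceeded after ~30s
		}
	}
	return nil
}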
STEP: Dumping all the Cluster API resources in the "capz-e2e-4dw7uc" namespace
STEP: Deleting all clusters in the capz-e2e-4dw7uc namespace
STEP: Deleting cluster capz-e2e-4dw7uc-vmss
INFO: Waiting for the Cluster capz-e2e-4dw7uc/capz-e2e-4dw7uc-vmss to be deleted
STEP: Waiting for cluster capz-e2e-4dw7uc-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-whjdn, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-gbb79, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-4dw7uc-vmss-control-plane-k882f, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-428v2, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-kj6xz, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-nvnrx, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-7wzvc, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-4dw7uc-vmss-control-plane-k882f, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-4dw7uc-vmss-control-plane-k882f, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-8shlg, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-frwsc, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-4dw7uc-vmss-control-plane-k882f, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-nsf45, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-wmsnr, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-nsf45, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-wmsnr, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-krpbg, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-4dw7uc
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes" ran for 27m13s on Ginkgo node 1 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:205

INFO: "With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes" started at Sun, 21 Nov 2021 06:44:54 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-vhzl8a" for hosting the cluster
Nov 21 06:44:54.892: INFO: starting to create namespace for hosting the "capz-e2e-vhzl8a" test spec
2021/11/21 06:44:54 failed trying to get namespace (capz-e2e-vhzl8a): namespaces "capz-e2e-vhzl8a" not found
INFO: Creating namespace capz-e2e-vhzl8a
INFO: Creating event watcher for namespace "capz-e2e-vhzl8a"
Nov 21 06:44:54.942: INFO: Creating cluster identity secret "cluster-identity-secret"
INFO: Cluster name is capz-e2e-vhzl8a-ha
INFO: Creating the workload cluster with name "capz-e2e-vhzl8a-ha" using the "(default)" template (Kubernetes v1.22.4, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 67 lines ...
STEP: waiting for job default/curl-to-elb-jobu6i7hhvkdmc to be complete
Nov 21 06:55:02.615: INFO: waiting for job default/curl-to-elb-jobu6i7hhvkdmc to be complete
Nov 21 06:55:12.689: INFO: job default/curl-to-elb-jobu6i7hhvkdmc is complete, took 10.074282946s
STEP: connecting directly to the external LB service
Nov 21 06:55:12.689: INFO: starting attempts to connect directly to the external LB service
2021/11/21 06:55:12 [DEBUG] GET http://20.88.179.33
2021/11/21 06:55:42 [ERR] GET http://20.88.179.33 request failed: Get "http://20.88.179.33": dial tcp 20.88.179.33:80: i/o timeout
2021/11/21 06:55:42 [DEBUG] GET http://20.88.179.33: retrying in 1s (4 left)
Nov 21 06:55:43.745: INFO: successfully connected to the external LB service
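The [DEBUG]/[ERR] lines above follow hashicorp/go-retryablehttp's logging: each GET attempt has its own dial/read budget, a failed attempt is logged with the error and the remaining retry count ("retrying in 1s (4 left)"), and the first attempt that succeeds ends the loop. A minimal sketch of that client setup (URL, retry count, and timeout are illustrative):

package e2e

import (
	"fmt"
	"time"

	"github.com/hashicorp/go-retryablehttp"
)

// getWithRetries issues a GET that retries with backoff; per-attempt
// failures produce the [DEBUG]/[ERR] lines seen above.
func getWithRetries(url string) error {
	client := retryablehttp.NewClient()
	client.RetryMax = 5                          // "(4 left)" after the first failure
	client.HTTPClient.Timeout = 30 * time.Second // per-attempt i/o timeout
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
	return nil
}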
STEP: deleting the test resources
Nov 21 06:55:43.745: INFO: starting to delete external LB service websyqwk8-elb
Nov 21 06:55:43.820: INFO: starting to delete deployment websyqwk8
Nov 21 06:55:43.859: INFO: starting to delete job curl-to-elb-jobu6i7hhvkdmc
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Nov 21 06:55:43.951: INFO: starting to create dev deployment namespace
2021/11/21 06:55:43 failed trying to get namespace (development): namespaces "development" not found
2021/11/21 06:55:43 namespace development does not exist, creating...
STEP: Creating production namespace
Nov 21 06:55:44.031: INFO: starting to create prod deployment namespace
2021/11/21 06:55:44 failed trying to get namespace (production): namespaces "production" not found
2021/11/21 06:55:44 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
Nov 21 06:55:44.105: INFO: starting to create frontend-prod deployments
Nov 21 06:55:44.149: INFO: starting to create frontend-dev deployments
Nov 21 06:55:44.192: INFO: starting to create backend deployments
Nov 21 06:55:44.245: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Nov 21 06:56:07.171: INFO: starting to apply a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.166.195 port 80: Connection timed out
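That timed-out curl is the pass condition: once backend-deny-ingress is applied, the backend pods must stop accepting connections. Built with the Kubernetes API types, a deny-all-ingress policy scoped to those pods plausibly looks like this (labels taken from the step description above; the rest is illustrative, not the suite's actual manifest):

package e2e

import (
	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// backendDenyIngress selects the app=webapp,role=backend pods and, by
// declaring the Ingress policy type with no ingress rules, denies all
// inbound traffic to them.
var backendDenyIngress = &networkingv1.NetworkPolicy{
	ObjectMeta: metav1.ObjectMeta{
		Name:      "backend-deny-ingress",
		Namespace: "development",
	},
	Spec: networkingv1.NetworkPolicySpec{
		PodSelector: metav1.LabelSelector{
			MatchLabels: map[string]string{"app": "webapp", "role": "backend"},
		},
		PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
	},
}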

STEP: Cleaning up after ourselves
Nov 21 06:58:18.437: INFO: starting to clean up network policy development/backend-deny-ingress after ourselves
STEP: Applying a network policy to deny egress access in development namespace
Nov 21 06:58:18.598: INFO: starting to apply a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.166.195 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.166.195 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 21 07:02:40.583: INFO: starting to clean up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Nov 21 07:02:40.753: INFO: starting to apply a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.236.132 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 21 07:04:52.037: INFO: starting to clean up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in the same development namespace
Nov 21 07:04:52.203: INFO: starting to apply a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in the same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.166.194 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.236.132 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 21 07:09:14.176: INFO: starting to clean up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Nov 21 07:09:14.345: INFO: starting to apply a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.166.195 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 21 07:11:24.867: INFO: starting to clean up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods with labels app: webapp, role: frontendProd within namespaces labeled purpose: development
Nov 21 07:11:25.037: INFO: starting to apply a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods with labels app: webapp, role: frontendProd within namespaces labeled purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.166.195 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web-windowsqb5tg6 to be available
Nov 21 07:13:36.949: INFO: starting to wait for deployment to become available
Nov 21 07:14:27.188: INFO: Deployment default/web-windowsqb5tg6 is now available, took 50.238681672s
... skipping 20 lines ...
STEP: waiting for job default/curl-to-elb-jobskix3t038vc to be complete
Nov 21 07:18:48.651: INFO: waiting for job default/curl-to-elb-jobskix3t038vc to be complete
Nov 21 07:18:58.733: INFO: job default/curl-to-elb-jobskix3t038vc is complete, took 10.082662146s
STEP: connecting directly to the external LB service
Nov 21 07:18:58.733: INFO: starting attempts to connect directly to the external LB service
2021/11/21 07:18:58 [DEBUG] GET http://20.121.225.237
2021/11/21 07:19:28 [ERR] GET http://20.121.225.237 request failed: Get "http://20.121.225.237": dial tcp 20.121.225.237:80: i/o timeout
2021/11/21 07:19:28 [DEBUG] GET http://20.121.225.237: retrying in 1s (4 left)
Nov 21 07:19:29.798: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 21 07:19:29.798: INFO: starting to delete external LB service web-windowsqb5tg6-elb
Nov 21 07:19:29.891: INFO: starting to delete deployment web-windowsqb5tg6
Nov 21 07:19:29.933: INFO: starting to delete job curl-to-elb-jobskix3t038vc
... skipping 20 lines ...
Nov 21 07:20:20.824: INFO: Collecting boot logs for AzureMachine capz-e2e-vhzl8a-ha-md-0-pmhdm

Nov 21 07:20:21.206: INFO: Collecting logs for node 10.1.0.4 in cluster capz-e2e-vhzl8a-ha in namespace capz-e2e-vhzl8a

Nov 21 07:20:45.261: INFO: Collecting boot logs for AzureMachine capz-e2e-vhzl8a-ha-md-win-ffhtp

Failed to get logs for machine capz-e2e-vhzl8a-ha-md-win-7c7665f6cb-55kkj, cluster capz-e2e-vhzl8a/capz-e2e-vhzl8a-ha: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov 21 07:20:45.525: INFO: Collecting logs for node 10.1.0.7 in cluster capz-e2e-vhzl8a-ha in namespace capz-e2e-vhzl8a

Nov 21 07:21:14.081: INFO: Collecting boot logs for AzureMachine capz-e2e-vhzl8a-ha-md-win-gdd67

Failed to get logs for machine capz-e2e-vhzl8a-ha-md-win-7c7665f6cb-5tgjd, cluster capz-e2e-vhzl8a/capz-e2e-vhzl8a-ha: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster capz-e2e-vhzl8a/capz-e2e-vhzl8a-ha kube-system pod logs
STEP: Fetching kube-system pod logs took 292.394119ms
STEP: Dumping workload cluster capz-e2e-vhzl8a/capz-e2e-vhzl8a-ha Azure activity log
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-vhzl8a-ha-control-plane-4zpqp, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-node-xr4td, container calico-node
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-vhzl8a-ha-control-plane-mrmfr, container kube-controller-manager
... skipping 22 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-windows-vj4ss, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-node-windows-vj4ss, container calico-node-felix
STEP: Creating log watcher for controller kube-system/kube-proxy-x77jz, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-c4h7l, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-vhzl8a-ha-control-plane-f6gc5, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-82kbd, container kube-proxy
STEP: Got error while iterating over activity logs for resource group capz-e2e-vhzl8a-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001069005s
STEP: Dumping all the Cluster API resources in the "capz-e2e-vhzl8a" namespace
STEP: Deleting all clusters in the capz-e2e-vhzl8a namespace
STEP: Deleting cluster capz-e2e-vhzl8a-ha
INFO: Waiting for the Cluster capz-e2e-vhzl8a/capz-e2e-vhzl8a-ha to be deleted
STEP: Waiting for cluster capz-e2e-vhzl8a-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-jpf54, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-vhzl8a-ha-control-plane-f6gc5, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-wbwvq, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-c4h7l, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-vhzl8a-ha-control-plane-f6gc5, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-jpf54, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-x77jz, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-np2m6, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-vhzl8a-ha-control-plane-f6gc5, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-vj4ss, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-vhzl8a-ha-control-plane-4zpqp, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-vhzl8a-ha-control-plane-f6gc5, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-vhzl8a-ha-control-plane-4zpqp, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-c4pwd, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-vhzl8a-ha-control-plane-4zpqp, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-xr4td, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-bc72s, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-2z9wb, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-vhzl8a-ha-control-plane-4zpqp, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-4lft8, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-vj4ss, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-ss9cq, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-kbn52, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-vhzl8a
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes" ran for 46m49s on Ginkgo node 3 of 3

... skipping 8 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:144

INFO: "Creates a public management cluster in the same vnet" started at Sun, 21 Nov 2021 06:44:54 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-jgmz6m" for hosting the cluster
Nov 21 06:44:54.412: INFO: starting to create namespace for hosting the "capz-e2e-jgmz6m" test spec
2021/11/21 06:44:54 failed trying to get namespace (capz-e2e-jgmz6m): namespaces "capz-e2e-jgmz6m" not found
INFO: Creating namespace capz-e2e-jgmz6m
INFO: Creating event watcher for namespace "capz-e2e-jgmz6m"
Nov 21 06:44:54.466: INFO: Creating cluster identity secret "cluster-identity-secret"
INFO: Cluster name is capz-e2e-jgmz6m-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-l4glh, container coredns
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-jgmz6m-public-custom-vnet-control-plane-jvlgd, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-jgmz6m-public-custom-vnet-control-plane-jvlgd, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-bcvlh, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-jgmz6m-public-custom-vnet-control-plane-jvlgd, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-rhrx2, container calico-node
STEP: Got error while iterating over activity logs for resource group capz-e2e-jgmz6m-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.00022219s
STEP: Dumping all the Cluster API resources in the "capz-e2e-jgmz6m" namespace
STEP: Deleting all clusters in the capz-e2e-jgmz6m namespace
STEP: Deleting cluster capz-e2e-jgmz6m-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-jgmz6m/capz-e2e-jgmz6m-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-jgmz6m-public-custom-vnet to be deleted
W1121 07:34:26.680982   24445 reflector.go:441] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I1121 07:34:59.608787   24445 trace.go:205] Trace[1195003102]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (21-Nov-2021 07:34:29.606) (total time: 30002ms):
Trace[1195003102]: [30.002553685s] [30.002553685s] END
E1121 07:34:59.608853   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp 20.120.42.45:6443: i/o timeout
I1121 07:35:35.028553   24445 trace.go:205] Trace[989045462]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (21-Nov-2021 07:35:05.027) (total time: 30000ms):
Trace[989045462]: [30.000601159s] [30.000601159s] END
E1121 07:35:35.028615   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp 20.120.42.45:6443: i/o timeout
I1121 07:36:11.819776   24445 trace.go:205] Trace[1317911284]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (21-Nov-2021 07:35:41.818) (total time: 30001ms):
Trace[1317911284]: [30.001208658s] [30.001208658s] END
E1121 07:36:11.819843   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp 20.120.42.45:6443: i/o timeout
I1121 07:36:59.159325   24445 trace.go:205] Trace[1792799775]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (21-Nov-2021 07:36:29.157) (total time: 30002ms):
Trace[1792799775]: [30.002148958s] [30.002148958s] END
E1121 07:36:59.159451   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp 20.120.42.45:6443: i/o timeout
E1121 07:37:33.992912   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
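The W/E/I lines above are client-go's reflector recovering from the management-side event watcher losing its target: the watch ends, the reflector re-lists, each list times out after 30s (the Trace output), and once DNS for the deleted cluster is gone it fails with "no such host" until the watcher is cancelled. The watch being driven amounts to something like this (namespace and handling are illustrative):

package e2e

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchEvents opens a watch on a namespace's events; after the target
// apiserver disappears, client-go's machinery keeps re-listing and logging
// the reflector errors seen above until ctx is cancelled.
func watchEvents(ctx context.Context, cs kubernetes.Interface, ns string) error {
	w, err := cs.CoreV1().Events(ns).Watch(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Println("event:", ev.Type)
	}
	return nil
}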
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-jgmz6m
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Nov 21 07:38:03.789: INFO: deleting an existing virtual network "custom-vnet"
Nov 21 07:38:14.392: INFO: deleting an existing route table "node-routetable"
Nov 21 07:38:24.889: INFO: deleting an existing network security group "node-nsg"
E1121 07:38:34.007816   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 21 07:38:35.361: INFO: deleting an existing network security group "control-plane-nsg"
Nov 21 07:38:45.757: INFO: verifying the existing resource group "capz-e2e-jgmz6m-public-custom-vnet" is empty
Nov 21 07:38:45.980: INFO: failed GETing resource "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-e2e-jgmz6m-public-custom-vnet/providers/Microsoft.Network/privateDnsZones/capz-e2e-971l8g-private.capz.io/virtualNetworkLinks/custom-vnet-link" with resources.Client#GetByID: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="NoRegisteredProviderFound" Message="No registered resource provider found for location 'global' and API version '2021-02-01' for type 'privateDnsZones/virtualNetworkLinks'. The supported api-versions are '2018-09-01, 2020-01-01, 2020-06-01'. The supported locations are ', global'."
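The NoRegisteredProviderFound failure during the leftover check is an API-version mismatch, not a missing resource: the generic get pinned api-version 2021-02-01, but the privateDnsZones/virtualNetworkLinks type only registers 2018-09-01, 2020-01-01, and 2020-06-01. Assuming the legacy resources client (the three-argument GetByID form and the version string here are assumptions), the call shape is roughly:

package e2e

import (
	"context"
	"fmt"

	"github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2020-06-01/resources"
)

// getResourceByID fetches one ARM resource by full ID; apiVersion must be a
// version the resource type actually supports, otherwise ARM answers with
// NoRegisteredProviderFound as in the log above.
func getResourceByID(ctx context.Context, client resources.Client, resourceID string) error {
	res, err := client.GetByID(ctx, resourceID, "2020-06-01")
	if err != nil {
		return err
	}
	if res.ID != nil {
		fmt.Println("found:", *res.ID)
	}
	return nil
}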
Nov 21 07:38:56.343: INFO: deleting the existing resource group "capz-e2e-jgmz6m-public-custom-vnet"
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1121 07:39:12.782112   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 55m5s on Ginkgo node 2 of 3


• [SLOW TEST:3304.825 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
with 1 control plane node and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:455

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Sun, 21 Nov 2021 07:31:44 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-aonga3" for hosting the cluster
Nov 21 07:31:44.147: INFO: starting to create namespace for hosting the "capz-e2e-aonga3" test spec
2021/11/21 07:31:44 failed trying to get namespace (capz-e2e-aonga3): namespaces "capz-e2e-aonga3" not found
INFO: Creating namespace capz-e2e-aonga3
INFO: Creating event watcher for namespace "capz-e2e-aonga3"
Nov 21 07:31:44.186: INFO: Creating cluster identity secret "cluster-identity-secret"
INFO: Cluster name is capz-e2e-aonga3-oot
INFO: Creating the workload cluster with name "capz-e2e-aonga3-oot" using the "external-cloud-provider" template (Kubernetes v1.22.4, 1 control-plane machine, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 120 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:490

INFO: "with a single control plane node and 1 node" started at Sun, 21 Nov 2021 07:39:59 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-l3iv72" for hosting the cluster
Nov 21 07:39:59.240: INFO: starting to create namespace for hosting the "capz-e2e-l3iv72" test spec
2021/11/21 07:39:59 failed trying to get namespace (capz-e2e-l3iv72): namespaces "capz-e2e-l3iv72" not found
INFO: Creating namespace capz-e2e-l3iv72
INFO: Creating event watcher for namespace "capz-e2e-l3iv72"
Nov 21 07:39:59.282: INFO: Creating cluster identity secret "cluster-identity-secret"
INFO: Cluster name is capz-e2e-l3iv72-aks
INFO: Creating the workload cluster with name "capz-e2e-l3iv72-aks" using the "aks-multi-tenancy" template (Kubernetes v1.19.13, 1 control-plane machine, 1 worker machine)
INFO: Getting the cluster template yaml
... skipping 7 lines ...
machinepool.cluster.x-k8s.io/agentpool1 created
azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1121 07:40:07.411232   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 07:40:57.113652   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 07:41:36.485168   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 07:42:09.268675   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 07:42:39.868342   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 07:43:11.346324   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 07:43:48.631957   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
Nov 21 07:44:00.835: INFO: Waiting for the first control plane machine managed by capz-e2e-l3iv72/capz-e2e-l3iv72-aks to be provisioned
STEP: Waiting for at least one control plane node to exist
INFO: Waiting for control plane to be ready
Nov 21 07:44:10.872: INFO: Waiting for the first control plane machine managed by capz-e2e-l3iv72/capz-e2e-l3iv72-aks to be provisioned
STEP: Waiting for all control plane nodes to exist
... skipping 13 lines ...
STEP: time sync OK for host aks-agentpool1-11387157-vmss000000
STEP: time sync OK for host aks-agentpool1-11387157-vmss000000
STEP: Dumping logs from the "capz-e2e-l3iv72-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-l3iv72/capz-e2e-l3iv72-aks logs
Nov 21 07:44:17.420: INFO: Collecting logs for node aks-agentpool1-11387157-vmss000000 in cluster capz-e2e-l3iv72-aks in namespace capz-e2e-l3iv72

E1121 07:44:29.421387   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 07:45:26.706134   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 07:46:17.400218   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 21 07:46:28.443: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool0, cluster capz-e2e-l3iv72/capz-e2e-l3iv72-aks: [dialing public load balancer at capz-e2e-l3iv72-aks-86951959.hcp.eastus.azmk8s.io: dial tcp 52.224.134.101:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
Nov 21 07:46:29.179: INFO: Collecting logs for node aks-agentpool1-11387157-vmss000000 in cluster capz-e2e-l3iv72-aks in namespace capz-e2e-l3iv72

E1121 07:47:00.865655   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 07:47:42.760781   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 07:48:26.697389   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 21 07:48:39.519: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool1, cluster capz-e2e-l3iv72/capz-e2e-l3iv72-aks: [dialing public load balancer at capz-e2e-l3iv72-aks-86951959.hcp.eastus.azmk8s.io: dial tcp 52.224.134.101:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
STEP: Dumping workload cluster capz-e2e-l3iv72/capz-e2e-l3iv72-aks kube-system pod logs
STEP: Fetching kube-system pod logs took 390.29469ms
STEP: Dumping workload cluster capz-e2e-l3iv72/capz-e2e-l3iv72-aks Azure activity log
STEP: Creating log watcher for controller kube-system/calico-typha-horizontal-autoscaler-599c7bb664-zsc7r, container autoscaler
STEP: Creating log watcher for controller kube-system/calico-node-5t2wc, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-g4gkx, container calico-node
... skipping 8 lines ...
STEP: Fetching activity logs took 476.13171ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-l3iv72" namespace
STEP: Deleting all clusters in the capz-e2e-l3iv72 namespace
STEP: Deleting cluster capz-e2e-l3iv72-aks
INFO: Waiting for the Cluster capz-e2e-l3iv72/capz-e2e-l3iv72-aks to be deleted
STEP: Waiting for cluster capz-e2e-l3iv72-aks to be deleted
E1121 07:49:22.111972   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 07:49:53.094563   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 07:50:35.929956   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 07:51:26.792061   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 07:52:25.347241   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-l3iv72
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1121 07:53:20.228894   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 07:54:05.467245   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "with a single control plane node and 1 node" ran for 14m7s on Ginkgo node 2 of 3


• [SLOW TEST:847.310 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:413

INFO: "with a single control plane node and 1 node" started at Sun, 21 Nov 2021 07:29:15 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-xgdh1z" for hosting the cluster
Nov 21 07:29:15.223: INFO: starting to create namespace for hosting the "capz-e2e-xgdh1z" test spec
2021/11/21 07:29:15 failed trying to get namespace (capz-e2e-xgdh1z): namespaces "capz-e2e-xgdh1z" not found
INFO: Creating namespace capz-e2e-xgdh1z
INFO: Creating event watcher for namespace "capz-e2e-xgdh1z"
Nov 21 07:29:15.259: INFO: Creating cluster identity secret "cluster-identity-secret"
INFO: Cluster name is capz-e2e-xgdh1z-gpu
INFO: Creating the workload cluster with name "capz-e2e-xgdh1z-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.4, 1 control-plane machine, 1 worker machine)
INFO: Getting the cluster template yaml
... skipping 53 lines ...
STEP: Fetching activity logs took 1.392121995s
STEP: Dumping all the Cluster API resources in the "capz-e2e-xgdh1z" namespace
STEP: Deleting all clusters in the capz-e2e-xgdh1z namespace
STEP: Deleting cluster capz-e2e-xgdh1z-gpu
INFO: Waiting for the Cluster capz-e2e-xgdh1z/capz-e2e-xgdh1z-gpu to be deleted
STEP: Waiting for cluster capz-e2e-xgdh1z-gpu to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-fh26r, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-lkcqx, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-xgdh1z
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and 1 node" ran for 37m8s on Ginkgo node 1 of 3

... skipping 59 lines ...
with a single control plane node, a Linux AzureMachinePool with 1 node, and a Windows AzureMachinePool with 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:579

INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Sun, 21 Nov 2021 07:54:06 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-t22d7u" for hosting the cluster
Nov 21 07:54:06.553: INFO: starting to create namespace for hosting the "capz-e2e-t22d7u" test spec
2021/11/21 07:54:06 failed trying to get namespace (capz-e2e-t22d7u): namespaces "capz-e2e-t22d7u" not found
INFO: Creating namespace capz-e2e-t22d7u
INFO: Creating event watcher for namespace "capz-e2e-t22d7u"
Nov 21 07:54:06.592: INFO: Creating cluster identity secret "cluster-identity-secret"
INFO: Cluster name is capz-e2e-t22d7u-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-t22d7u-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.4, 1 control-plane machine, 1 worker machine)
INFO: Getting the cluster template yaml
... skipping 12 lines ...
kubeadmconfig.bootstrap.cluster.x-k8s.io/capz-e2e-t22d7u-win-vmss-mp-win created
clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-t22d7u-win-vmss-flannel created
configmap/cni-capz-e2e-t22d7u-win-vmss-flannel created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1121 07:54:45.286474   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by capz-e2e-t22d7u/capz-e2e-t22d7u-win-vmss-control-plane to be provisioned
STEP: Waiting for one control plane node to exist
E1121 07:55:19.075774   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 07:56:11.351661   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 07:56:47.537815   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane capz-e2e-t22d7u/capz-e2e-t22d7u-win-vmss-control-plane to be ready (implies the underlying nodes are ready as well)
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
INFO: Waiting for the machine pools to be provisioned
STEP: Waiting for the machine pool workload nodes to exist
E1121 07:57:36.737652   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 07:58:26.884132   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Waiting for the machine pool workload nodes to exist
E1121 07:59:00.381762   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 07:59:58.156143   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 08:00:48.966791   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 08:01:48.653338   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/webmqgvq6 to be available
Nov 21 08:01:58.384: INFO: starting to wait for deployment to become available
Nov 21 08:02:18.511: INFO: Deployment default/webmqgvq6 is now available, took 20.127215975s
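As far as the log shows, the HTTP deployment is an ordinary Deployment running a web server that the later load-balancer checks target. A hedged client-go sketch of its shape (image, labels, and names are placeholders, not the suite's actual values):

package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// webDeployment builds a one-replica HTTP server Deployment.
func webDeployment(name string) *appsv1.Deployment {
	replicas := int32(1)
	labels := map[string]string{"app": name} // placeholder selector
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: "default"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "web",
						Image: "nginx", // placeholder image
						Ports: []corev1.ContainerPort{{ContainerPort: 80}},
					}},
				},
			},
		},
	}
}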
STEP: creating an internal Load Balancer service
Nov 21 08:02:18.511: INFO: starting to create an internal Load Balancer service
STEP: waiting for service default/webmqgvq6-ilb to be available
Nov 21 08:02:18.659: INFO: waiting for service default/webmqgvq6-ilb to be available
E1121 08:02:34.205810   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 21 08:03:08.858: INFO: service default/webmqgvq6-ilb is available, took 50.198279638s
STEP: connecting to the internal LB service from a curl pod
Nov 21 08:03:08.890: INFO: starting to create a curl to ilb job
STEP: waiting for job default/curl-to-ilb-job4j7zu to be complete
Nov 21 08:03:08.935: INFO: waiting for job default/curl-to-ilb-job4j7zu to be complete
E1121 08:03:16.366147   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 21 08:03:19.005: INFO: job default/curl-to-ilb-job4j7zu is complete, took 10.070082763s
STEP: deleting the ilb test resources
Nov 21 08:03:19.005: INFO: deleting the ilb service: webmqgvq6-ilb
Nov 21 08:03:19.064: INFO: deleting the ilb job: curl-to-ilb-job4j7zu
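On Azure, an internal load balancer is requested with a regular Service of type LoadBalancer carrying the azure-load-balancer-internal annotation, which is presumably how the -ilb Services above are built. A minimal sketch (selector and ports are placeholders):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// internalLBService builds a LoadBalancer Service that the Azure cloud
// provider will back with an internal rather than a public load balancer.
func internalLBService(app string) *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name:      app + "-ilb",
			Namespace: "default",
			Annotations: map[string]string{
				// This annotation selects an internal Azure load balancer.
				"service.beta.kubernetes.io/azure-load-balancer-internal": "true",
			},
		},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeLoadBalancer,
			Selector: map[string]string{"app": app}, // placeholder selector
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(80),
			}},
		},
	}
}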
STEP: creating an external Load Balancer service
Nov 21 08:03:19.099: INFO: starting to create an external Load Balancer service
STEP: waiting for service default/webmqgvq6-elb to be available
Nov 21 08:03:19.151: INFO: waiting for service default/webmqgvq6-elb to be available
E1121 08:04:12.005408   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 21 08:04:49.488: INFO: service default/webmqgvq6-elb is available, took 1m30.336655682s
STEP: connecting to the external LB service from a curl pod
Nov 21 08:04:49.521: INFO: starting to create curl-to-elb job
STEP: waiting for job default/curl-to-elb-jobadwam6cx4b7 to be complete
Nov 21 08:04:49.555: INFO: waiting for job default/curl-to-elb-jobadwam6cx4b7 to be complete
Nov 21 08:04:59.624: INFO: job default/curl-to-elb-jobadwam6cx4b7 is complete, took 10.069024972s
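The curl-to-elb job looks like a small batch Job that curls the load balancer address and completes once the request succeeds. A sketch under that assumption (image and command are guesses, not the suite's actual manifest):

package sketch

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// curlJob builds a Job that completes once curl reaches the given URL.
func curlJob(name, url string) *batchv1.Job {
	backoff := int32(4)
	return &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: "default"},
		Spec: batchv1.JobSpec{
			BackoffLimit: &backoff,
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyNever,
					Containers: []corev1.Container{{
						Name:    "curl",
						Image:   "curlimages/curl", // assumed image
						Command: []string{"curl", "-fsS", url},
					}},
				},
			},
		},
	}
}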
... skipping 6 lines ...
Nov 21 08:04:59.737: INFO: starting to delete deployment webmqgvq6
Nov 21 08:04:59.769: INFO: starting to delete job curl-to-elb-jobadwam6cx4b7
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web-windowslbvx27 to be available
Nov 21 08:04:59.934: INFO: starting to wait for deployment to become available
E1121 08:05:06.338505   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 08:06:05.310626   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 21 08:06:10.237: INFO: Deployment default/web-windowslbvx27 is now available, took 1m10.30309877s
STEP: creating an internal Load Balancer service
Nov 21 08:06:10.237: INFO: starting to create an internal Load Balancer service
STEP: waiting for service default/web-windowslbvx27-ilb to be available
Nov 21 08:06:10.295: INFO: waiting for service default/web-windowslbvx27-ilb to be available
E1121 08:06:40.149635   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 21 08:07:00.497: INFO: service default/web-windowslbvx27-ilb is available, took 50.202102855s
STEP: connecting to the internal LB service from a curl pod
Nov 21 08:07:00.529: INFO: starting to create a curl to ilb job
STEP: waiting for job default/curl-to-ilb-job3ab5q to be complete
Nov 21 08:07:00.564: INFO: waiting for job default/curl-to-ilb-job3ab5q to be complete
Nov 21 08:07:10.637: INFO: job default/curl-to-ilb-job3ab5q is complete, took 10.073158041s
STEP: deleting the ilb test resources
Nov 21 08:07:10.638: INFO: deleting the ilb service: web-windowslbvx27-ilb
Nov 21 08:07:10.707: INFO: deleting the ilb job: curl-to-ilb-job3ab5q
STEP: creating an external Load Balancer service
Nov 21 08:07:10.746: INFO: starting to create an external Load Balancer service
STEP: waiting for service default/web-windowslbvx27-elb to be available
Nov 21 08:07:10.817: INFO: waiting for service default/web-windowslbvx27-elb to be available
E1121 08:07:38.846825   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 08:08:28.311923   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 21 08:09:21.278: INFO: service default/web-windowslbvx27-elb is available, took 2m10.461103589s
STEP: connecting to the external LB service from a curl pod
Nov 21 08:09:21.309: INFO: starting to create curl-to-elb job
STEP: waiting for job default/curl-to-elb-jobstoz0qjzfo4 to be complete
Nov 21 08:09:21.348: INFO: waiting for job default/curl-to-elb-jobstoz0qjzfo4 to be complete
E1121 08:09:23.445977   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 21 08:09:31.420: INFO: job default/curl-to-elb-jobstoz0qjzfo4 is complete, took 10.072433376s
STEP: connecting directly to the external LB service
Nov 21 08:09:31.420: INFO: starting attempts to connect directly to the external LB service
2021/11/21 08:09:31 [DEBUG] GET http://20.84.39.77
Nov 21 08:09:31.481: INFO: successfully connected to the external LB service
STEP: deleting the test resources
... skipping 9 lines ...
Nov 21 08:09:42.972: INFO: Collecting logs for node capz-e2e-t22d7u-win-vmss-mp-0000000 in cluster capz-e2e-t22d7u-win-vmss in namespace capz-e2e-t22d7u

Nov 21 08:09:54.214: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-e2e-t22d7u-win-vmss-mp-0

Nov 21 08:09:54.608: INFO: Collecting logs for node win-p-win000000 in cluster capz-e2e-t22d7u-win-vmss in namespace capz-e2e-t22d7u

E1121 08:10:13.237121   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 21 08:10:25.247: INFO: Collecting boot logs for VMSS instance 0 of scale set win-p-win

STEP: Dumping workload cluster capz-e2e-t22d7u/capz-e2e-t22d7u-win-vmss kube-system pod logs
STEP: Fetching kube-system pod logs took 345.168911ms
STEP: Dumping workload cluster capz-e2e-t22d7u/capz-e2e-t22d7u-win-vmss Azure activity log
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-5rdll, container kube-flannel
... skipping 11 lines ...
STEP: Fetching activity logs took 890.426444ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-t22d7u" namespace
STEP: Deleting all clusters in the capz-e2e-t22d7u namespace
STEP: Deleting cluster capz-e2e-t22d7u-win-vmss
INFO: Waiting for the Cluster capz-e2e-t22d7u/capz-e2e-t22d7u-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-t22d7u-win-vmss to be deleted
E1121 08:10:55.642220   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-5zrkq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-kzssl, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-mggzr, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-5rdll, container kube-flannel: http2: client connection lost
E1121 08:11:26.006040   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 08:12:14.419852   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 08:12:50.185992   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 08:13:20.495710   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 08:13:56.657998   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 08:14:54.045863   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 08:15:29.571241   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 08:16:23.249279   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 08:17:07.112274   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 08:17:54.404207   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-t22d7u
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1121 08:18:53.141637   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 08:19:25.427440   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1121 08:20:05.694462   24445 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-jgmz6m/events?resourceVersion=12467": dial tcp: lookup capz-e2e-jgmz6m-public-custom-vnet-a3822645.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 26m2s on Ginkgo node 2 of 3


• [SLOW TEST:1561.634 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:532

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Sun, 21 Nov 2021 07:47:29 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-vxdjqn" for hosting the cluster
Nov 21 07:47:29.804: INFO: starting to create namespace for hosting the "capz-e2e-vxdjqn" test spec
2021/11/21 07:47:29 failed trying to get namespace (capz-e2e-vxdjqn): namespaces "capz-e2e-vxdjqn" not found
INFO: Creating namespace capz-e2e-vxdjqn
INFO: Creating event watcher for namespace "capz-e2e-vxdjqn"
Nov 21 07:47:29.852: INFO: Creating cluster identity secret "cluster-identity-secret"
INFO: Cluster name is capz-e2e-vxdjqn-win-ha
INFO: Creating the workload cluster with name "capz-e2e-vxdjqn-win-ha" using the "windows" template (Kubernetes v1.22.4, 3 control-plane machines, 1 worker machine)
INFO: Getting the cluster template yaml
... skipping 55 lines ...
STEP: waiting for job default/curl-to-elb-jobm7jle7ubyyu to be complete
Nov 21 08:03:13.484: INFO: waiting for job default/curl-to-elb-jobm7jle7ubyyu to be complete
Nov 21 08:03:23.553: INFO: job default/curl-to-elb-jobm7jle7ubyyu is complete, took 10.069086863s
STEP: connecting directly to the external LB service
Nov 21 08:03:23.553: INFO: starting attempts to connect directly to the external LB service
2021/11/21 08:03:23 [DEBUG] GET http://20.85.136.5
2021/11/21 08:03:53 [ERR] GET http://20.85.136.5 request failed: Get "http://20.85.136.5": dial tcp 20.85.136.5:80: i/o timeout
2021/11/21 08:03:53 [DEBUG] GET http://20.85.136.5: retrying in 1s (4 left)
2021/11/21 08:04:24 [ERR] GET http://20.85.136.5 request failed: Get "http://20.85.136.5": dial tcp 20.85.136.5:80: i/o timeout
2021/11/21 08:04:24 [DEBUG] GET http://20.85.136.5: retrying in 2s (3 left)
Nov 21 08:04:33.683: INFO: successfully connected to the external LB service
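The [ERR]/[DEBUG] lines with "retrying in 1s (4 left)" match the log format of a retrying HTTP client such as hashicorp/go-retryablehttp; the i/o timeouts are expected while the external load balancer finishes provisioning. A sketch under that assumption, reusing the address from the log:

package sketch

import (
	"fmt"
	"time"

	retryablehttp "github.com/hashicorp/go-retryablehttp"
)

// checkELB GETs the external LB address, retrying with backoff while the
// load balancer provisions. The IP is the one shown in the log above.
func checkELB() error {
	client := retryablehttp.NewClient()
	client.RetryMax = 4                   // "(4 left)" suggests 5 attempts total
	client.RetryWaitMin = 1 * time.Second // matches "retrying in 1s"
	resp, err := client.Get("http://20.85.136.5")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
	return nil
}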
STEP: deleting the test resources
Nov 21 08:04:33.683: INFO: starting to delete external LB service web89zqba-elb
Nov 21 08:04:33.759: INFO: starting to delete deployment web89zqba
Nov 21 08:04:33.798: INFO: starting to delete job curl-to-elb-jobm7jle7ubyyu
... skipping 25 lines ...
STEP: waiting for job default/curl-to-elb-jobkje9dylx3xo to be complete
Nov 21 08:08:35.426: INFO: waiting for job default/curl-to-elb-jobkje9dylx3xo to be complete
Nov 21 08:08:45.496: INFO: job default/curl-to-elb-jobkje9dylx3xo is complete, took 10.070173866s
STEP: connecting directly to the external LB service
Nov 21 08:08:45.496: INFO: starting attempts to connect directly to the external LB service
2021/11/21 08:08:45 [DEBUG] GET http://20.85.136.179
2021/11/21 08:09:15 [ERR] GET http://20.85.136.179 request failed: Get "http://20.85.136.179": dial tcp 20.85.136.179:80: i/o timeout
2021/11/21 08:09:15 [DEBUG] GET http://20.85.136.179: retrying in 1s (4 left)
Nov 21 08:09:16.561: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 21 08:09:16.561: INFO: starting to delete external LB service web-windows2j6jjo-elb
Nov 21 08:09:16.647: INFO: starting to delete deployment web-windows2j6jjo
Nov 21 08:09:16.682: INFO: starting to delete job curl-to-elb-jobkje9dylx3xo
... skipping 49 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-2mjx6, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-tmvbx, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-vxdjqn-win-ha-control-plane-v8zk4, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-windows-amd64-7klqx, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-sbzgr, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-vxdjqn-win-ha-control-plane-v8zk4, container kube-apiserver
STEP: Got error while iterating over activity logs for resource group capz-e2e-vxdjqn-win-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001234991s
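The 30.001s fetch time next to the insights.ActivityLogsClient failure suggests the activity-log dump ran under a roughly 30-second context deadline that expired mid-pagination. A generic Go sketch of that failure mode (fetchNextPage is a hypothetical stand-in for the SDK's paginated listNextResults call):

package sketch

import (
	"context"
	"time"
)

// fetchNextPage is a hypothetical stand-in for a paginated SDK call such as
// insights.ActivityLogsClient#listNextResults.
func fetchNextPage(ctx context.Context) (done bool, err error) {
	select {
	case <-time.After(10 * time.Second): // pretend each page takes 10s
		return false, nil
	case <-ctx.Done():
		return false, ctx.Err() // "context deadline exceeded"
	}
}

// dumpActivityLogs pages through results until the 30s deadline fires.
func dumpActivityLogs() error {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	for {
		done, err := fetchNextPage(ctx)
		if err != nil {
			return err // the deadline expires mid-pagination, as in the log
		}
		if done {
			return nil
		}
	}
}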
STEP: Dumping all the Cluster API resources in the "capz-e2e-vxdjqn" namespace
STEP: Deleting all clusters in the capz-e2e-vxdjqn namespace
STEP: Deleting cluster capz-e2e-vxdjqn-win-ha
INFO: Waiting for the Cluster capz-e2e-vxdjqn/capz-e2e-vxdjqn-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-vxdjqn-win-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-2mjx6, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-xt2g4, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-mhjt5, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-vxdjqn-win-ha-control-plane-7lnbg, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-lvxwt, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-vxdjqn-win-ha-control-plane-7lnbg, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-thq9n, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-vxdjqn-win-ha-control-plane-7lnbg, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-7klqx, container kube-flannel: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-vxdjqn
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 34m43s on Ginkgo node 3 of 3

... skipping 9 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Workload cluster creation Creating a GPU-enabled cluster [It] with a single control plane node and 1 node 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:76

Ran 9 of 24 Specs in 6012.690 seconds
FAIL! -- 8 Passed | 1 Failed | 0 Pending | 15 Skipped


Ginkgo ran 1 suite in 1h41m50.976544133s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc
make[1]: *** [Makefile:176: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:184: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...