Result: FAILURE
Tests: 0 failed / 5 succeeded
Started: 2021-11-28 18:39
Elapsed: 2h15m
Revision: main

No Test Failures!


Passed tests: 5

Skipped tests: 15

Error lines from build-log.txt

... skipping 432 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:288

INFO: "With ipv6 worker node" started at Sun, 28 Nov 2021 18:48:11 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-wizb5s" for hosting the cluster
Nov 28 18:48:11.124: INFO: starting to create namespace for hosting the "capz-e2e-wizb5s" test spec
2021/11/28 18:48:11 failed trying to get namespace (capz-e2e-wizb5s):namespaces "capz-e2e-wizb5s" not found
INFO: Creating namespace capz-e2e-wizb5s
INFO: Creating event watcher for namespace "capz-e2e-wizb5s"
Nov 28 18:48:11.169: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-wizb5s-ipv6
INFO: Creating the workload cluster with name "capz-e2e-wizb5s-ipv6" using the "ipv6" template (Kubernetes v1.22.4, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
STEP: Fetching activity logs took 526.849407ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-wizb5s" namespace
STEP: Deleting all clusters in the capz-e2e-wizb5s namespace
STEP: Deleting cluster capz-e2e-wizb5s-ipv6
INFO: Waiting for the Cluster capz-e2e-wizb5s/capz-e2e-wizb5s-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-wizb5s-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-wizb5s-ipv6-control-plane-g78f4, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-wizb5s-ipv6-control-plane-g78f4, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-9spk8, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5hbs7, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-wizb5s-ipv6-control-plane-j79jb, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-p4gd8, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-wizb5s-ipv6-control-plane-698tp, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-wizb5s-ipv6-control-plane-698tp, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-hsngw, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-wizb5s-ipv6-control-plane-j79jb, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-wizb5s-ipv6-control-plane-g78f4, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-wizb5s-ipv6-control-plane-g78f4, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-wizb5s-ipv6-control-plane-698tp, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-n6tzr, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-z278c, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-lmkx5, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-wizb5s-ipv6-control-plane-j79jb, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-zk7j4, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-wizb5s-ipv6-control-plane-698tp, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-wizb5s-ipv6-control-plane-j79jb, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-545pp, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-fv4sh, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-ssv72, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-wizb5s
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 18m17s on Ginkgo node 1 of 3

... skipping 10 lines ...
  with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:334

INFO: "with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes" started at Sun, 28 Nov 2021 19:06:28 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-3asd5p" for hosting the cluster
Nov 28 19:06:28.037: INFO: starting to create namespace for hosting the "capz-e2e-3asd5p" test spec
2021/11/28 19:06:28 failed trying to get namespace (capz-e2e-3asd5p):namespaces "capz-e2e-3asd5p" not found
INFO: Creating namespace capz-e2e-3asd5p
INFO: Creating event watcher for namespace "capz-e2e-3asd5p"
Nov 28 19:06:28.072: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-3asd5p-vmss
INFO: Creating the workload cluster with name "capz-e2e-3asd5p-vmss" using the "machine-pool" template (Kubernetes v1.22.4, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 96 lines ...
STEP: waiting for job default/curl-to-elb-joblj20fd9o7ep to be complete
Nov 28 19:20:58.232: INFO: waiting for job default/curl-to-elb-joblj20fd9o7ep to be complete
Nov 28 19:21:08.457: INFO: job default/curl-to-elb-joblj20fd9o7ep is complete, took 10.224802328s
STEP: connecting directly to the external LB service
Nov 28 19:21:08.457: INFO: starting attempts to connect directly to the external LB service
2021/11/28 19:21:08 [DEBUG] GET http://20.54.239.186
2021/11/28 19:21:38 [ERR] GET http://20.54.239.186 request failed: Get "http://20.54.239.186": dial tcp 20.54.239.186:80: i/o timeout
2021/11/28 19:21:38 [DEBUG] GET http://20.54.239.186: retrying in 1s (4 left)
Nov 28 19:21:46.959: INFO: successfully connected to the external LB service
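
The "[DEBUG] GET ..." / "retrying in 1s (4 left)" lines above show the test retrying the load balancer's public IP until it answers: the first attempt times out while the LB is still converging, and a later retry succeeds. The log format matches the hashicorp/go-retryablehttp client; a minimal sketch of that pattern, with an illustrative URL and retry settings rather than the suite's exact configuration:

    package main

    import (
        "fmt"
        "time"

        "github.com/hashicorp/go-retryablehttp"
    )

    func main() {
        client := retryablehttp.NewClient()
        client.RetryMax = 4                   // matches the "(4 left)" countdown in the log
        client.RetryWaitMin = 1 * time.Second // matches "retrying in 1s"
        client.RetryWaitMax = 30 * time.Second

        // Keep retrying the external LB endpoint until it answers or retries run out.
        resp, err := client.Get("http://20.54.239.186")
        if err != nil {
            fmt.Println("external LB never became reachable:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("connected to the external LB service:", resp.Status)
    }
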
STEP: deleting the test resources
Nov 28 19:21:46.959: INFO: starting to delete external LB service web-windowsn6fifs-elb
Nov 28 19:21:47.090: INFO: starting to delete deployment web-windowsn6fifs
Nov 28 19:21:47.205: INFO: starting to delete job curl-to-elb-joblj20fd9o7ep
... skipping 41 lines ...
Nov 28 19:26:21.363: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set win-p-win

Nov 28 19:26:21.749: INFO: INFO: Collecting logs for node win-p-win000001 in cluster capz-e2e-3asd5p-vmss in namespace capz-e2e-3asd5p

Nov 28 19:26:56.782: INFO: INFO: Collecting boot logs for VMSS instance 1 of scale set win-p-win

Failed to get logs for machine pool capz-e2e-3asd5p-vmss-mp-win, cluster capz-e2e-3asd5p/capz-e2e-3asd5p-vmss: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster capz-e2e-3asd5p/capz-e2e-3asd5p-vmss kube-system pod logs
STEP: Creating log watcher for controller kube-system/calico-node-b2r4l, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-windows-22cv2, container calico-node-felix
STEP: Creating log watcher for controller kube-system/calico-node-windows-6wvth, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-fk5l7, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-3asd5p-vmss-control-plane-z2748, container kube-controller-manager
... skipping 10 lines ...
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-5ztjs, container coredns
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-3asd5p-vmss-control-plane-z2748, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-596tg, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-cdv6x, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-s5xms, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-windows-22cv2, container calico-node-startup
STEP: Got error while iterating over activity logs for resource group capz-e2e-3asd5p-vmss: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000904638s
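
The activity-log error above ("listNextResults: Failure sending next results request ... context deadline exceeded"), together with a fetch time of almost exactly 30 s, indicates the paging of Azure activity logs was cut short by a 30-second context deadline rather than a persistent Azure failure. A minimal sketch of how a shared deadline bounds a paged listing loop (listPage is a placeholder, not the real insights.ActivityLogsClient call):

    package main

    import (
        "context"
        "fmt"
        "time"
    )

    // listPage stands in for fetching one page of activity-log results.
    func listPage(ctx context.Context) error {
        select {
        case <-time.After(2 * time.Second): // simulated per-page request latency
            return nil
        case <-ctx.Done():
            return ctx.Err() // "context deadline exceeded" once the shared budget is spent
        }
    }

    func main() {
        // Every page request shares one 30-second budget, so a slow sequence of
        // pages ends with a deadline error on the next "listNextResults" call.
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()

        for page := 0; ; page++ {
            if err := listPage(ctx); err != nil {
                fmt.Printf("stopped after %d pages: %v\n", page, err)
                return
            }
        }
    }
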
STEP: Dumping all the Cluster API resources in the "capz-e2e-3asd5p" namespace
STEP: Deleting all clusters in the capz-e2e-3asd5p namespace
STEP: Deleting cluster capz-e2e-3asd5p-vmss
INFO: Waiting for the Cluster capz-e2e-3asd5p/capz-e2e-3asd5p-vmss to be deleted
STEP: Waiting for cluster capz-e2e-3asd5p-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-3asd5p-vmss-control-plane-z2748, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-3asd5p-vmss-control-plane-z2748, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-3asd5p-vmss-control-plane-z2748, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-jp9lp, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-fk5l7, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-b2r4l, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-5ztjs, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-4nh9q, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-s5xms, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-kkzc4, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-r24j5, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-3asd5p-vmss-control-plane-z2748, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5cnpr, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-3asd5p
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes" ran for 28m17s on Ginkgo node 1 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:205

INFO: "With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes" started at Sun, 28 Nov 2021 18:48:10 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-9i3jd2" for hosting the cluster
Nov 28 18:48:10.493: INFO: starting to create namespace for hosting the "capz-e2e-9i3jd2" test spec
2021/11/28 18:48:10 failed trying to get namespace (capz-e2e-9i3jd2):namespaces "capz-e2e-9i3jd2" not found
INFO: Creating namespace capz-e2e-9i3jd2
INFO: Creating event watcher for namespace "capz-e2e-9i3jd2"
Nov 28 18:48:10.529: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-9i3jd2-ha
INFO: Creating the workload cluster with name "capz-e2e-9i3jd2-ha" using the "(default)" template (Kubernetes v1.22.4, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 75 lines ...
Nov 28 18:59:24.239: INFO: starting to delete external LB service webdk1m67-elb
Nov 28 18:59:24.423: INFO: starting to delete deployment webdk1m67
Nov 28 18:59:24.544: INFO: starting to delete job curl-to-elb-jobwtfgbyhhzlt
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Nov 28 18:59:24.709: INFO: starting to create dev deployment namespace
2021/11/28 18:59:24 failed trying to get namespace (development):namespaces "development" not found
2021/11/28 18:59:24 namespace development does not exist, creating...
STEP: Creating production namespace
Nov 28 18:59:24.948: INFO: starting to create prod deployment namespace
2021/11/28 18:59:25 failed trying to get namespace (production):namespaces "production" not found
2021/11/28 18:59:25 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
Nov 28 18:59:25.186: INFO: starting to create frontend-prod deployments
Nov 28 18:59:25.305: INFO: starting to create frontend-dev deployments
Nov 28 18:59:25.422: INFO: starting to create backend deployments
Nov 28 18:59:25.540: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Nov 28 18:59:52.421: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.142.196 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 28 19:02:03.956: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
STEP: Applying a network policy to deny egress access in development namespace
Nov 28 19:02:04.404: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.142.196 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.142.196 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 28 19:06:26.047: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Nov 28 19:06:26.454: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.160.3 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 28 19:08:39.167: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
Nov 28 19:08:39.569: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.160.2 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.160.3 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 28 19:13:03.361: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Nov 28 19:13:03.806: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.142.196 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 28 19:15:16.529: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
Nov 28 19:15:16.934: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.142.196 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
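
The bare "curl: (7) Failed to connect ... Connection timed out" lines in the network-policy steps above are expected output rather than test failures: after each deny policy is applied, the suite verifies it by attempting a connection that should now be blocked and treats the timeout as success. A minimal sketch of that kind of negative connectivity check (the address, timeout, and helper name are illustrative, not the suite's actual code):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // expectBlocked dials the target and succeeds only when the connection
    // attempt fails, i.e. when the NetworkPolicy really is denying traffic.
    func expectBlocked(addr string, timeout time.Duration) error {
        conn, err := net.DialTimeout("tcp", addr, timeout)
        if err != nil {
            return nil // refused or timed out: traffic is blocked as intended
        }
        conn.Close()
        return fmt.Errorf("expected %s to be unreachable, but the connection succeeded", addr)
    }

    func main() {
        if err := expectBlocked("192.168.142.196:80", 10*time.Second); err != nil {
            fmt.Println("policy check failed:", err)
            return
        }
        fmt.Println("ingress to the backend pod is blocked, as the policy requires")
    }
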
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web-windowshq623f to be available
Nov 28 19:17:29.077: INFO: starting to wait for deployment to become available
Nov 28 19:18:40.058: INFO: Deployment default/web-windowshq623f is now available, took 1m10.981534779s
... skipping 20 lines ...
STEP: waiting for job default/curl-to-elb-jobey3rhh3c296 to be complete
Nov 28 19:22:03.837: INFO: waiting for job default/curl-to-elb-jobey3rhh3c296 to be complete
Nov 28 19:22:14.068: INFO: job default/curl-to-elb-jobey3rhh3c296 is complete, took 10.23080498s
STEP: connecting directly to the external LB service
Nov 28 19:22:14.068: INFO: starting attempts to connect directly to the external LB service
2021/11/28 19:22:14 [DEBUG] GET http://20.56.188.50
2021/11/28 19:22:44 [ERR] GET http://20.56.188.50 request failed: Get "http://20.56.188.50": dial tcp 20.56.188.50:80: i/o timeout
2021/11/28 19:22:44 [DEBUG] GET http://20.56.188.50: retrying in 1s (4 left)
Nov 28 19:23:00.689: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 28 19:23:00.689: INFO: starting to delete external LB service web-windowshq623f-elb
Nov 28 19:23:00.873: INFO: starting to delete deployment web-windowshq623f
Nov 28 19:23:00.995: INFO: starting to delete job curl-to-elb-jobey3rhh3c296
... skipping 20 lines ...
Nov 28 19:24:10.651: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-9i3jd2-ha-md-0-vv8bt

Nov 28 19:24:11.050: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster capz-e2e-9i3jd2-ha in namespace capz-e2e-9i3jd2

Nov 28 19:24:43.397: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-9i3jd2-ha-md-win-xkcss

Failed to get logs for machine capz-e2e-9i3jd2-ha-md-win-6499cd966-j5b68, cluster capz-e2e-9i3jd2/capz-e2e-9i3jd2-ha: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov 28 19:24:43.814: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster capz-e2e-9i3jd2-ha in namespace capz-e2e-9i3jd2

Nov 28 19:25:21.661: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-9i3jd2-ha-md-win-hz8xk

Failed to get logs for machine capz-e2e-9i3jd2-ha-md-win-6499cd966-tgkdk, cluster capz-e2e-9i3jd2/capz-e2e-9i3jd2-ha: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster capz-e2e-9i3jd2/capz-e2e-9i3jd2-ha kube-system pod logs
STEP: Fetching kube-system pod logs took 984.94825ms
STEP: Dumping workload cluster capz-e2e-9i3jd2/capz-e2e-9i3jd2-ha Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-kg4mm, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/calico-node-bhgnm, container calico-node
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-9i3jd2-ha-control-plane-b48hz, container kube-apiserver
... skipping 22 lines ...
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-9i3jd2-ha-control-plane-4kggj, container kube-apiserver
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-9i3jd2-ha-control-plane-4kggj, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-zqgmm, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-q9btn, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-windows-2tff6, container calico-node-felix
STEP: Creating log watcher for controller kube-system/calico-node-windows-kx9tg, container calico-node-startup
STEP: Got error while iterating over activity logs for resource group capz-e2e-9i3jd2-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000941473s
STEP: Dumping all the Cluster API resources in the "capz-e2e-9i3jd2" namespace
STEP: Deleting all clusters in the capz-e2e-9i3jd2 namespace
STEP: Deleting cluster capz-e2e-9i3jd2-ha
INFO: Waiting for the Cluster capz-e2e-9i3jd2/capz-e2e-9i3jd2-ha to be deleted
STEP: Waiting for cluster capz-e2e-9i3jd2-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-9i3jd2-ha-control-plane-b48hz, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-9i3jd2-ha-control-plane-4kggj, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-bhgnm, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-9i3jd2-ha-control-plane-b48hz, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-9i3jd2-ha-control-plane-b48hz, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-hm5tq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-hh7bf, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-9i3jd2-ha-control-plane-4kggj, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-zpzwj, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-kx9tg, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-zqgmm, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-rpsjx, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-2tff6, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-q9btn, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-kx9tg, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-9i3jd2-ha-control-plane-b48hz, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-9i3jd2-ha-control-plane-4kggj, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-9i3jd2-ha-control-plane-4kggj, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-kg4mm, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-2tff6, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-z7ptn, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-29mc9, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-t5vdb, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-9i3jd2
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes" ran for 47m19s on Ginkgo node 2 of 3

... skipping 8 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:144

INFO: "Creates a public management cluster in the same vnet" started at Sun, 28 Nov 2021 18:48:09 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-6if4zr" for hosting the cluster
Nov 28 18:48:09.926: INFO: starting to create namespace for hosting the "capz-e2e-6if4zr" test spec
2021/11/28 18:48:09 failed trying to get namespace (capz-e2e-6if4zr):namespaces "capz-e2e-6if4zr" not found
INFO: Creating namespace capz-e2e-6if4zr
INFO: Creating event watcher for namespace "capz-e2e-6if4zr"
Nov 28 18:48:09.970: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-6if4zr-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-lnp9l, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-c9k2z, container kube-proxy
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-6if4zr-public-custom-vnet-control-plane-p8725, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-mj7wl, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-6if4zr-public-custom-vnet-control-plane-p8725, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-6if4zr-public-custom-vnet-control-plane-p8725, container kube-apiserver
STEP: Got error while iterating over activity logs for resource group capz-e2e-6if4zr-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000924947s
STEP: Dumping all the Cluster API resources in the "capz-e2e-6if4zr" namespace
STEP: Deleting all clusters in the capz-e2e-6if4zr namespace
STEP: Deleting cluster capz-e2e-6if4zr-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-6if4zr/capz-e2e-6if4zr-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-6if4zr-public-custom-vnet to be deleted
W1128 19:42:24.025985   24378 reflector.go:441] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I1128 19:42:55.274774   24378 trace.go:205] Trace[391892447]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (28-Nov-2021 19:42:25.273) (total time: 30001ms):
Trace[391892447]: [30.001053715s] [30.001053715s] END
E1128 19:42:55.274827   24378 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-6if4zr/events?resourceVersion=12012": dial tcp 20.73.122.169:6443: i/o timeout
I1128 19:43:27.473440   24378 trace.go:205] Trace[1336910951]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (28-Nov-2021 19:42:57.472) (total time: 30000ms):
Trace[1336910951]: [30.00099073s] [30.00099073s] END
E1128 19:43:27.473500   24378 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-6if4zr/events?resourceVersion=12012": dial tcp 20.73.122.169:6443: i/o timeout
I1128 19:44:02.957332   24378 trace.go:205] Trace[156801872]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (28-Nov-2021 19:43:32.956) (total time: 30000ms):
Trace[156801872]: [30.000765384s] [30.000765384s] END
E1128 19:44:02.957407   24378 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-6if4zr/events?resourceVersion=12012": dial tcp 20.73.122.169:6443: i/o timeout
I1128 19:44:41.283042   24378 trace.go:205] Trace[349100351]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (28-Nov-2021 19:44:11.281) (total time: 30001ms):
Trace[349100351]: [30.001417191s] [30.001417191s] END
E1128 19:44:41.283085   24378 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-6if4zr/events?resourceVersion=12012": dial tcp 20.73.122.169:6443: i/o timeout
E1128 19:44:55.798014   24378 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-6if4zr/events?resourceVersion=12012": dial tcp: lookup capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-6if4zr
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Nov 28 19:45:04.779: INFO: deleting an existing virtual network "custom-vnet"
Nov 28 19:45:16.055: INFO: deleting an existing route table "node-routetable"
Nov 28 19:45:26.688: INFO: deleting an existing network security group "node-nsg"
Nov 28 19:45:37.956: INFO: deleting an existing network security group "control-plane-nsg"
E1128 19:45:45.890344   24378 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-6if4zr/events?resourceVersion=12012": dial tcp: lookup capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 28 19:45:49.252: INFO: verifying the existing resource group "capz-e2e-6if4zr-public-custom-vnet" is empty
Nov 28 19:45:49.292: INFO: deleting the existing resource group "capz-e2e-6if4zr-public-custom-vnet"
E1128 19:46:41.184121   24378 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-6if4zr/events?resourceVersion=12012": dial tcp: lookup capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1128 19:47:39.581020   24378 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-6if4zr/events?resourceVersion=12012": dial tcp: lookup capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 59m55s on Ginkgo node 3 of 3


• [SLOW TEST:3594.575 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a 1 control plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:455

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Sun, 28 Nov 2021 19:35:29 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-u15m3v" for hosting the cluster
Nov 28 19:35:29.910: INFO: starting to create namespace for hosting the "capz-e2e-u15m3v" test spec
2021/11/28 19:35:29 failed trying to get namespace (capz-e2e-u15m3v):namespaces "capz-e2e-u15m3v" not found
INFO: Creating namespace capz-e2e-u15m3v
INFO: Creating event watcher for namespace "capz-e2e-u15m3v"
Nov 28 19:35:29.940: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-u15m3v-oot
INFO: Creating the workload cluster with name "capz-e2e-u15m3v-oot" using the "external-cloud-provider" template (Kubernetes v1.22.4, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 98 lines ...
STEP: Fetching activity logs took 507.30859ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-u15m3v" namespace
STEP: Deleting all clusters in the capz-e2e-u15m3v namespace
STEP: Deleting cluster capz-e2e-u15m3v-oot
INFO: Waiting for the Cluster capz-e2e-u15m3v/capz-e2e-u15m3v-oot to be deleted
STEP: Waiting for cluster capz-e2e-u15m3v-oot to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-lfj2n, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-llmj6, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-2rfg6, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-ggllc, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-vbngr, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-txw2m, container cloud-node-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-u15m3v
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 16m59s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:413

INFO: "with a single control plane node and 1 node" started at Sun, 28 Nov 2021 19:34:45 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-u00ga4" for hosting the cluster
Nov 28 19:34:45.346: INFO: starting to create namespace for hosting the "capz-e2e-u00ga4" test spec
2021/11/28 19:34:45 failed trying to get namespace (capz-e2e-u00ga4):namespaces "capz-e2e-u00ga4" not found
INFO: Creating namespace capz-e2e-u00ga4
INFO: Creating event watcher for namespace "capz-e2e-u00ga4"
Nov 28 19:34:45.372: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-u00ga4-gpu
INFO: Creating the workload cluster with name "capz-e2e-u00ga4-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.4, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 53 lines ...
STEP: Fetching activity logs took 933.188082ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-u00ga4" namespace
STEP: Deleting all clusters in the capz-e2e-u00ga4 namespace
STEP: Deleting cluster capz-e2e-u00ga4-gpu
INFO: Waiting for the Cluster capz-e2e-u00ga4/capz-e2e-u00ga4-gpu to be deleted
STEP: Waiting for cluster capz-e2e-u00ga4-gpu to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-u00ga4-gpu-control-plane-vmbvp, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-frcz9, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-cpwsc, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-u00ga4-gpu-control-plane-vmbvp, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-u00ga4-gpu-control-plane-vmbvp, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-b78x4, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-br6t8, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-u00ga4-gpu-control-plane-vmbvp, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-tds7q, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-4fqm6, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-qpp6g, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-u00ga4
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and 1 node" ran for 36m28s on Ginkgo node 1 of 3

... skipping 57 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:532

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Sun, 28 Nov 2021 19:52:28 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-yj7qys" for hosting the cluster
Nov 28 19:52:28.434: INFO: starting to create namespace for hosting the "capz-e2e-yj7qys" test spec
2021/11/28 19:52:28 failed trying to get namespace (capz-e2e-yj7qys):namespaces "capz-e2e-yj7qys" not found
INFO: Creating namespace capz-e2e-yj7qys
INFO: Creating event watcher for namespace "capz-e2e-yj7qys"
Nov 28 19:52:28.466: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-yj7qys-win-ha
INFO: Creating the workload cluster with name "capz-e2e-yj7qys-win-ha" using the "windows" template (Kubernetes v1.22.4, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 55 lines ...
STEP: waiting for job default/curl-to-elb-joblcfaxx8sxch to be complete
Nov 28 20:03:04.293: INFO: waiting for job default/curl-to-elb-joblcfaxx8sxch to be complete
Nov 28 20:03:14.525: INFO: job default/curl-to-elb-joblcfaxx8sxch is complete, took 10.231474029s
STEP: connecting directly to the external LB service
Nov 28 20:03:14.525: INFO: starting attempts to connect directly to the external LB service
2021/11/28 20:03:14 [DEBUG] GET http://20.86.249.168
2021/11/28 20:03:44 [ERR] GET http://20.86.249.168 request failed: Get "http://20.86.249.168": dial tcp 20.86.249.168:80: i/o timeout
2021/11/28 20:03:44 [DEBUG] GET http://20.86.249.168: retrying in 1s (4 left)
Nov 28 20:03:52.907: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 28 20:03:52.907: INFO: starting to delete external LB service web12h3z6-elb
Nov 28 20:03:53.083: INFO: starting to delete deployment web12h3z6
Nov 28 20:03:53.205: INFO: starting to delete job curl-to-elb-joblcfaxx8sxch
... skipping 25 lines ...
STEP: waiting for job default/curl-to-elb-joblzcdbn41jvz to be complete
Nov 28 20:08:48.625: INFO: waiting for job default/curl-to-elb-joblzcdbn41jvz to be complete
Nov 28 20:08:58.859: INFO: job default/curl-to-elb-joblzcdbn41jvz is complete, took 10.234507237s
STEP: connecting directly to the external LB service
Nov 28 20:08:58.859: INFO: starting attempts to connect directly to the external LB service
2021/11/28 20:08:58 [DEBUG] GET http://20.86.249.168
2021/11/28 20:09:28 [ERR] GET http://20.86.249.168 request failed: Get "http://20.86.249.168": dial tcp 20.86.249.168:80: i/o timeout
2021/11/28 20:09:28 [DEBUG] GET http://20.86.249.168: retrying in 1s (4 left)
Nov 28 20:09:45.423: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 28 20:09:45.423: INFO: starting to delete external LB service web-windowsbkkv4p-elb
Nov 28 20:09:45.615: INFO: starting to delete deployment web-windowsbkkv4p
Nov 28 20:09:45.737: INFO: starting to delete job curl-to-elb-joblzcdbn41jvz
... skipping 49 lines ...
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-yj7qys-win-ha-control-plane-lj769, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-f8rsw, container kube-proxy
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-yj7qys-win-ha-control-plane-zr9jz, container etcd
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-yj7qys-win-ha-control-plane-sw64k, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-qgbs5, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-yj7qys-win-ha-control-plane-sw64k, container kube-scheduler
STEP: Got error while iterating over activity logs for resource group capz-e2e-yj7qys-win-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000614675s
STEP: Dumping all the Cluster API resources in the "capz-e2e-yj7qys" namespace
STEP: Deleting all clusters in the capz-e2e-yj7qys namespace
STEP: Deleting cluster capz-e2e-yj7qys-win-ha
INFO: Waiting for the Cluster capz-e2e-yj7qys/capz-e2e-yj7qys-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-yj7qys-win-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-dnsqd, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-sslmb, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-f8rsw, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-8vgvv, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-yj7qys
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 31m0s on Ginkgo node 2 of 3

... skipping 12 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:490

INFO: "with a single control plane node and 1 node" started at Sun, 28 Nov 2021 19:48:04 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-f2a5dt" for hosting the cluster
Nov 28 19:48:04.503: INFO: starting to create namespace for hosting the "capz-e2e-f2a5dt" test spec
2021/11/28 19:48:04 failed trying to get namespace (capz-e2e-f2a5dt):namespaces "capz-e2e-f2a5dt" not found
INFO: Creating namespace capz-e2e-f2a5dt
INFO: Creating event watcher for namespace "capz-e2e-f2a5dt"
Nov 28 19:48:04.545: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-f2a5dt-aks
INFO: Creating the workload cluster with name "capz-e2e-f2a5dt-aks" using the "aks-multi-tenancy" template (Kubernetes v1.19.13, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 7 lines ...
machinepool.cluster.x-k8s.io/agentpool1 created
azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1128 19:48:31.983213   24378 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-6if4zr/events?resourceVersion=12012": dial tcp: lookup capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1128 19:49:13.399690   24378 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-6if4zr/events?resourceVersion=12012": dial tcp: lookup capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1128 19:50:06.848154   24378 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-6if4zr/events?resourceVersion=12012": dial tcp: lookup capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1128 19:50:43.263142   24378 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-6if4zr/events?resourceVersion=12012": dial tcp: lookup capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1128 19:51:36.555992   24378 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-6if4zr/events?resourceVersion=12012": dial tcp: lookup capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
Nov 28 19:51:45.638: INFO: Waiting for the first control plane machine managed by capz-e2e-f2a5dt/capz-e2e-f2a5dt-aks to be provisioned
STEP: Waiting for atleast one control plane node to exist
E1128 19:52:33.640965   24378 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-6if4zr/events?resourceVersion=12012": dial tcp: lookup capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1128 19:53:30.453569   24378 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-6if4zr/events?resourceVersion=12012": dial tcp: lookup capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1128 19:54:20.884816   24378 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-6if4zr/events?resourceVersion=12012": dial tcp: lookup capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1128 19:55:15.249675   24378 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-6if4zr/events?resourceVersion=12012": dial tcp: lookup capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1128 19:56:13.554747   24378 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-6if4zr/events?resourceVersion=12012": dial tcp: lookup capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1128 19:56:59.127877   24378 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-6if4zr/events?resourceVersion=12012": dial tcp: lookup capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1128 19:57:33.298888   24378 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-6if4zr/events?resourceVersion=12012": dial tcp: lookup capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1128 19:58:18.054881   24378 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-6if4zr/events?resourceVersion=12012": dial tcp: lookup capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1128 19:59:14.098317   24378 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-6if4zr/events?resourceVersion=12012": dial tcp: lookup capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1128 20:00:10.020972   24378 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-6if4zr/events?resourceVersion=12012": dial tcp: lookup capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1128 20:00:53.373020   24378 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-6if4zr/events?resourceVersion=12012": dial tcp: lookup capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1128 20:01:46.866787   24378 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-6if4zr/events?resourceVersion=12012": dial tcp: lookup capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1128 20:02:21.530609   24378 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-6if4zr/events?resourceVersion=12012": dial tcp: lookup capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1128 20:03:18.471860   24378 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-6if4zr/events?resourceVersion=12012": dial tcp: lookup capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be ready
Nov 28 20:03:46.303: INFO: Waiting for the first control plane machine managed by capz-e2e-f2a5dt/capz-e2e-f2a5dt-aks to be provisioned
STEP: Waiting for all control plane nodes to exist
INFO: Waiting for the machine deployments to be provisioned
INFO: Waiting for the machine pools to be provisioned
STEP: Waiting for the machine pool workload nodes to exist
... skipping 12 lines ...
STEP: Dumping logs from the "capz-e2e-f2a5dt-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-f2a5dt/capz-e2e-f2a5dt-aks logs
Nov 28 20:03:54.747: INFO: Collecting logs for node aks-agentpool1-49997399-vmss000000 in cluster capz-e2e-f2a5dt-aks in namespace capz-e2e-f2a5dt

Nov 28 20:03:54.834: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool0, cluster capz-e2e-f2a5dt/capz-e2e-f2a5dt-aks: [dialing public load balancer at capz-e2e-f2a5dt-aks-e2f498a6.hcp.westeurope.azmk8s.io: dial tcp: lookup capz-e2e-f2a5dt-aks-e2f498a6.hcp.westeurope.azmk8s.io on 10.63.240.10:53: no such host, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
Nov 28 20:03:55.322: INFO: Collecting logs for node aks-agentpool1-49997399-vmss000000 in cluster capz-e2e-f2a5dt-aks in namespace capz-e2e-f2a5dt

Nov 28 20:03:55.425: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool1, cluster capz-e2e-f2a5dt/capz-e2e-f2a5dt-aks: [dialing public load balancer at capz-e2e-f2a5dt-aks-e2f498a6.hcp.westeurope.azmk8s.io: dial tcp: lookup capz-e2e-f2a5dt-aks-e2f498a6.hcp.westeurope.azmk8s.io on 10.63.240.10:53: no such host, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
STEP: Dumping workload cluster capz-e2e-f2a5dt/capz-e2e-f2a5dt-aks kube-system pod logs
STEP: Fetching kube-system pod logs took 686.472682ms
STEP: Dumping workload cluster capz-e2e-f2a5dt/capz-e2e-f2a5dt-aks Azure activity log
STEP: Creating log watcher for controller kube-system/calico-typha-horizontal-autoscaler-599c7bb664-wp4n5, container autoscaler
STEP: Creating log watcher for controller kube-system/coredns-84d976c568-n9x8g, container coredns
STEP: Creating log watcher for controller kube-system/coredns-autoscaler-54d55c8b75-hf6tk, container autoscaler
... skipping 8 lines ...
STEP: Fetching activity logs took 482.265238ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-f2a5dt" namespace
STEP: Deleting all clusters in the capz-e2e-f2a5dt namespace
STEP: Deleting cluster capz-e2e-f2a5dt-aks
INFO: Waiting for the Cluster capz-e2e-f2a5dt/capz-e2e-f2a5dt-aks to be deleted
STEP: Waiting for cluster capz-e2e-f2a5dt-aks to be deleted
E1128 20:04:06.178992   24378 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-6if4zr/events?resourceVersion=12012": dial tcp: lookup capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 34 lines ...
E1128 20:30:25.126340   24378 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-6if4zr/events?resourceVersion=12012": dial tcp: lookup capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-f2a5dt
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1128 20:31:07.083724   24378 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-6if4zr/events?resourceVersion=12012": dial tcp: lookup capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1128 20:31:47.173291   24378 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-6if4zr/events?resourceVersion=12012": dial tcp: lookup capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1128 20:32:25.404534   24378 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-6if4zr/events?resourceVersion=12012": dial tcp: lookup capz-e2e-6if4zr-public-custom-vnet-d13f33d1.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "with a single control plane node and 1 node" ran for 44m39s on Ginkgo node 3 of 3


• [SLOW TEST:2678.962 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating an AKS cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:489
    with a single control plane node and 1 node
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:490
------------------------------
{"component":"entrypoint","file":"prow/entrypoint/run.go:165","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","severity":"error","time":"2021-11-28T20:39:38Z"}
++ early_exit_handler
++ '[' -n 158 ']'
++ kill -TERM 158
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 12 lines ...
Cleaning up after docker
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.
All sensitive variables are redacted
{"component":"entrypoint","file":"prow/entrypoint/run.go:255","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15m0s grace period","severity":"error","time":"2021-11-28T20:54:39Z"}
{"component":"entrypoint","error":"os: process already finished","file":"prow/entrypoint/run.go:257","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2021-11-28T20:54:39Z"}