Result: FAILURE
Tests: 1 failed / 1 succeeded
Started: 2021-11-29 06:39
Elapsed: 2h15m
Revision: main

Test Failures


capz-e2e Workload cluster creation Creating a GPU-enabled cluster with a single control plane node and 1 node (37m43s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\sGPU\-enabled\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:413
Timed out after 1200.000s.
Expected
    <bool>: false
to be true
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:76
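For context, the "Timed out after 1200.000s. Expected <bool>: false to be true" shape is what Gomega prints when an Eventually(...).Should(BeTrue()) assertion never sees its polled condition become true within the timeout. A minimal sketch of such an assertion, with hypothetical names (the real check lives at azure_gpu.go:76 and polls the GPU test workload):

package e2e_test

import (
	"testing"
	"time"

	. "github.com/onsi/gomega"
)

// gpuJobCompleted is a hypothetical stand-in for the real readiness check
// (for example: "has the GPU benchmark Job finished successfully?").
func gpuJobCompleted() bool { return false }

func TestGPUWorkloadEventuallyCompletes(t *testing.T) {
	g := NewWithT(t)
	// If the polled condition never returns true within the 20-minute
	// (1200s) timeout, Gomega fails with:
	//   Timed out after 1200.000s.
	//   Expected
	//       <bool>: false
	//   to be true
	g.Eventually(gpuJobCompleted, 20*time.Minute, 30*time.Second).Should(BeTrue())
}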
				
stdout/stderr from junit.e2e_suite.2.xml



1 Passed Test

15 Skipped Tests

Error lines from build-log.txt

... skipping 435 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:288

INFO: "With ipv6 worker node" started at Mon, 29 Nov 2021 06:48:26 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-510kq6" for hosting the cluster
Nov 29 06:48:26.204: INFO: starting to create namespace for hosting the "capz-e2e-510kq6" test spec
2021/11/29 06:48:26 failed trying to get namespace (capz-e2e-510kq6):namespaces "capz-e2e-510kq6" not found
INFO: Creating namespace capz-e2e-510kq6
INFO: Creating event watcher for namespace "capz-e2e-510kq6"
Nov 29 06:48:26.247: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-510kq6-ipv6
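The %!(EXTRA string=cluster-identity-secret) fragment above (it recurs before each "Cluster name is ..." line) is Go's fmt package flagging a surplus argument: a logging call passed "cluster-identity-secret" to a format string that has no verb for it. A minimal, hypothetical reproduction, not the test framework's actual logging call:

package main

import "fmt"

func main() {
	// One argument too many for the format string: fmt appends the
	// "%!(EXTRA ...)" marker instead of silently dropping the value.
	fmt.Printf("INFO: Creating cluster identity secret\n", "cluster-identity-secret")
	// Prints:
	// INFO: Creating cluster identity secret
	// %!(EXTRA string=cluster-identity-secret)
}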
INFO: Creating the workload cluster with name "capz-e2e-510kq6-ipv6" using the "ipv6" template (Kubernetes v1.22.4, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
STEP: Fetching activity logs took 630.19265ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-510kq6" namespace
STEP: Deleting all clusters in the capz-e2e-510kq6 namespace
STEP: Deleting cluster capz-e2e-510kq6-ipv6
INFO: Waiting for the Cluster capz-e2e-510kq6/capz-e2e-510kq6-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-510kq6-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-510kq6-ipv6-control-plane-wpmst, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-510kq6-ipv6-control-plane-wpmst, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-8hhhg, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-510kq6-ipv6-control-plane-lcq7s, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-wz6mm, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-47nhx, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-510kq6-ipv6-control-plane-gjjtw, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-vggch, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-510kq6-ipv6-control-plane-gjjtw, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-sx6pl, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-9nv2p, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-cq58v, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-510kq6-ipv6-control-plane-lcq7s, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-510kq6-ipv6-control-plane-gjjtw, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-8fbs7, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-2qxzw, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-510kq6-ipv6-control-plane-wpmst, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-510kq6-ipv6-control-plane-lcq7s, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-j9kdh, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-510kq6-ipv6-control-plane-gjjtw, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-510kq6-ipv6-control-plane-lcq7s, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-mvqkn, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-510kq6-ipv6-control-plane-wpmst, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-510kq6
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 17m5s on Ginkgo node 1 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:205

INFO: "With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes" started at Mon, 29 Nov 2021 06:48:25 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-3tc3ki" for hosting the cluster
Nov 29 06:48:25.552: INFO: starting to create namespace for hosting the "capz-e2e-3tc3ki" test spec
2021/11/29 06:48:25 failed trying to get namespace (capz-e2e-3tc3ki):namespaces "capz-e2e-3tc3ki" not found
INFO: Creating namespace capz-e2e-3tc3ki
INFO: Creating event watcher for namespace "capz-e2e-3tc3ki"
Nov 29 06:48:25.593: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-3tc3ki-ha
INFO: Creating the workload cluster with name "capz-e2e-3tc3ki-ha" using the "(default)" template (Kubernetes v1.22.4, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 67 lines ...
STEP: waiting for job default/curl-to-elb-jobsu9h6dz8zdn to be complete
Nov 29 07:00:05.895: INFO: waiting for job default/curl-to-elb-jobsu9h6dz8zdn to be complete
Nov 29 07:00:16.109: INFO: job default/curl-to-elb-jobsu9h6dz8zdn is complete, took 10.214246534s
STEP: connecting directly to the external LB service
Nov 29 07:00:16.109: INFO: starting attempts to connect directly to the external LB service
2021/11/29 07:00:16 [DEBUG] GET http://20.67.153.68
2021/11/29 07:00:46 [ERR] GET http://20.67.153.68 request failed: Get "http://20.67.153.68": dial tcp 20.67.153.68:80: i/o timeout
2021/11/29 07:00:46 [DEBUG] GET http://20.67.153.68: retrying in 1s (4 left)
Nov 29 07:00:54.596: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 29 07:00:54.596: INFO: starting to delete external LB service webhwn8e1-elb
Nov 29 07:00:54.747: INFO: starting to delete deployment webhwn8e1
Nov 29 07:00:54.860: INFO: starting to delete job curl-to-elb-jobsu9h6dz8zdn
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Nov 29 07:00:55.020: INFO: starting to create dev deployment namespace
2021/11/29 07:00:55 failed trying to get namespace (development):namespaces "development" not found
2021/11/29 07:00:55 namespace development does not exist, creating...
STEP: Creating production namespace
Nov 29 07:00:55.240: INFO: starting to create prod deployment namespace
2021/11/29 07:00:55 failed trying to get namespace (production):namespaces "production" not found
2021/11/29 07:00:55 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
Nov 29 07:00:55.459: INFO: starting to create frontend-prod deployments
Nov 29 07:00:55.570: INFO: starting to create frontend-dev deployments
Nov 29 07:00:55.687: INFO: starting to create backend deployments
Nov 29 07:00:55.800: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Nov 29 07:01:22.332: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.237.196 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 29 07:03:33.548: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
STEP: Applying a network policy to deny egress access in development namespace
Nov 29 07:03:33.936: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.237.196 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.237.196 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 29 07:07:55.977: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Nov 29 07:07:56.355: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.208.3 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 29 07:10:09.096: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
Nov 29 07:10:09.472: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.237.194 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.208.3 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 29 07:14:33.289: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Nov 29 07:14:33.672: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.237.196 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 29 07:16:46.124: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
Nov 29 07:16:46.515: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.237.196 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web-windowsoqt053 to be available
Nov 29 07:18:58.894: INFO: starting to wait for deployment to become available
Nov 29 07:19:59.689: INFO: Deployment default/web-windowsoqt053 is now available, took 1m0.794684666s
... skipping 51 lines ...
Nov 29 07:24:14.080: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-3tc3ki-ha-md-0-8pd94

Nov 29 07:24:14.472: INFO: INFO: Collecting logs for node 10.1.0.7 in cluster capz-e2e-3tc3ki-ha in namespace capz-e2e-3tc3ki

Nov 29 07:24:49.286: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-3tc3ki-ha-md-win-w6h8r

Failed to get logs for machine capz-e2e-3tc3ki-ha-md-win-64bbc45546-7gllf, cluster capz-e2e-3tc3ki/capz-e2e-3tc3ki-ha: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov 29 07:24:49.675: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster capz-e2e-3tc3ki-ha in namespace capz-e2e-3tc3ki

Nov 29 07:25:26.955: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-3tc3ki-ha-md-win-9k7x2

Failed to get logs for machine capz-e2e-3tc3ki-ha-md-win-64bbc45546-8wfxk, cluster capz-e2e-3tc3ki/capz-e2e-3tc3ki-ha: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster capz-e2e-3tc3ki/capz-e2e-3tc3ki-ha kube-system pod logs
STEP: Fetching kube-system pod logs took 901.686731ms
STEP: Creating log watcher for controller kube-system/calico-node-hfrr5, container calico-node
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-3tc3ki-ha-control-plane-wf9v5, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-node-windows-fwf24, container calico-node-felix
STEP: Creating log watcher for controller kube-system/kube-proxy-znk45, container kube-proxy
... skipping 22 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-wcjbn, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-qdsnf, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-windows-8qr9w, container calico-node-startup
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-3tc3ki-ha-control-plane-ns5qw, container etcd
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-3tc3ki-ha-control-plane-ns5qw, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-node-windows-8qr9w, container calico-node-felix
STEP: Got error while iterating over activity logs for resource group capz-e2e-3tc3ki-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000502027s
STEP: Dumping all the Cluster API resources in the "capz-e2e-3tc3ki" namespace
STEP: Deleting all clusters in the capz-e2e-3tc3ki namespace
STEP: Deleting cluster capz-e2e-3tc3ki-ha
INFO: Waiting for the Cluster capz-e2e-3tc3ki/capz-e2e-3tc3ki-ha to be deleted
STEP: Waiting for cluster capz-e2e-3tc3ki-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-3tc3ki-ha-control-plane-fw6rd, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-3tc3ki-ha-control-plane-wf9v5, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-3tc3ki-ha-control-plane-wf9v5, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-hfrr5, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-7nn8z, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-675r6, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-vr7zm, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-3tc3ki-ha-control-plane-fw6rd, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-wcjbn, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-qgttb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-3tc3ki-ha-control-plane-fw6rd, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-3tc3ki-ha-control-plane-ns5qw, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-fwf24, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-znk45, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-5kj8h, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-d8lpf, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-fwf24, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-qdsnf, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-3tc3ki-ha-control-plane-fw6rd, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-vk7ww, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-c6cj7, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-3tc3ki-ha-control-plane-ns5qw, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-3tc3ki-ha-control-plane-wf9v5, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-p8jnz, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-3tc3ki-ha-control-plane-wf9v5, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-3tc3ki-ha-control-plane-ns5qw, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-t7hqn, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-3tc3ki-ha-control-plane-ns5qw, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-3tc3ki
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes" ran for 48m19s on Ginkgo node 2 of 3

... skipping 8 lines ...
  with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:334

INFO: "with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes" started at Mon, 29 Nov 2021 07:05:31 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-hxz0tf" for hosting the cluster
Nov 29 07:05:31.504: INFO: starting to create namespace for hosting the "capz-e2e-hxz0tf" test spec
2021/11/29 07:05:31 failed trying to get namespace (capz-e2e-hxz0tf):namespaces "capz-e2e-hxz0tf" not found
INFO: Creating namespace capz-e2e-hxz0tf
INFO: Creating event watcher for namespace "capz-e2e-hxz0tf"
Nov 29 07:05:31.536: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-hxz0tf-vmss
INFO: Creating the workload cluster with name "capz-e2e-hxz0tf-vmss" using the "machine-pool" template (Kubernetes v1.22.4, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 60 lines ...
STEP: waiting for job default/curl-to-elb-jobvkx5ptrem9c to be complete
Nov 29 07:18:46.752: INFO: waiting for job default/curl-to-elb-jobvkx5ptrem9c to be complete
Nov 29 07:18:56.963: INFO: job default/curl-to-elb-jobvkx5ptrem9c is complete, took 10.210902922s
STEP: connecting directly to the external LB service
Nov 29 07:18:56.963: INFO: starting attempts to connect directly to the external LB service
2021/11/29 07:18:56 [DEBUG] GET http://20.67.142.198
2021/11/29 07:19:26 [ERR] GET http://20.67.142.198 request failed: Get "http://20.67.142.198": dial tcp 20.67.142.198:80: i/o timeout
2021/11/29 07:19:26 [DEBUG] GET http://20.67.142.198: retrying in 1s (4 left)
Nov 29 07:19:43.551: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 29 07:19:43.551: INFO: starting to delete external LB service webg3d09j-elb
Nov 29 07:19:43.675: INFO: starting to delete deployment webg3d09j
Nov 29 07:19:43.780: INFO: starting to delete job curl-to-elb-jobvkx5ptrem9c
... skipping 25 lines ...
STEP: waiting for job default/curl-to-elb-job8b4kx1ohahj to be complete
Nov 29 07:23:27.872: INFO: waiting for job default/curl-to-elb-job8b4kx1ohahj to be complete
Nov 29 07:23:38.081: INFO: job default/curl-to-elb-job8b4kx1ohahj is complete, took 10.209670758s
STEP: connecting directly to the external LB service
Nov 29 07:23:38.082: INFO: starting attempts to connect directly to the external LB service
2021/11/29 07:23:38 [DEBUG] GET http://20.105.82.85
2021/11/29 07:24:08 [ERR] GET http://20.105.82.85 request failed: Get "http://20.105.82.85": dial tcp 20.105.82.85:80: i/o timeout
2021/11/29 07:24:08 [DEBUG] GET http://20.105.82.85: retrying in 1s (4 left)
Nov 29 07:24:09.296: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 29 07:24:09.297: INFO: starting to delete external LB service web-windowsgbdbos-elb
Nov 29 07:24:09.648: INFO: starting to delete deployment web-windowsgbdbos
Nov 29 07:24:09.794: INFO: starting to delete job curl-to-elb-job8b4kx1ohahj
... skipping 33 lines ...
Nov 29 07:28:14.430: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-e2e-hxz0tf-vmss-mp-0

Nov 29 07:28:14.930: INFO: INFO: Collecting logs for node win-p-win000001 in cluster capz-e2e-hxz0tf-vmss in namespace capz-e2e-hxz0tf

Nov 29 07:28:28.325: INFO: INFO: Collecting boot logs for VMSS instance 1 of scale set capz-e2e-hxz0tf-vmss-mp-0

Failed to get logs for machine pool capz-e2e-hxz0tf-vmss-mp-0, cluster capz-e2e-hxz0tf/capz-e2e-hxz0tf-vmss: [[running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1], [running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1]]
Nov 29 07:28:28.760: INFO: INFO: Collecting logs for node win-p-win000000 in cluster capz-e2e-hxz0tf-vmss in namespace capz-e2e-hxz0tf

Nov 29 07:28:59.013: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set win-p-win

Nov 29 07:28:59.416: INFO: INFO: Collecting logs for node win-p-win000001 in cluster capz-e2e-hxz0tf-vmss in namespace capz-e2e-hxz0tf

Nov 29 07:29:25.428: INFO: INFO: Collecting boot logs for VMSS instance 1 of scale set win-p-win

Failed to get logs for machine pool capz-e2e-hxz0tf-vmss-mp-win, cluster capz-e2e-hxz0tf/capz-e2e-hxz0tf-vmss: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster capz-e2e-hxz0tf/capz-e2e-hxz0tf-vmss kube-system pod logs
STEP: Fetching kube-system pod logs took 1.020985533s
STEP: Creating log watcher for controller kube-system/calico-node-windows-5m5cv, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-hxz0tf-vmss-control-plane-c52xx, container kube-controller-manager
STEP: Dumping workload cluster capz-e2e-hxz0tf/capz-e2e-hxz0tf-vmss Azure activity log
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-6x9zd, container kube-proxy
... skipping 10 lines ...
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-nrw2z, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/calico-node-xmlgz, container calico-node
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-hxz0tf-vmss-control-plane-c52xx, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-7w7qt, container kube-proxy
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-hxz0tf-vmss-control-plane-c52xx, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-xft48, container kube-proxy
STEP: Got error while iterating over activity logs for resource group capz-e2e-hxz0tf-vmss: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001084686s
STEP: Dumping all the Cluster API resources in the "capz-e2e-hxz0tf" namespace
STEP: Deleting all clusters in the capz-e2e-hxz0tf namespace
STEP: Deleting cluster capz-e2e-hxz0tf-vmss
INFO: Waiting for the Cluster capz-e2e-hxz0tf/capz-e2e-hxz0tf-vmss to be deleted
STEP: Waiting for cluster capz-e2e-hxz0tf-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-xmlgz, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-ngkcq, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-rmrsm, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-xft48, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-hxz0tf
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes" ran for 33m2s on Ginkgo node 1 of 3

... skipping 10 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:144

INFO: "Creates a public management cluster in the same vnet" started at Mon, 29 Nov 2021 06:48:24 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-uinzkg" for hosting the cluster
Nov 29 06:48:24.996: INFO: starting to create namespace for hosting the "capz-e2e-uinzkg" test spec
2021/11/29 06:48:25 failed trying to get namespace (capz-e2e-uinzkg):namespaces "capz-e2e-uinzkg" not found
INFO: Creating namespace capz-e2e-uinzkg
INFO: Creating event watcher for namespace "capz-e2e-uinzkg"
Nov 29 06:48:25.055: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-uinzkg-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-uinzkg-public-custom-vnet-control-plane-4l2hr, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-uinzkg-public-custom-vnet-control-plane-4l2hr, container etcd
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-qlq8b, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-proxy-kpbf8, container kube-proxy
STEP: Dumping workload cluster capz-e2e-uinzkg/capz-e2e-uinzkg-public-custom-vnet Azure activity log
STEP: Creating log watcher for controller kube-system/kube-proxy-sqn44, container kube-proxy
STEP: Got error while iterating over activity logs for resource group capz-e2e-uinzkg-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000740015s
STEP: Dumping all the Cluster API resources in the "capz-e2e-uinzkg" namespace
STEP: Deleting all clusters in the capz-e2e-uinzkg namespace
STEP: Deleting cluster capz-e2e-uinzkg-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-uinzkg/capz-e2e-uinzkg-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-uinzkg-public-custom-vnet to be deleted
W1129 07:35:05.239918   24498 reflector.go:441] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I1129 07:35:36.526187   24498 trace.go:205] Trace[984042051]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (29-Nov-2021 07:35:06.524) (total time: 30001ms):
Trace[984042051]: [30.001578233s] [30.001578233s] END
E1129 07:35:36.526253   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-uinzkg/events?resourceVersion=9890": dial tcp 20.67.154.110:6443: i/o timeout
I1129 07:36:09.580071   24498 trace.go:205] Trace[1119276223]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (29-Nov-2021 07:35:39.579) (total time: 30000ms):
Trace[1119276223]: [30.000792755s] [30.000792755s] END
E1129 07:36:09.580132   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-uinzkg/events?resourceVersion=9890": dial tcp 20.67.154.110:6443: i/o timeout
I1129 07:36:45.160842   24498 trace.go:205] Trace[945176095]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (29-Nov-2021 07:36:15.160) (total time: 30000ms):
Trace[945176095]: [30.000759254s] [30.000759254s] END
E1129 07:36:45.160914   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-uinzkg/events?resourceVersion=9890": dial tcp 20.67.154.110:6443: i/o timeout
I1129 07:37:22.360621   24498 trace.go:205] Trace[1334317515]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (29-Nov-2021 07:36:52.359) (total time: 30001ms):
Trace[1334317515]: [30.001220268s] [30.001220268s] END
E1129 07:37:22.360684   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-uinzkg/events?resourceVersion=9890": dial tcp 20.67.154.110:6443: i/o timeout
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-uinzkg
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Nov 29 07:38:05.961: INFO: deleting an existing virtual network "custom-vnet"
I1129 07:38:07.425314   24498 trace.go:205] Trace[2103215152]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (29-Nov-2021 07:37:37.424) (total time: 30000ms):
Trace[2103215152]: [30.000922106s] [30.000922106s] END
E1129 07:38:07.425384   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-uinzkg/events?resourceVersion=9890": dial tcp 20.67.154.110:6443: i/o timeout
Nov 29 07:38:17.129: INFO: deleting an existing route table "node-routetable"
Nov 29 07:38:28.223: INFO: deleting an existing network security group "node-nsg"
Nov 29 07:38:39.139: INFO: deleting an existing network security group "control-plane-nsg"
E1129 07:38:49.373395   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-uinzkg/events?resourceVersion=9890": dial tcp: lookup capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 29 07:38:49.998: INFO: verifying the existing resource group "capz-e2e-uinzkg-public-custom-vnet" is empty
Nov 29 07:38:53.795: INFO: deleting the existing resource group "capz-e2e-uinzkg-public-custom-vnet"
E1129 07:39:32.293964   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-uinzkg/events?resourceVersion=9890": dial tcp: lookup capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1129 07:40:23.081278   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-uinzkg/events?resourceVersion=9890": dial tcp: lookup capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 52m32s on Ginkgo node 3 of 3


• [SLOW TEST:3152.476 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a 1 control plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:455

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Mon, 29 Nov 2021 07:38:33 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-yei0f9" for hosting the cluster
Nov 29 07:38:33.667: INFO: starting to create namespace for hosting the "capz-e2e-yei0f9" test spec
2021/11/29 07:38:33 failed trying to get namespace (capz-e2e-yei0f9):namespaces "capz-e2e-yei0f9" not found
INFO: Creating namespace capz-e2e-yei0f9
INFO: Creating event watcher for namespace "capz-e2e-yei0f9"
Nov 29 07:38:33.705: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-yei0f9-oot
INFO: Creating the workload cluster with name "capz-e2e-yei0f9-oot" using the "external-cloud-provider" template (Kubernetes v1.22.4, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 98 lines ...
STEP: Fetching activity logs took 558.966438ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-yei0f9" namespace
STEP: Deleting all clusters in the capz-e2e-yei0f9 namespace
STEP: Deleting cluster capz-e2e-yei0f9-oot
INFO: Waiting for the Cluster capz-e2e-yei0f9/capz-e2e-yei0f9-oot to be deleted
STEP: Waiting for cluster capz-e2e-yei0f9-oot to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-tvcjq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-r6qld, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-zwpm6, container cloud-node-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-yei0f9
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 20m12s on Ginkgo node 1 of 3

... skipping 10 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:490

INFO: "with a single control plane node and 1 node" started at Mon, 29 Nov 2021 07:40:57 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-djfuvs" for hosting the cluster
Nov 29 07:40:57.474: INFO: starting to create namespace for hosting the "capz-e2e-djfuvs" test spec
2021/11/29 07:40:57 failed trying to get namespace (capz-e2e-djfuvs):namespaces "capz-e2e-djfuvs" not found
INFO: Creating namespace capz-e2e-djfuvs
INFO: Creating event watcher for namespace "capz-e2e-djfuvs"
Nov 29 07:40:57.508: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-djfuvs-aks
INFO: Creating the workload cluster with name "capz-e2e-djfuvs-aks" using the "aks-multi-tenancy" template (Kubernetes v1.19.13, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 7 lines ...
machinepool.cluster.x-k8s.io/agentpool1 created
azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1129 07:41:21.653870   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-uinzkg/events?resourceVersion=9890": dial tcp: lookup capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1129 07:42:12.748058   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-uinzkg/events?resourceVersion=9890": dial tcp: lookup capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1129 07:43:10.335663   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-uinzkg/events?resourceVersion=9890": dial tcp: lookup capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1129 07:43:42.811377   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-uinzkg/events?resourceVersion=9890": dial tcp: lookup capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1129 07:44:40.781994   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-uinzkg/events?resourceVersion=9890": dial tcp: lookup capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
Nov 29 07:45:08.829: INFO: Waiting for the first control plane machine managed by capz-e2e-djfuvs/capz-e2e-djfuvs-aks to be provisioned
STEP: Waiting for atleast one control plane node to exist
INFO: Waiting for control plane to be ready
Nov 29 07:45:18.867: INFO: Waiting for the first control plane machine managed by capz-e2e-djfuvs/capz-e2e-djfuvs-aks to be provisioned
STEP: Waiting for all control plane nodes to exist
... skipping 5 lines ...
Nov 29 07:45:25.560: INFO: want 2 instances, found 2 ready and 2 available. generation: 1, observedGeneration: 1
Nov 29 07:45:25.672: INFO: mapping nsenter pods to hostnames for host-by-host execution
Nov 29 07:45:25.672: INFO: found host aks-agentpool0-41073030-vmss000000 with pod nsenter-xnm2z
Nov 29 07:45:25.672: INFO: found host aks-agentpool1-41073030-vmss000000 with pod nsenter-r6l55
STEP: checking that time synchronization is healthy on aks-agentpool1-41073030-vmss000000
STEP: checking that time synchronization is healthy on aks-agentpool1-41073030-vmss000000
E1129 07:45:27.394369   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-uinzkg/events?resourceVersion=9890": dial tcp: lookup capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: checking that time synchronization is healthy on aks-agentpool1-41073030-vmss000000
STEP: checking that time synchronization is healthy on aks-agentpool1-41073030-vmss000000
E1129 07:46:12.194838   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-uinzkg/events?resourceVersion=9890": dial tcp: lookup capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Dumping logs from the "capz-e2e-djfuvs-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-djfuvs/capz-e2e-djfuvs-aks logs
Nov 29 07:46:34.034: INFO: INFO: Collecting logs for node aks-agentpool1-41073030-vmss000000 in cluster capz-e2e-djfuvs-aks in namespace capz-e2e-djfuvs

E1129 07:47:06.315918   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-uinzkg/events?resourceVersion=9890": dial tcp: lookup capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1129 07:48:03.532354   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-uinzkg/events?resourceVersion=9890": dial tcp: lookup capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1129 07:48:38.475575   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-uinzkg/events?resourceVersion=9890": dial tcp: lookup capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 29 07:48:44.403: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool0, cluster capz-e2e-djfuvs/capz-e2e-djfuvs-aks: [dialing public load balancer at capz-e2e-djfuvs-aks-e2c4971d.hcp.northeurope.azmk8s.io: dial tcp 52.158.116.41:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
Nov 29 07:48:45.226: INFO: INFO: Collecting logs for node aks-agentpool1-41073030-vmss000000 in cluster capz-e2e-djfuvs-aks in namespace capz-e2e-djfuvs

E1129 07:49:14.233649   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-uinzkg/events?resourceVersion=9890": dial tcp: lookup capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1129 07:49:59.558307   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-uinzkg/events?resourceVersion=9890": dial tcp: lookup capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1129 07:50:54.978605   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-uinzkg/events?resourceVersion=9890": dial tcp: lookup capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 29 07:50:55.474: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool1, cluster capz-e2e-djfuvs/capz-e2e-djfuvs-aks: [dialing public load balancer at capz-e2e-djfuvs-aks-e2c4971d.hcp.northeurope.azmk8s.io: dial tcp 52.158.116.41:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
STEP: Dumping workload cluster capz-e2e-djfuvs/capz-e2e-djfuvs-aks kube-system pod logs
STEP: Fetching kube-system pod logs took 1.39902671s
STEP: Dumping workload cluster capz-e2e-djfuvs/capz-e2e-djfuvs-aks Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-84d976c568-qj5jm, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-dv56c, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-autoscaler-54d55c8b75-ndf7m, container autoscaler
... skipping 8 lines ...
STEP: Fetching activity logs took 1.068473553s
STEP: Dumping all the Cluster API resources in the "capz-e2e-djfuvs" namespace
STEP: Deleting all clusters in the capz-e2e-djfuvs namespace
STEP: Deleting cluster capz-e2e-djfuvs-aks
INFO: Waiting for the Cluster capz-e2e-djfuvs/capz-e2e-djfuvs-aks to be deleted
STEP: Waiting for cluster capz-e2e-djfuvs-aks to be deleted
E1129 07:51:26.555483   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-uinzkg/events?resourceVersion=9890": dial tcp: lookup capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1129 07:52:16.006264   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-uinzkg/events?resourceVersion=9890": dial tcp: lookup capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1129 07:53:05.788512   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-uinzkg/events?resourceVersion=9890": dial tcp: lookup capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1129 07:53:52.453460   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-uinzkg/events?resourceVersion=9890": dial tcp: lookup capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 20 lines ...
E1129 08:10:42.174316   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-uinzkg/events?resourceVersion=9890": dial tcp: lookup capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-djfuvs
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1129 08:11:25.979946   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-uinzkg/events?resourceVersion=9890": dial tcp: lookup capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 3 lines ...
E1129 08:13:58.364219   24498 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-uinzkg/events?resourceVersion=9890": dial tcp: lookup capz-e2e-uinzkg-public-custom-vnet-e57786eb.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
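
The repeated "Failed to watch *v1.Event" lines above come from a client-go reflector that keeps re-listing Events for the workload cluster after its apiserver DNS name stops resolving during teardown; the reflector backs off and retries, so the same error recurs until the watcher is stopped. A minimal sketch of such a namespace-scoped event watcher follows; the kubeconfig path and timeouts are illustrative and not taken from the test suite.

```go
// Sketch (not the suite's code): a namespace-scoped Events informer whose
// reflector emits "Failed to watch *v1.Event" lines like those above whenever
// the target apiserver becomes unreachable, then retries with backoff.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path for the workload cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/workload.kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Informer factory scoped to the test namespace. Its reflector lists and
	// watches v1.Event; list/watch failures are logged and retried, which is
	// what floods the log once DNS resolution for the apiserver fails.
	factory := informers.NewSharedInformerFactoryWithOptions(cs, 30*time.Second,
		informers.WithNamespace("capz-e2e-uinzkg"))
	factory.Core().V1().Events().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			e := obj.(*corev1.Event)
			fmt.Printf("%s %s/%s: %s\n", e.Type, e.InvolvedObject.Kind, e.InvolvedObject.Name, e.Message)
		},
	})

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel()
	factory.Start(ctx.Done())
	cache.WaitForCacheSync(ctx.Done(), factory.Core().V1().Events().Informer().HasSynced)
	<-ctx.Done()
}
```
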
INFO: "with a single control plane node and 1 node" ran for 33m22s on Ginkgo node 3 of 3


• Failure [2002.431 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating an AKS cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:489
    with a single control plane node and 1 node [It]
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:490

    Timed out after 68.293s.
    Expected success, but got an error:
        <*errors.withStack | 0xc0007df140>: {
            error: <errors.aggregate | len:4, cap:4>[
                <*errors.errorString | 0xc000eb2b90>{
                    s: "failed to nsenter host aks-agentpool1-41073030-vmss000000, error: 'error dialing backend: dial tcp 10.240.0.4:10250: i/o timeout', stdout:  ''",
                },
                <*errors.errorString | 0xc000eb2bf0>{
                    s: "failed to nsenter host aks-agentpool1-41073030-vmss000000, error: 'error dialing backend: dial tcp 10.240.0.4:10250: i/o timeout', stdout:  ''",
                },
                <*errors.errorString | 0xc000eb2c60>{
                    s: "failed to nsenter host aks-agentpool1-41073030-vmss000000, error: 'error dialing backend: dial tcp 10.240.0.4:10250: i/o timeout', stdout:  ''",
                },
                <*errors.errorString | 0xc000eb2cc0>{
                    s: "failed to nsenter host aks-agentpool1-41073030-vmss000000, error: 'error dialing backend: dial tcp 10.240.0.4:10250: i/o timeout', stdout:  ''",
                },
            ],
            stack: [0x1908062, 0x1907ff4, 0x190847e, 0x1d0ebf8, 0x4e5e87, 0x4e5359, 0x8267aa, 0x824a4f, 0x82513b, 0x824794, 0x1cf47a2, 0x1d16d2c, 0x8149e3, 0x82256a, 0x1d173bb, 0x7fd4c3, 0x7fd0dc, 0x7fc407, 0x8033af, 0x802a52, 0x812351, 0x811e67, 0x811657, 0x813d66, 0x821bf8, 0x821936, 0x1cffe9a, 0x52a40f, 0x474781],
        }
        failed to nsenter host aks-agentpool1-41073030-vmss000000, error: 'error dialing backend: dial tcp 10.240.0.4:10250: i/o timeout', stdout:  ''

    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_timesync.go:244

    Full Stack Trace
    sigs.k8s.io/cluster-api-provider-azure/test/e2e.AzureDaemonsetTimeSyncSpec(0x2596940, 0xc0000660d0, 0xc000e2ccc8)
    	/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_timesync.go:244 +0x1242
... skipping 38 lines ...
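
For context on the failure dump above: the "Timed out ... Expected success, but got an error" shape together with an errors.aggregate of per-node nsenter failures is what a Gomega Eventually(...).Should(Succeed()) assertion prints when every poll returns an aggregated error. The sketch below reproduces that shape only; the helper names, error wording, and timeouts are illustrative and not taken from azure_timesync.go.

```go
// Sketch (illustrative, not the suite's implementation): an Eventually-style
// wait that aggregates per-node errors and fails with an errors.aggregate
// when no poll succeeds before the timeout.
package e2esketch

import (
	"fmt"
	"time"

	. "github.com/onsi/gomega"
	kerrors "k8s.io/apimachinery/pkg/util/errors"
)

// waitForTimeSync polls check(node) for every node and fails the spec if the
// aggregated error never clears within the timeout.
func waitForTimeSync(nodes []string, check func(node string) error) {
	Eventually(func() error {
		var errs []error
		for _, n := range nodes {
			if err := check(n); err != nil {
				// e.g. "failed to nsenter host <node>, error: '...'"
				errs = append(errs, fmt.Errorf("failed to nsenter host %s, error: '%v'", n, err))
			}
		}
		// NewAggregate returns nil when errs is empty, so one clean poll ends the
		// wait; otherwise the aggregate is what appears in the failure dump above.
		return kerrors.NewAggregate(errs)
	}, 60*time.Second, 5*time.Second).Should(Succeed(), "waiting for time sync on all nodes")
}
```
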
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:413

INFO: "with a single control plane node and 1 node" started at Mon, 29 Nov 2021 07:36:44 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-cfg7c0" for hosting the cluster
Nov 29 07:36:44.644: INFO: starting to create namespace for hosting the "capz-e2e-cfg7c0" test spec
2021/11/29 07:36:44 failed trying to get namespace (capz-e2e-cfg7c0):namespaces "capz-e2e-cfg7c0" not found
INFO: Creating namespace capz-e2e-cfg7c0
INFO: Creating event watcher for namespace "capz-e2e-cfg7c0"
Nov 29 07:36:44.689: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-cfg7c0-gpu
INFO: Creating the workload cluster with name "capz-e2e-cfg7c0-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.4, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 53 lines ...
STEP: Fetching activity logs took 1.135280281s
STEP: Dumping all the Cluster API resources in the "capz-e2e-cfg7c0" namespace
STEP: Deleting all clusters in the capz-e2e-cfg7c0 namespace
STEP: Deleting cluster capz-e2e-cfg7c0-gpu
INFO: Waiting for the Cluster capz-e2e-cfg7c0/capz-e2e-cfg7c0-gpu to be deleted
STEP: Waiting for cluster capz-e2e-cfg7c0-gpu to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-62qln, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-mxg8t, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-cfg7c0
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and 1 node" ran for 37m44s on Ginkgo node 2 of 3

... skipping 59 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:532

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Mon, 29 Nov 2021 07:58:45 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-ecql7i" for hosting the cluster
Nov 29 07:58:45.356: INFO: starting to create namespace for hosting the "capz-e2e-ecql7i" test spec
2021/11/29 07:58:45 failed trying to get namespace (capz-e2e-ecql7i):namespaces "capz-e2e-ecql7i" not found
INFO: Creating namespace capz-e2e-ecql7i
INFO: Creating event watcher for namespace "capz-e2e-ecql7i"
Nov 29 07:58:45.399: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-ecql7i-win-ha
INFO: Creating the workload cluster with name "capz-e2e-ecql7i-win-ha" using the "windows" template (Kubernetes v1.22.4, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 55 lines ...
STEP: waiting for job default/curl-to-elb-jobnpni66rx2qh to be complete
Nov 29 08:09:51.017: INFO: waiting for job default/curl-to-elb-jobnpni66rx2qh to be complete
Nov 29 08:10:01.228: INFO: job default/curl-to-elb-jobnpni66rx2qh is complete, took 10.21110402s
STEP: connecting directly to the external LB service
Nov 29 08:10:01.228: INFO: starting attempts to connect directly to the external LB service
2021/11/29 08:10:01 [DEBUG] GET http://40.127.229.51
2021/11/29 08:10:31 [ERR] GET http://40.127.229.51 request failed: Get "http://40.127.229.51": dial tcp 40.127.229.51:80: i/o timeout
2021/11/29 08:10:31 [DEBUG] GET http://40.127.229.51: retrying in 1s (4 left)
Nov 29 08:10:35.459: INFO: successfully connected to the external LB service
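
The "[DEBUG] GET ... retrying in 1s (4 left)" lines above match the default logger format of hashicorp/go-retryablehttp, so the external-LB probe is most likely a retrying GET along these lines; the retry budget and timeouts below are assumptions for illustration.

```go
// Sketch of the external-LB connectivity probe implied by the retrying GET
// output above, using hashicorp/go-retryablehttp. The URL is the LB IP from
// the log; retry counts and timeouts are illustrative.
package main

import (
	"fmt"
	"time"

	retryablehttp "github.com/hashicorp/go-retryablehttp"
)

func main() {
	client := retryablehttp.NewClient()
	client.RetryMax = 4                          // yields "(4 left)" on the first retry
	client.RetryWaitMin = 1 * time.Second        // yields "retrying in 1s"
	client.RetryWaitMax = 30 * time.Second
	client.HTTPClient.Timeout = 30 * time.Second // each attempt can hit an i/o timeout, as above

	resp, err := client.Get("http://40.127.229.51")
	if err != nil {
		fmt.Println("failed to connect to the external LB service:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("successfully connected to the external LB service:", resp.Status)
}
```
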
STEP: deleting the test resources
Nov 29 08:10:35.459: INFO: starting to delete external LB service web2o2kuy-elb
Nov 29 08:10:35.625: INFO: starting to delete deployment web2o2kuy
Nov 29 08:10:35.734: INFO: starting to delete job curl-to-elb-jobnpni66rx2qh
... skipping 85 lines ...
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-ecql7i-win-ha-control-plane-p4x78, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-p8bml, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-w9qjj, container kube-proxy
STEP: Dumping workload cluster capz-e2e-ecql7i/capz-e2e-ecql7i-win-ha Azure activity log
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-ecql7i-win-ha-control-plane-7r7jj, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-55jvg, container kube-flannel
STEP: Got error while iterating over activity logs for resource group capz-e2e-ecql7i-win-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001253695s
STEP: Dumping all the Cluster API resources in the "capz-e2e-ecql7i" namespace
STEP: Deleting all clusters in the capz-e2e-ecql7i namespace
STEP: Deleting cluster capz-e2e-ecql7i-win-ha
INFO: Waiting for the Cluster capz-e2e-ecql7i/capz-e2e-ecql7i-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-ecql7i-win-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-lknr9, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-m45hs, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-hv9cq, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-69g6n, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-ecql7i-win-ha-control-plane-7r7jj, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-ecql7i-win-ha-control-plane-7r7jj, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-ecql7i-win-ha-control-plane-7r7jj, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-2jvmw, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-brr4w, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-pp8xp, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-7d8nx, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-ecql7i-win-ha-control-plane-7r7jj, container kube-apiserver: http2: client connection lost
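
The repeated "Got error while streaming logs ... http2: client connection lost" lines are expected once the cluster's nodes are deleted underneath the log collectors; following a container log with client-go looks roughly like the sketch below (kubeconfig path and output handling are illustrative, the pod and container names are taken from the log above).

```go
// Sketch (illustrative): following a container's log stream with client-go.
// The copy loop returns an error such as "http2: client connection lost" when
// the node or apiserver goes away mid-stream, as the STEP lines above report.
package main

import (
	"context"
	"fmt"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/workload.kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	req := cs.CoreV1().Pods("kube-system").GetLogs("kube-proxy-2jvmw", &corev1.PodLogOptions{
		Container: "kube-proxy",
		Follow:    true, // keep streaming until the connection drops or the context ends
	})
	stream, err := req.Stream(context.Background())
	if err != nil {
		panic(err)
	}
	defer stream.Close()

	// io.Copy surfaces the transport error once the underlying connection is torn down.
	if _, err := io.Copy(os.Stdout, stream); err != nil {
		fmt.Println("Got error while streaming logs:", err)
	}
}
```
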
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-ecql7i
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 32m7s on Ginkgo node 1 of 3

... skipping 3 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating a Windows Enabled cluster with dockershim
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:530
    With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:532
------------------------------
{"component":"entrypoint","file":"prow/entrypoint/run.go:165","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","severity":"error","time":"2021-11-29T08:39:39Z"}
++ early_exit_handler
++ '[' -n 162 ']'
++ kill -TERM 162
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 12 lines ...
Cleaning up after docker
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Stopping Docker: dockerProgram process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.
All sensitive variables are redacted
{"component":"entrypoint","file":"prow/entrypoint/run.go:255","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15m0s grace period","severity":"error","time":"2021-11-29T08:54:39Z"}
{"component":"entrypoint","error":"os: process already finished","file":"prow/entrypoint/run.go:257","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2021-11-29T08:54:39Z"}