Result: FAILURE
Tests: 1 failed / 8 succeeded
Started: 2021-12-01 06:40
Elapsed: 1h49m
Revision: main

Test Failures


capz-e2e Workload cluster creation Creating a GPU-enabled cluster with a single control plane node and 1 node (36m47s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\sGPU\-enabled\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:413
Timed out after 1200.003s.
Expected
    <bool>: false
to be true
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:76
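This is the standard Gomega Eventually timeout signature: a condition polled for 20 minutes (1200s) never returned true. A minimal sketch of an assertion with this shape, assuming the suite uses Gomega's Eventually (the polled condition and all names are illustrative, not the actual code at azure_gpu.go:76):

package e2e_test

import (
	"testing"
	"time"

	. "github.com/onsi/gomega"
)

// A poll of this shape reports "Timed out after 1200.003s. Expected
// <bool>: false to be true" when the condition never becomes true.
func TestEventuallyTimeoutShape(t *testing.T) {
	g := NewWithT(t)
	gpuWorkloadSucceeded := func() bool {
		// The real test presumably checks that a GPU workload completed
		// on the nvidia-gpu node; always-false here forces the timeout.
		return false
	}
	g.Eventually(gpuWorkloadSucceeded, 20*time.Minute, 10*time.Second).Should(BeTrue())
}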
				
Full stdout/stderr: junit.e2e_suite.2.xml



8 Passed Tests (collapsed)

15 Skipped Tests (collapsed)

Error lines from build-log.txt

... skipping 433 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:288

INFO: "With ipv6 worker node" started at Wed, 01 Dec 2021 06:48:23 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-139w7p" for hosting the cluster
Dec  1 06:48:23.257: INFO: starting to create namespace for hosting the "capz-e2e-139w7p" test spec
2021/12/01 06:48:23 failed trying to get namespace (capz-e2e-139w7p):namespaces "capz-e2e-139w7p" not found
INFO: Creating namespace capz-e2e-139w7p
INFO: Creating event watcher for namespace "capz-e2e-139w7p"
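The "failed trying to get namespace ... not found" line followed by "Creating namespace" is a get-then-create flow. A minimal client-go sketch of that pattern, as an assumption about what the suite does here (the function name is illustrative):

package e2e

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureNamespace mirrors the log flow above: Get, and if the API server
// answers NotFound, Create the namespace.
func ensureNamespace(ctx context.Context, c kubernetes.Interface, name string) (*corev1.Namespace, error) {
	ns, err := c.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
	if err == nil {
		return ns, nil
	}
	if !apierrors.IsNotFound(err) {
		return nil, err
	}
	return c.CoreV1().Namespaces().Create(ctx,
		&corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: name}},
		metav1.CreateOptions{})
}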
Dec  1 06:48:23.323: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-139w7p-ipv6
INFO: Creating the workload cluster with name "capz-e2e-139w7p-ipv6" using the "ipv6" template (Kubernetes v1.22.4, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
STEP: Fetching activity logs took 774.717031ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-139w7p" namespace
STEP: Deleting all clusters in the capz-e2e-139w7p namespace
STEP: Deleting cluster capz-e2e-139w7p-ipv6
INFO: Waiting for the Cluster capz-e2e-139w7p/capz-e2e-139w7p-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-139w7p-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-139w7p-ipv6-control-plane-4w58p, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-2ztq9, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-n8b8s, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-139w7p-ipv6-control-plane-4w58p, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-139w7p-ipv6-control-plane-4n22h, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-qsvpp, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-139w7p-ipv6-control-plane-4n22h, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-dpr22, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-h75xv, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-139w7p-ipv6-control-plane-4n22h, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-lx4vf, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-139w7p-ipv6-control-plane-4n22h, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-139w7p-ipv6-control-plane-4w58p, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-z8979, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-139w7p-ipv6-control-plane-4w58p, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-7b9rh, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-xvdlb, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-139w7p
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 17m59s on Ginkgo node 3 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:205

INFO: "With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes" started at Wed, 01 Dec 2021 06:48:21 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-f6k14r" for hosting the cluster
Dec  1 06:48:21.995: INFO: starting to create namespace for hosting the "capz-e2e-f6k14r" test spec
2021/12/01 06:48:22 failed trying to get namespace (capz-e2e-f6k14r):namespaces "capz-e2e-f6k14r" not found
INFO: Creating namespace capz-e2e-f6k14r
INFO: Creating event watcher for namespace "capz-e2e-f6k14r"
Dec  1 06:48:22.033: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-f6k14r-ha
INFO: Creating the workload cluster with name "capz-e2e-f6k14r-ha" using the "(default)" template (Kubernetes v1.22.4, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 75 lines ...
Dec  1 06:59:14.805: INFO: starting to delete external LB service webavrfm3-elb
Dec  1 06:59:14.880: INFO: starting to delete deployment webavrfm3
Dec  1 06:59:14.916: INFO: starting to delete job curl-to-elb-joba6hcnzsv2lq
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Dec  1 06:59:14.997: INFO: starting to create dev deployment namespace
2021/12/01 06:59:15 failed trying to get namespace (development):namespaces "development" not found
2021/12/01 06:59:15 namespace development does not exist, creating...
STEP: Creating production namespace
Dec  1 06:59:15.069: INFO: starting to create prod deployment namespace
2021/12/01 06:59:15 failed trying to get namespace (production):namespaces "production" not found
2021/12/01 06:59:15 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
Dec  1 06:59:15.137: INFO: starting to create frontend-prod deployments
Dec  1 06:59:15.174: INFO: starting to create frontend-dev deployments
Dec  1 06:59:15.213: INFO: starting to create backend deployments
Dec  1 06:59:15.259: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Dec  1 06:59:38.319: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.226.133 port 80: Connection timed out
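The connection timeout is the expected result here: once a deny-ingress policy selects the backend pods, inbound connections hang and curl exits with (7). A sketch of a NetworkPolicy with that effect, built with the Kubernetes Go types (the policy name, namespace, and labels are taken from the log lines above; the rest is an assumption about the actual manifest):

package e2e

import (
	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// backendDenyIngress selects the app=webapp,role=backend pods and lists
// the Ingress policy type with no ingress rules, denying all inbound
// traffic to those pods.
func backendDenyIngress() *networkingv1.NetworkPolicy {
	return &networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "backend-deny-ingress",
			Namespace: "development",
		},
		Spec: networkingv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "webapp", "role": "backend"},
			},
			PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
		},
	}
}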

STEP: Cleaning up after ourselves
Dec  1 07:01:49.520: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
STEP: Applying a network policy to deny egress access in development namespace
Dec  1 07:01:49.677: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.226.133 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.226.133 port 80: Connection timed out

STEP: Cleaning up after ourselves
Dec  1 07:06:12.083: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Dec  1 07:06:12.270: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.220.130 port 80: Connection timed out

STEP: Cleaning up after ourselves
Dec  1 07:08:23.156: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
Dec  1 07:08:23.301: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.226.131 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.220.130 port 80: Connection timed out

STEP: Cleaning up after ourselves
Dec  1 07:12:45.301: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Dec  1 07:12:45.453: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.226.133 port 80: Connection timed out

STEP: Cleaning up after ourselves
Dec  1 07:14:55.953: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
Dec  1 07:14:56.129: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.226.133 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web-windowsvpze5k to be available
Dec  1 07:17:08.037: INFO: starting to wait for deployment to become available
Dec  1 07:17:58.232: INFO: Deployment default/web-windowsvpze5k is now available, took 50.19478472s
... skipping 51 lines ...
Dec  1 07:21:49.839: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-f6k14r-ha-md-0-lwmd6

Dec  1 07:21:50.395: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster capz-e2e-f6k14r-ha in namespace capz-e2e-f6k14r

Dec  1 07:22:11.505: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-f6k14r-ha-md-win-62f6r

Failed to get logs for machine capz-e2e-f6k14r-ha-md-win-96b466967-cjntv, cluster capz-e2e-f6k14r/capz-e2e-f6k14r-ha: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Dec  1 07:22:11.793: INFO: INFO: Collecting logs for node 10.1.0.7 in cluster capz-e2e-f6k14r-ha in namespace capz-e2e-f6k14r

Dec  1 07:22:37.423: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-f6k14r-ha-md-win-d9br2

Failed to get logs for machine capz-e2e-f6k14r-ha-md-win-96b466967-zd2hq, cluster capz-e2e-f6k14r/capz-e2e-f6k14r-ha: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster capz-e2e-f6k14r/capz-e2e-f6k14r-ha kube-system pod logs
STEP: Creating log watcher for controller kube-system/calico-node-9f68g, container calico-node
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-f6k14r-ha-control-plane-tzwkj, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-f6k14r-ha-control-plane-fwpvw, container kube-apiserver
STEP: Fetching kube-system pod logs took 284.784588ms
STEP: Dumping workload cluster capz-e2e-f6k14r/capz-e2e-f6k14r-ha Azure activity log
... skipping 22 lines ...
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-f6k14r-ha-control-plane-fwpvw, container etcd
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-f6k14r-ha-control-plane-fwpvw, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-node-qrzzw, container calico-node
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-f6k14r-ha-control-plane-tzwkj, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-dm6zl, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-cx6gp, container kube-proxy
STEP: Got error while iterating over activity logs for resource group capz-e2e-f6k14r-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001274326s
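The 30.001s duration paired with "context deadline exceeded" suggests the activity-log dump pages results under a roughly 30-second context budget and stops when it expires. A sketch of that bounded-pagination pattern, assuming a timeout of that size (fetchPage is a hypothetical helper standing in for the insights SDK's paging call):

package e2e

import (
	"context"
	"errors"
	"log"
	"time"
)

// dumpActivityLogs iterates pages until done or the 30s budget runs out;
// an expired context surfaces as "context deadline exceeded", matching
// the STEP lines above.
func dumpActivityLogs(parent context.Context, fetchPage func(context.Context) (bool, error)) {
	ctx, cancel := context.WithTimeout(parent, 30*time.Second)
	defer cancel()
	for {
		done, err := fetchPage(ctx)
		if errors.Is(err, context.DeadlineExceeded) {
			log.Printf("Got error while iterating over activity logs: %v", err)
			return
		}
		if err != nil || done {
			return
		}
	}
}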
STEP: Dumping all the Cluster API resources in the "capz-e2e-f6k14r" namespace
STEP: Deleting all clusters in the capz-e2e-f6k14r namespace
STEP: Deleting cluster capz-e2e-f6k14r-ha
INFO: Waiting for the Cluster capz-e2e-f6k14r/capz-e2e-f6k14r-ha to be deleted
STEP: Waiting for cluster capz-e2e-f6k14r-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-nggxj, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-q2ktr, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-xwvww, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-5684v, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-z7clr, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-ncsfp, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-rcdwc, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-knjp7, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-gjhcs, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-f6k14r-ha-control-plane-tzwkj, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-f6k14r-ha-control-plane-fwpvw, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-kcngm, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-f6k14r-ha-control-plane-tzwkj, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-qrzzw, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-knjp7, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-76gbq, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-f6k14r-ha-control-plane-fwpvw, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-f6k14r-ha-control-plane-tzwkj, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-f6k14r-ha-control-plane-tzwkj, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-f6k14r-ha-control-plane-6bssc, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-dm6zl, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-f6k14r-ha-control-plane-6bssc, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-f6k14r-ha-control-plane-fwpvw, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-ts87m, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-cx6gp, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-f6k14r-ha-control-plane-fwpvw, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-xwvww, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-f6k14r-ha-control-plane-6bssc, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-f6k14r-ha-control-plane-6bssc, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-f6k14r
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes" ran for 44m18s on Ginkgo node 2 of 3

... skipping 8 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:144

INFO: "Creates a public management cluster in the same vnet" started at Wed, 01 Dec 2021 06:48:21 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-oix5nm" for hosting the cluster
Dec  1 06:48:21.269: INFO: starting to create namespace for hosting the "capz-e2e-oix5nm" test spec
2021/12/01 06:48:21 failed trying to get namespace (capz-e2e-oix5nm):namespaces "capz-e2e-oix5nm" not found
INFO: Creating namespace capz-e2e-oix5nm
INFO: Creating event watcher for namespace "capz-e2e-oix5nm"
Dec  1 06:48:21.301: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-oix5nm-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-n4w57, container calico-node
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-66crk, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-oix5nm-public-custom-vnet-control-plane-jfkmj, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-oix5nm-public-custom-vnet-control-plane-jfkmj, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-cpckx, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-v694m, container kube-proxy
STEP: Got error while iterating over activity logs for resource group capz-e2e-oix5nm-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.00067902s
STEP: Dumping all the Cluster API resources in the "capz-e2e-oix5nm" namespace
STEP: Deleting all clusters in the capz-e2e-oix5nm namespace
STEP: Deleting cluster capz-e2e-oix5nm-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-oix5nm/capz-e2e-oix5nm-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-oix5nm-public-custom-vnet to be deleted
W1201 07:33:02.021607   24335 reflector.go:441] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I1201 07:33:35.158909   24335 trace.go:205] Trace[725910890]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (01-Dec-2021 07:33:05.158) (total time: 30000ms):
Trace[725910890]: [30.000573829s] [30.000573829s] END
E1201 07:33:35.158965   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp 52.226.239.50:6443: i/o timeout
I1201 07:34:09.079487   24335 trace.go:205] Trace[1979022542]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (01-Dec-2021 07:33:39.078) (total time: 30001ms):
Trace[1979022542]: [30.001033748s] [30.001033748s] END
E1201 07:34:09.079552   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp 52.226.239.50:6443: i/o timeout
I1201 07:34:45.625855   24335 trace.go:205] Trace[2052058767]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (01-Dec-2021 07:34:15.624) (total time: 30000ms):
Trace[2052058767]: [30.000832415s] [30.000832415s] END
E1201 07:34:45.625907   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp 52.226.239.50:6443: i/o timeout
I1201 07:35:31.065954   24335 trace.go:205] Trace[1213736534]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (01-Dec-2021 07:35:01.064) (total time: 30001ms):
Trace[1213736534]: [30.001112122s] [30.001112122s] END
E1201 07:35:31.066009   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp 52.226.239.50:6443: i/o timeout
E1201 07:36:15.899709   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-oix5nm
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Dec  1 07:36:24.519: INFO: deleting an existing virtual network "custom-vnet"
Dec  1 07:36:35.010: INFO: deleting an existing route table "node-routetable"
Dec  1 07:36:45.363: INFO: deleting an existing network security group "node-nsg"
Dec  1 07:36:55.874: INFO: deleting an existing network security group "control-plane-nsg"
Dec  1 07:37:06.243: INFO: verifying the existing resource group "capz-e2e-oix5nm-public-custom-vnet" is empty
Dec  1 07:37:06.842: INFO: deleting the existing resource group "capz-e2e-oix5nm-public-custom-vnet"
E1201 07:37:07.326436   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 07:37:41.049675   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1201 07:38:30.285355   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 07:39:06.425068   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 50m49s on Ginkgo node 1 of 3


• [SLOW TEST:3049.036 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:334

INFO: "with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes" started at Wed, 01 Dec 2021 07:06:22 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-17d2vj" for hosting the cluster
Dec  1 07:06:22.668: INFO: starting to create namespace for hosting the "capz-e2e-17d2vj" test spec
2021/12/01 07:06:22 failed trying to get namespace (capz-e2e-17d2vj):namespaces "capz-e2e-17d2vj" not found
INFO: Creating namespace capz-e2e-17d2vj
INFO: Creating event watcher for namespace "capz-e2e-17d2vj"
Dec  1 07:06:22.697: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-17d2vj-vmss
INFO: Creating the workload cluster with name "capz-e2e-17d2vj-vmss" using the "machine-pool" template (Kubernetes v1.22.4, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 98 lines ...
STEP: waiting for job default/curl-to-elb-jobeh2buq2l461 to be complete
Dec  1 07:22:16.002: INFO: waiting for job default/curl-to-elb-jobeh2buq2l461 to be complete
Dec  1 07:22:26.066: INFO: job default/curl-to-elb-jobeh2buq2l461 is complete, took 10.06400822s
STEP: connecting directly to the external LB service
Dec  1 07:22:26.066: INFO: starting attempts to connect directly to the external LB service
2021/12/01 07:22:26 [DEBUG] GET http://20.81.11.152
2021/12/01 07:22:56 [ERR] GET http://20.81.11.152 request failed: Get "http://20.81.11.152": dial tcp 20.81.11.152:80: i/o timeout
2021/12/01 07:22:56 [DEBUG] GET http://20.81.11.152: retrying in 1s (4 left)
2021/12/01 07:23:27 [ERR] GET http://20.81.11.152 request failed: Get "http://20.81.11.152": dial tcp 20.81.11.152:80: i/o timeout
2021/12/01 07:23:27 [DEBUG] GET http://20.81.11.152: retrying in 2s (3 left)
Dec  1 07:23:29.138: INFO: successfully connected to the external LB service
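The [DEBUG]/[ERR] retry lines match the log format of hashicorp/go-retryablehttp, which retries failed requests with backoff and counts down the remaining attempts. A sketch of a probe that would emit that pattern, assuming that library (the retry budget is inferred from the "(4 left)" line):

package e2e

import (
	"log"

	retryablehttp "github.com/hashicorp/go-retryablehttp"
)

// probeExternalLB issues a GET that is retried with backoff; each failed
// attempt produces [ERR] and "retrying in ... (N left)" lines like those
// above.
func probeExternalLB(url string) error {
	client := retryablehttp.NewClient()
	client.RetryMax = 4 // assumed; chosen to match the "(4 left)" countdown
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	log.Printf("successfully connected to the external LB service: %s", resp.Status)
	return nil
}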
STEP: deleting the test resources
Dec  1 07:23:29.138: INFO: starting to delete external LB service web-windowsi9utmu-elb
Dec  1 07:23:29.195: INFO: starting to delete deployment web-windowsi9utmu
Dec  1 07:23:29.227: INFO: starting to delete job curl-to-elb-jobeh2buq2l461
... skipping 33 lines ...
Dec  1 07:30:21.813: INFO: INFO: Collecting boot logs for VMSS instance 2 of scale set capz-e2e-17d2vj-vmss-mp-0

Dec  1 07:30:22.056: INFO: INFO: Collecting logs for node win-p-win000001 in cluster capz-e2e-17d2vj-vmss in namespace capz-e2e-17d2vj

Dec  1 07:30:34.991: INFO: INFO: Collecting boot logs for VMSS instance 1 of scale set capz-e2e-17d2vj-vmss-mp-0

Failed to get logs for machine pool capz-e2e-17d2vj-vmss-mp-0, cluster capz-e2e-17d2vj/capz-e2e-17d2vj-vmss: [[running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1], [running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1]]
Dec  1 07:30:35.298: INFO: INFO: Collecting logs for node win-p-win000001 in cluster capz-e2e-17d2vj-vmss in namespace capz-e2e-17d2vj

Dec  1 07:31:02.450: INFO: INFO: Collecting boot logs for VMSS instance 1 of scale set win-p-win

Dec  1 07:31:02.727: INFO: INFO: Collecting logs for node win-p-win000002 in cluster capz-e2e-17d2vj-vmss in namespace capz-e2e-17d2vj

Dec  1 07:31:26.916: INFO: INFO: Collecting boot logs for VMSS instance 2 of scale set win-p-win

Failed to get logs for machine pool capz-e2e-17d2vj-vmss-mp-win, cluster capz-e2e-17d2vj/capz-e2e-17d2vj-vmss: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster capz-e2e-17d2vj/capz-e2e-17d2vj-vmss kube-system pod logs
STEP: Fetching kube-system pod logs took 319.530559ms
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-17d2vj-vmss-control-plane-tz9w9, container etcd
STEP: Creating log watcher for controller kube-system/calico-node-6vl4m, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-2hnzq, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-17d2vj-vmss-control-plane-tz9w9, container kube-apiserver
... skipping 10 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-wjt49, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-prxl2, container coredns
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-17d2vj-vmss-control-plane-tz9w9, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-node-gbw46, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-zzdd6, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-17d2vj-vmss-control-plane-tz9w9, container kube-scheduler
STEP: Got error while iterating over activity logs for resource group capz-e2e-17d2vj-vmss: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000757753s
STEP: Dumping all the Cluster API resources in the "capz-e2e-17d2vj" namespace
STEP: Deleting all clusters in the capz-e2e-17d2vj namespace
STEP: Deleting cluster capz-e2e-17d2vj-vmss
INFO: Waiting for the Cluster capz-e2e-17d2vj/capz-e2e-17d2vj-vmss to be deleted
STEP: Waiting for cluster capz-e2e-17d2vj-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-v2gzw, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-6vl4m, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-8m5ss, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-wjt49, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-kk6cd, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-zzdd6, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-v2gzw, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-2s29b, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-2hnzq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-2s29b, container calico-node-felix: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-17d2vj
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes" ran for 36m1s on Ginkgo node 3 of 3

... skipping 10 lines ...
  with a 1 control plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:455

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Wed, 01 Dec 2021 07:39:10 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-k3o3qf" for hosting the cluster
Dec  1 07:39:10.308: INFO: starting to create namespace for hosting the "capz-e2e-k3o3qf" test spec
2021/12/01 07:39:10 failed trying to get namespace (capz-e2e-k3o3qf):namespaces "capz-e2e-k3o3qf" not found
INFO: Creating namespace capz-e2e-k3o3qf
INFO: Creating event watcher for namespace "capz-e2e-k3o3qf"
Dec  1 07:39:10.339: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-k3o3qf-oot
INFO: Creating the workload cluster with name "capz-e2e-k3o3qf-oot" using the "external-cloud-provider" template (Kubernetes v1.22.4, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 13 lines ...
configmap/cloud-node-manager-addon created
clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-k3o3qf-oot-calico created
configmap/cni-capz-e2e-k3o3qf-oot-calico created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1201 07:39:39.738271   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 07:40:30.676107   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by capz-e2e-k3o3qf/capz-e2e-k3o3qf-oot-control-plane to be provisioned
STEP: Waiting for one control plane node to exist
E1201 07:41:23.299845   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 07:42:21.921501   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane capz-e2e-k3o3qf/capz-e2e-k3o3qf-oot-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
E1201 07:43:08.331701   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 07:43:51.905566   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for the machine pools to be provisioned
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/webh1omuu to be available
Dec  1 07:44:14.253: INFO: starting to wait for deployment to become available
Dec  1 07:44:34.351: INFO: Deployment default/webh1omuu is now available, took 20.098605801s
STEP: creating an internal Load Balancer service
Dec  1 07:44:34.351: INFO: starting to create an internal Load Balancer service
STEP: waiting for service default/webh1omuu-ilb to be available
Dec  1 07:44:34.401: INFO: waiting for service default/webh1omuu-ilb to be available
E1201 07:44:50.581457   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 07:45:38.617785   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 07:46:13.787832   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Dec  1 07:46:24.763: INFO: service default/webh1omuu-ilb is available, took 1m50.361647625s
STEP: connecting to the internal LB service from a curl pod
Dec  1 07:46:24.791: INFO: starting to create a curl to ilb job
STEP: waiting for job default/curl-to-ilb-jobkkw8h to be complete
Dec  1 07:46:24.833: INFO: waiting for job default/curl-to-ilb-jobkkw8h to be complete
Dec  1 07:46:34.902: INFO: job default/curl-to-ilb-jobkkw8h is complete, took 10.069078852s
STEP: deleting the ilb test resources
Dec  1 07:46:34.902: INFO: deleting the ilb service: webh1omuu-ilb
Dec  1 07:46:34.952: INFO: deleting the ilb job: curl-to-ilb-jobkkw8h
STEP: creating an external Load Balancer service
Dec  1 07:46:34.981: INFO: starting to create an external Load Balancer service
STEP: waiting for service default/webh1omuu-elb to be available
Dec  1 07:46:35.040: INFO: waiting for service default/webh1omuu-elb to be available
E1201 07:46:56.302285   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 07:47:38.760752   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 07:48:21.661873   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Dec  1 07:48:25.389: INFO: service default/webh1omuu-elb is available, took 1m50.348400052s
STEP: connecting to the external LB service from a curl pod
Dec  1 07:48:25.417: INFO: starting to create curl-to-elb job
STEP: waiting for job default/curl-to-elb-job43kex3lg17q to be complete
Dec  1 07:48:25.452: INFO: waiting for job default/curl-to-elb-job43kex3lg17q to be complete
Dec  1 07:48:35.511: INFO: job default/curl-to-elb-job43kex3lg17q is complete, took 10.059151091s
STEP: connecting directly to the external LB service
Dec  1 07:48:35.511: INFO: starting attempts to connect directly to the external LB service
2021/12/01 07:48:35 [DEBUG] GET http://20.102.38.79
E1201 07:49:03.397818   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
2021/12/01 07:49:05 [ERR] GET http://20.102.38.79 request failed: Get "http://20.102.38.79": dial tcp 20.102.38.79:80: i/o timeout
2021/12/01 07:49:05 [DEBUG] GET http://20.102.38.79: retrying in 1s (4 left)
Dec  1 07:49:09.586: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Dec  1 07:49:09.586: INFO: starting to delete external LB service webh1omuu-elb
Dec  1 07:49:09.645: INFO: starting to delete deployment webh1omuu
Dec  1 07:49:09.674: INFO: starting to delete job curl-to-elb-job43kex3lg17q
... skipping 34 lines ...
STEP: Fetching activity logs took 582.334385ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-k3o3qf" namespace
STEP: Deleting all clusters in the capz-e2e-k3o3qf namespace
STEP: Deleting cluster capz-e2e-k3o3qf-oot
INFO: Waiting for the Cluster capz-e2e-k3o3qf/capz-e2e-k3o3qf-oot to be deleted
STEP: Waiting for cluster capz-e2e-k3o3qf-oot to be deleted
E1201 07:49:52.943138   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 07:50:43.863534   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 07:51:19.256793   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 07:51:55.709803   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 07:52:51.422389   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 07:53:46.526605   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 07:54:40.688993   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 07:55:24.532069   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-k3o3qf
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1201 07:56:15.609799   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 17m19s on Ginkgo node 1 of 3


• [SLOW TEST:1039.048 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:490

INFO: "with a single control plane node and 1 node" started at Wed, 01 Dec 2021 07:42:23 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-4bsmmu" for hosting the cluster
Dec  1 07:42:23.608: INFO: starting to create namespace for hosting the "capz-e2e-4bsmmu" test spec
2021/12/01 07:42:23 failed trying to get namespace (capz-e2e-4bsmmu):namespaces "capz-e2e-4bsmmu" not found
INFO: Creating namespace capz-e2e-4bsmmu
INFO: Creating event watcher for namespace "capz-e2e-4bsmmu"
Dec  1 07:42:23.644: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-4bsmmu-aks
INFO: Creating the workload cluster with name "capz-e2e-4bsmmu-aks" using the "aks-multi-tenancy" template (Kubernetes v1.19.13, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 34 lines ...
STEP: Dumping logs from the "capz-e2e-4bsmmu-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-4bsmmu/capz-e2e-4bsmmu-aks logs
Dec  1 07:46:41.480: INFO: INFO: Collecting logs for node aks-agentpool1-23146771-vmss000000 in cluster capz-e2e-4bsmmu-aks in namespace capz-e2e-4bsmmu

Dec  1 07:48:50.746: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool0, cluster capz-e2e-4bsmmu/capz-e2e-4bsmmu-aks: [dialing public load balancer at capz-e2e-4bsmmu-aks-26096374.hcp.eastus.azmk8s.io: dial tcp 20.185.96.21:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
Dec  1 07:48:51.172: INFO: INFO: Collecting logs for node aks-agentpool1-23146771-vmss000000 in cluster capz-e2e-4bsmmu-aks in namespace capz-e2e-4bsmmu

Dec  1 07:51:01.826: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool1, cluster capz-e2e-4bsmmu/capz-e2e-4bsmmu-aks: [dialing public load balancer at capz-e2e-4bsmmu-aks-26096374.hcp.eastus.azmk8s.io: dial tcp 20.185.96.21:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
STEP: Dumping workload cluster capz-e2e-4bsmmu/capz-e2e-4bsmmu-aks kube-system pod logs
STEP: Fetching kube-system pod logs took 570.442476ms
STEP: Dumping workload cluster capz-e2e-4bsmmu/capz-e2e-4bsmmu-aks Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-lg4c4, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-84d976c568-x4b69, container coredns
STEP: Creating log watcher for controller kube-system/coredns-autoscaler-54d55c8b75-lqk2v, container autoscaler
... skipping 30 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:413

INFO: "with a single control plane node and 1 node" started at Wed, 01 Dec 2021 07:32:40 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-0mm1tm" for hosting the cluster
Dec  1 07:32:40.004: INFO: starting to create namespace for hosting the "capz-e2e-0mm1tm" test spec
2021/12/01 07:32:40 failed trying to get namespace (capz-e2e-0mm1tm):namespaces "capz-e2e-0mm1tm" not found
INFO: Creating namespace capz-e2e-0mm1tm
INFO: Creating event watcher for namespace "capz-e2e-0mm1tm"
Dec  1 07:32:40.037: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-0mm1tm-gpu
INFO: Creating the workload cluster with name "capz-e2e-0mm1tm-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.4, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 53 lines ...
STEP: Fetching activity logs took 1.009744159s
STEP: Dumping all the Cluster API resources in the "capz-e2e-0mm1tm" namespace
STEP: Deleting all clusters in the capz-e2e-0mm1tm namespace
STEP: Deleting cluster capz-e2e-0mm1tm-gpu
INFO: Waiting for the Cluster capz-e2e-0mm1tm/capz-e2e-0mm1tm-gpu to be deleted
STEP: Waiting for cluster capz-e2e-0mm1tm-gpu to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-44jqd, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-2j85n, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-0mm1tm
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and 1 node" ran for 36m47s on Ginkgo node 2 of 3

... skipping 57 lines ...
  with a single control plane node and a Linux AzureMachinePool with 1 node and a Windows AzureMachinePool with 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:579

INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Wed, 01 Dec 2021 08:00:40 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-tljrm3" for hosting the cluster
Dec  1 08:00:40.737: INFO: starting to create namespace for hosting the "capz-e2e-tljrm3" test spec
2021/12/01 08:00:40 failed trying to get namespace (capz-e2e-tljrm3):namespaces "capz-e2e-tljrm3" not found
INFO: Creating namespace capz-e2e-tljrm3
INFO: Creating event watcher for namespace "capz-e2e-tljrm3"
Dec  1 08:00:40.915: INFO: Creating cluster identity secret "cluster-identity-secret"
INFO: Cluster name is capz-e2e-tljrm3-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-tljrm3-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.4, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 89 lines ...
STEP: waiting for job default/curl-to-elb-jobvfp34oizkxi to be complete
Dec  1 08:15:16.866: INFO: waiting for job default/curl-to-elb-jobvfp34oizkxi to be complete
Dec  1 08:15:26.935: INFO: job default/curl-to-elb-jobvfp34oizkxi is complete, took 10.068753732s
STEP: connecting directly to the external LB service
Dec  1 08:15:26.935: INFO: starting attempts to connect directly to the external LB service
2021/12/01 08:15:26 [DEBUG] GET http://20.85.198.131
2021/12/01 08:15:56 [ERR] GET http://20.85.198.131 request failed: Get "http://20.85.198.131": dial tcp 20.85.198.131:80: i/o timeout
2021/12/01 08:15:56 [DEBUG] GET http://20.85.198.131: retrying in 1s (4 left)
Dec  1 08:15:58.001: INFO: successfully connected to the external LB service
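The [DEBUG]/[ERR]/"retrying in 1s (4 left)" lines above match the default log format of hashicorp/go-retryablehttp; a minimal sketch of a direct-connect helper in that style (the retry settings are illustrative, not taken from the test code):

    package sketch

    import (
        "net/http"
        "time"

        retryablehttp "github.com/hashicorp/go-retryablehttp"
    )

    // connectToELB issues a GET with retries; with the client's default logger
    // this produces the "[DEBUG] GET ..." / "retrying in 1s (n left)" lines
    // seen above.
    func connectToELB(url string) (*http.Response, error) {
        client := retryablehttp.NewClient()
        client.RetryMax = 4                          // illustrative retry budget
        client.RetryWaitMin = 1 * time.Second        // matches "retrying in 1s"
        client.HTTPClient.Timeout = 30 * time.Second // the dial above timed out at ~30s
        return client.Get(url)
    }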
STEP: deleting the test resources
Dec  1 08:15:58.001: INFO: starting to delete external LB service web-windowsn5k1tb-elb
Dec  1 08:15:58.058: INFO: starting to delete deployment web-windowsn5k1tb
Dec  1 08:15:58.089: INFO: starting to delete job curl-to-elb-jobvfp34oizkxi
... skipping 4 lines ...
Dec  1 08:16:08.992: INFO: Collecting boot logs for AzureMachine capz-e2e-tljrm3-win-vmss-control-plane-8thlm

Dec  1 08:16:09.762: INFO: Collecting logs for node win-p-win000000 in cluster capz-e2e-tljrm3-win-vmss in namespace capz-e2e-tljrm3

Dec  1 08:16:32.271: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-e2e-tljrm3-win-vmss-mp-0

Failed to get logs for machine pool capz-e2e-tljrm3-win-vmss-mp-0, cluster capz-e2e-tljrm3/capz-e2e-tljrm3-win-vmss: [running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1]
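Every command in the failure above is a Linux log-collection command (cloud-init logs, journalctl unit dumps) run against a Windows VMSS instance, so each one exits with status 1. A minimal sketch of the kind of SSH command runner involved, assuming golang.org/x/crypto/ssh:

    package sketch

    import "golang.org/x/crypto/ssh"

    // runCollector runs one log-collection command on an established SSH client.
    // The Windows instance above has neither /var/log/cloud-init.log nor
    // journalctl, so each of the commands listed exits with a non-zero status.
    func runCollector(client *ssh.Client, cmd string) ([]byte, error) {
        session, err := client.NewSession()
        if err != nil {
            return nil, err
        }
        defer session.Close()
        return session.CombinedOutput(cmd)
    }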
Dec  1 08:16:32.548: INFO: Collecting logs for node win-p-win000000 in cluster capz-e2e-tljrm3-win-vmss in namespace capz-e2e-tljrm3

Dec  1 08:17:08.685: INFO: Collecting boot logs for VMSS instance 0 of scale set win-p-win

STEP: Dumping workload cluster capz-e2e-tljrm3/capz-e2e-tljrm3-win-vmss kube-system pod logs
STEP: Fetching kube-system pod logs took 331.629634ms
... skipping 13 lines ...
STEP: Fetching activity logs took 1.220426497s
STEP: Dumping all the Cluster API resources in the "capz-e2e-tljrm3" namespace
STEP: Deleting all clusters in the capz-e2e-tljrm3 namespace
STEP: Deleting cluster capz-e2e-tljrm3-win-vmss
INFO: Waiting for the Cluster capz-e2e-tljrm3/capz-e2e-tljrm3-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-tljrm3-win-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-hqk26, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-k9gq4, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-nzfnd, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-tljrm3-win-vmss-control-plane-8thlm, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-tljrm3-win-vmss-control-plane-8thlm, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-jtjk4, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-rxpfn, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-tljrm3-win-vmss-control-plane-8thlm, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-tljrm3-win-vmss-control-plane-8thlm, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-k4bvj, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-tljrm3
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 26m33s on Ginkgo node 3 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:532

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Wed, 01 Dec 2021 07:56:29 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-ye00zt" for hosting the cluster
Dec  1 07:56:29.360: INFO: starting to create namespace for hosting the "capz-e2e-ye00zt" test spec
2021/12/01 07:56:29 failed trying to get namespace (capz-e2e-ye00zt):namespaces "capz-e2e-ye00zt" not found
INFO: Creating namespace capz-e2e-ye00zt
INFO: Creating event watcher for namespace "capz-e2e-ye00zt"
Dec  1 07:56:29.412: INFO: Creating cluster identity secret "cluster-identity-secret"
INFO: Cluster name is capz-e2e-ye00zt-win-ha
INFO: Creating the workload cluster with name "capz-e2e-ye00zt-win-ha" using the "windows" template (Kubernetes v1.22.4, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 12 lines ...
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created
clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-ye00zt-win-ha-flannel created
configmap/cni-capz-e2e-ye00zt-win-ha-flannel created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1201 07:57:13.164092   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
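The E1201 reflector errors interleaved through this spec (and repeated below) point at capz-e2e-oix5nm, a namespace from an earlier spec: an event watcher for that now-deleted cluster keeps retrying its Event list/watch against a DNS name that no longer resolves, so these lines are noise for capz-e2e-ye00zt. A minimal sketch of the kind of client-go event watcher involved, with all names assumed:

    package sketch

    import (
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    // startEventWatcher starts an Event informer for one test namespace. If cfg
    // points at a cluster that has since been torn down, the informer's reflector
    // retries forever and logs "Failed to watch *v1.Event ... no such host" on
    // every attempt, which is what produces the E1201 lines in this log.
    func startEventWatcher(cfg *rest.Config, namespace string, stop <-chan struct{}) error {
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            return err
        }
        factory := informers.NewSharedInformerFactoryWithOptions(
            cs, 30*time.Second, informers.WithNamespace(namespace))
        factory.Core().V1().Events().Informer() // registers the Event reflector
        factory.Start(stop)
        return nil
    }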
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by capz-e2e-ye00zt/capz-e2e-ye00zt-win-ha-control-plane to be provisioned
STEP: Waiting for one control plane node to exist
E1201 07:57:58.806563   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 07:58:40.986012   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be ready
INFO: Waiting for the remaining control plane machines managed by capz-e2e-ye00zt/capz-e2e-ye00zt-win-ha-control-plane to be provisioned
STEP: Waiting for all control plane nodes to exist
E1201 07:59:15.666360   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 08:00:03.710271   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 08:01:01.475366   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 08:01:47.523387   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane capz-e2e-ye00zt/capz-e2e-ye00zt-win-ha-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Waiting for the workload nodes to exist
INFO: Waiting for the machine pools to be provisioned
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/webytm0n7 to be available
Dec  1 08:02:43.180: INFO: starting to wait for deployment to become available
E1201 08:02:46.086995   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Dec  1 08:03:03.298: INFO: Deployment default/webytm0n7 is now available, took 20.118985627s
STEP: creating an internal Load Balancer service
Dec  1 08:03:03.299: INFO: starting to create an internal Load Balancer service
STEP: waiting for service default/webytm0n7-ilb to be available
Dec  1 08:03:03.391: INFO: waiting for service default/webytm0n7-ilb to be available
E1201 08:03:24.929499   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 08:04:19.693173   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 08:04:50.874710   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Dec  1 08:04:53.841: INFO: service default/webytm0n7-ilb is available, took 1m50.450554028s
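An "internal Load Balancer service" on Azure is presumably an ordinary type LoadBalancer Service carrying the cloud provider's internal-LB annotation; a minimal client-go sketch (the service name is taken from the log above, the selector and port are assumptions):

    package sketch

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
        "k8s.io/client-go/kubernetes"
    )

    // createILB creates a LoadBalancer Service annotated so the Azure cloud
    // provider provisions an internal (VNet-only) load balancer instead of a
    // public one.
    func createILB(ctx context.Context, cs kubernetes.Interface) error {
        svc := &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{
                Name: "webytm0n7-ilb", // name from the log above
                Annotations: map[string]string{
                    "service.beta.kubernetes.io/azure-load-balancer-internal": "true",
                },
            },
            Spec: corev1.ServiceSpec{
                Type:     corev1.ServiceTypeLoadBalancer,
                Selector: map[string]string{"app": "webytm0n7"}, // assumed selector
                Ports:    []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(80)}},
            },
        }
        _, err := cs.CoreV1().Services("default").Create(ctx, svc, metav1.CreateOptions{})
        return err
    }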
STEP: connecting to the internal LB service from a curl pod
Dec  1 08:04:53.874: INFO: starting to create a curl to ilb job
STEP: waiting for job default/curl-to-ilb-jobf70wi to be complete
Dec  1 08:04:53.941: INFO: waiting for job default/curl-to-ilb-jobf70wi to be complete
Dec  1 08:05:04.014: INFO: job default/curl-to-ilb-jobf70wi is complete, took 10.072724982s
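The curl-to-ilb job is presumably a one-shot Kubernetes Job whose pod curls the ILB address and exits non-zero on failure; the ten-second completion wait above polls its status. A minimal sketch (the image, flags, and URL handling are assumptions):

    package sketch

    import (
        batchv1 "k8s.io/api/batch/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // curlJob builds a one-shot Job that completes only if the GET succeeds;
    // "waiting for job ... to be complete" above polls its Succeeded count.
    func curlJob(name, url string) *batchv1.Job {
        return &batchv1.Job{
            ObjectMeta: metav1.ObjectMeta{Name: name},
            Spec: batchv1.JobSpec{
                Template: corev1.PodTemplateSpec{
                    Spec: corev1.PodSpec{
                        RestartPolicy: corev1.RestartPolicyNever,
                        Containers: []corev1.Container{{
                            Name:  "curl",
                            Image: "curlimages/curl", // assumed image
                            Args:  []string{"--fail", "--max-time", "5", url},
                        }},
                    },
                },
            },
        }
    }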
STEP: deleting the ilb test resources
Dec  1 08:05:04.014: INFO: deleting the ilb service: webytm0n7-ilb
Dec  1 08:05:04.118: INFO: deleting the ilb job: curl-to-ilb-jobf70wi
STEP: creating an external Load Balancer service
Dec  1 08:05:04.157: INFO: starting to create an external Load Balancer service
STEP: waiting for service default/webytm0n7-elb to be available
Dec  1 08:05:04.251: INFO: waiting for service default/webytm0n7-elb to be available
E1201 08:05:20.994337   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 08:06:19.850581   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Dec  1 08:06:24.723: INFO: service default/webytm0n7-elb is available, took 1m20.47206931s
STEP: connecting to the external LB service from a curl pod
Dec  1 08:06:24.756: INFO: starting to create curl-to-elb job
STEP: waiting for job default/curl-to-elb-job842zfr3fht3 to be complete
Dec  1 08:06:24.797: INFO: waiting for job default/curl-to-elb-job842zfr3fht3 to be complete
Dec  1 08:06:34.872: INFO: job default/curl-to-elb-job842zfr3fht3 is complete, took 10.074880234s
... skipping 6 lines ...
Dec  1 08:06:35.014: INFO: starting to delete deployment webytm0n7
Dec  1 08:06:35.063: INFO: starting to delete job curl-to-elb-job842zfr3fht3
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web-windowsz8coh1 to be available
Dec  1 08:06:35.217: INFO: starting to wait for deployment to become available
E1201 08:06:58.035792   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Dec  1 08:07:35.498: INFO: Deployment default/web-windowsz8coh1 is now available, took 1m0.28128963s
STEP: creating an internal Load Balancer service
Dec  1 08:07:35.498: INFO: starting to create an internal Load Balancer service
STEP: waiting for service default/web-windowsz8coh1-ilb to be available
Dec  1 08:07:35.590: INFO: waiting for service default/web-windowsz8coh1-ilb to be available
E1201 08:07:38.313079   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 08:08:20.146034   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Dec  1 08:08:35.834: INFO: service default/web-windowsz8coh1-ilb is available, took 1m0.244115707s
STEP: connecting to the internal LB service from a curl pod
Dec  1 08:08:35.867: INFO: starting to create a curl to ilb job
STEP: waiting for job default/curl-to-ilb-jobcyp4g to be complete
Dec  1 08:08:35.911: INFO: waiting for job default/curl-to-ilb-jobcyp4g to be complete
Dec  1 08:08:45.984: INFO: job default/curl-to-ilb-jobcyp4g is complete, took 10.072673401s
STEP: deleting the ilb test resources
Dec  1 08:08:45.984: INFO: deleting the ilb service: web-windowsz8coh1-ilb
Dec  1 08:08:46.066: INFO: deleting the ilb job: curl-to-ilb-jobcyp4g
STEP: creating an external Load Balancer service
Dec  1 08:08:46.111: INFO: starting to create an external Load Balancer service
STEP: waiting for service default/web-windowsz8coh1-elb to be available
Dec  1 08:08:46.194: INFO: waiting for service default/web-windowsz8coh1-elb to be available
E1201 08:09:12.308949   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 08:09:43.301995   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 08:10:39.245118   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 08:11:19.107476   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 08:12:03.230027   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 08:12:35.415189   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 08:13:19.145323   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Dec  1 08:13:27.203: INFO: service default/web-windowsz8coh1-elb is available, took 4m41.008933518s
STEP: connecting to the external LB service from a curl pod
Dec  1 08:13:27.236: INFO: starting to create curl-to-elb job
STEP: waiting for job default/curl-to-elb-jobf9lf93aqpoj to be complete
Dec  1 08:13:27.279: INFO: waiting for job default/curl-to-elb-jobf9lf93aqpoj to be complete
Dec  1 08:13:37.362: INFO: job default/curl-to-elb-jobf9lf93aqpoj is complete, took 10.083737573s
STEP: connecting directly to the external LB service
Dec  1 08:13:37.362: INFO: starting attempts to connect directly to the external LB service
2021/12/01 08:13:37 [DEBUG] GET http://20.85.198.96
2021/12/01 08:14:07 [ERR] GET http://20.85.198.96 request failed: Get "http://20.85.198.96": dial tcp 20.85.198.96:80: i/o timeout
2021/12/01 08:14:07 [DEBUG] GET http://20.85.198.96: retrying in 1s (4 left)
Dec  1 08:14:08.418: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Dec  1 08:14:08.418: INFO: starting to delete external LB service web-windowsz8coh1-elb
Dec  1 08:14:08.506: INFO: starting to delete deployment web-windowsz8coh1
Dec  1 08:14:08.545: INFO: starting to delete job curl-to-elb-jobf9lf93aqpoj
STEP: Dumping logs from the "capz-e2e-ye00zt-win-ha" workload cluster
STEP: Dumping workload cluster capz-e2e-ye00zt/capz-e2e-ye00zt-win-ha logs
Dec  1 08:14:08.636: INFO: Collecting logs for node capz-e2e-ye00zt-win-ha-control-plane-57jpb in cluster capz-e2e-ye00zt-win-ha in namespace capz-e2e-ye00zt

E1201 08:14:13.298162   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Dec  1 08:14:20.849: INFO: Collecting boot logs for AzureMachine capz-e2e-ye00zt-win-ha-control-plane-57jpb

Dec  1 08:14:21.519: INFO: Collecting logs for node capz-e2e-ye00zt-win-ha-control-plane-bfhjk in cluster capz-e2e-ye00zt-win-ha in namespace capz-e2e-ye00zt

Dec  1 08:14:30.832: INFO: Collecting boot logs for AzureMachine capz-e2e-ye00zt-win-ha-control-plane-bfhjk

Dec  1 08:14:31.202: INFO: Collecting logs for node capz-e2e-ye00zt-win-ha-control-plane-fchft in cluster capz-e2e-ye00zt-win-ha in namespace capz-e2e-ye00zt

Dec  1 08:14:39.190: INFO: Collecting boot logs for AzureMachine capz-e2e-ye00zt-win-ha-control-plane-fchft

Dec  1 08:14:39.471: INFO: Collecting logs for node capz-e2e-ye00zt-win-ha-md-0-cfscg in cluster capz-e2e-ye00zt-win-ha in namespace capz-e2e-ye00zt

E1201 08:14:43.455872   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Dec  1 08:14:49.999: INFO: Collecting boot logs for AzureMachine capz-e2e-ye00zt-win-ha-md-0-cfscg

Dec  1 08:14:50.337: INFO: Collecting logs for node 10.1.0.6 in cluster capz-e2e-ye00zt-win-ha in namespace capz-e2e-ye00zt

Dec  1 08:15:14.777: INFO: Collecting boot logs for AzureMachine capz-e2e-ye00zt-win-ha-md-win-rwdkk

Dec  1 08:15:15.056: INFO: Collecting logs for node 10.1.0.4 in cluster capz-e2e-ye00zt-win-ha in namespace capz-e2e-ye00zt

E1201 08:15:32.803913   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
Dec  1 08:15:37.918: INFO: Collecting boot logs for AzureMachine capz-e2e-ye00zt-win-ha-md-win-4vs5v

STEP: Dumping workload cluster capz-e2e-ye00zt/capz-e2e-ye00zt-win-ha kube-system pod logs
STEP: Fetching kube-system pod logs took 319.853583ms
STEP: Dumping workload cluster capz-e2e-ye00zt/capz-e2e-ye00zt-win-ha Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-9j66m, container coredns
... skipping 19 lines ...
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-7dfx8, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-ye00zt-win-ha-control-plane-fchft, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-l29bp, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-windows-amd64-nxwxd, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-proxy-l96rd, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-ye00zt-win-ha-control-plane-57jpb, container kube-scheduler
E1201 08:16:06.327721   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Got error while iterating over activity logs for resource group capz-e2e-ye00zt-win-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000200577s
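The 30-second fetch and the listNextResults failure above come from paging through the resource group's Azure activity log; a minimal sketch of the autorest-based insights client named in the error (the API version in the import path and the filter string are assumptions):

    package sketch

    import (
        "context"
        "fmt"
        "time"

        "github.com/Azure/azure-sdk-for-go/services/preview/monitor/mgmt/2019-06-01/insights"
    )

    // dumpActivityLogs pages through a resource group's activity log; the
    // NextWithContext call issues the "next results" request that exceeded its
    // context deadline in the run above.
    func dumpActivityLogs(ctx context.Context, client insights.ActivityLogsClient,
        group string, start time.Time) error {
        filter := fmt.Sprintf("eventTimestamp ge '%s' and resourceGroupName eq '%s'",
            start.Format(time.RFC3339), group)
        page, err := client.List(ctx, filter, "")
        for err == nil && page.NotDone() {
            for _, ev := range page.Values() {
                _ = ev // write each event into the dump
            }
            err = page.NextWithContext(ctx)
        }
        return err
    }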
STEP: Dumping all the Cluster API resources in the "capz-e2e-ye00zt" namespace
STEP: Deleting all clusters in the capz-e2e-ye00zt namespace
STEP: Deleting cluster capz-e2e-ye00zt-win-ha
INFO: Waiting for the Cluster capz-e2e-ye00zt/capz-e2e-ye00zt-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-ye00zt-win-ha to be deleted
E1201 08:16:44.277893   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 08:17:16.511965   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 08:18:15.615404   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-9j66m, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-768pr, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-ye00zt-win-ha-control-plane-57jpb, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-zrwkg, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-mfq77, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-ye00zt-win-ha-control-plane-57jpb, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-ye00zt-win-ha-control-plane-57jpb, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-ye00zt-win-ha-control-plane-fchft, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-ye00zt-win-ha-control-plane-fchft, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-7dfx8, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-ye00zt-win-ha-control-plane-fchft, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-ye00zt-win-ha-control-plane-57jpb, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-ye00zt-win-ha-control-plane-fchft, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-n7cqs, container kube-flannel: http2: client connection lost
E1201 08:18:53.463809   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 08:19:40.478934   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 08:20:26.519760   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 08:21:01.538934   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 08:21:47.725077   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 08:22:30.505168   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 08:23:24.395330   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 08:24:00.449548   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 08:24:48.707688   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-ye00zt
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1201 08:25:21.533151   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 08:25:59.920800   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 08:26:59.288914   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
E1201 08:27:57.971545   24335 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-oix5nm/events?resourceVersion=11246": dial tcp: lookup capz-e2e-oix5nm-public-custom-vnet-3178f313.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 31m46s on Ginkgo node 1 of 3


• [SLOW TEST:1906.375 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 5 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Workload cluster creation Creating a GPU-enabled cluster [It] with a single control plane node and 1 node 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:76
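azure_gpu.go:76 is the assertion behind the one failure: with Gomega, a message of the shape "Timed out after 1200.003s. Expected <bool>: false to be true" typically comes from an Eventually over a bool poll, roughly as below (the poll body, client, and intervals are assumptions, not the test's actual code):

    package sketch

    import (
        "context"
        "time"

        . "github.com/onsi/gomega"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitForGPUJob polls until a CUDA test job succeeds. If the NVIDIA driver
    // or device plugin never becomes ready, the polled bool stays false and
    // Gomega fails with "Timed out after 1200.003s. Expected <bool>: false to be true".
    func waitForGPUJob(ctx context.Context, cs kubernetes.Interface, ns, name string) {
        Eventually(func() bool {
            job, err := cs.BatchV1().Jobs(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return false
            }
            return job.Status.Succeeded > 0
        }, 20*time.Minute, 10*time.Second).Should(BeTrue())
    }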

Ran 9 of 24 Specs in 6150.249 seconds
FAIL! -- 8 Passed | 1 Failed | 0 Pending | 15 Skipped


Ginkgo ran 1 suite in 1h43m48.184695597s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc
make[1]: *** [Makefile:176: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:184: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...