Result: FAILURE
Tests: 1 failed / 8 succeeded
Started: 2022-05-03 19:39
Elapsed: 1h56m
Revision: release-0.5

Test Failures


capz-e2e Workload cluster creation Creating a GPU-enabled cluster with a single control plane node and 1 node (34m2s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\sGPU\-enabled\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:377
Timed out after 1200.002s.
Expected
    <bool>: false
to be true
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:76
				
stdout/stderr captured in junit.e2e_suite.2.xml



8 Passed Tests

13 Skipped Tests

Error lines from build-log.txt

... skipping 429 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:269

INFO: "With ipv6 worker node" started at Tue, 03 May 2022 19:46:24 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-4jwlgb" for hosting the cluster
May  3 19:46:24.914: INFO: starting to create namespace for hosting the "capz-e2e-4jwlgb" test spec
2022/05/03 19:46:24 failed trying to get namespace (capz-e2e-4jwlgb):namespaces "capz-e2e-4jwlgb" not found
INFO: Creating namespace capz-e2e-4jwlgb
INFO: Creating event watcher for namespace "capz-e2e-4jwlgb"
May  3 19:46:24.983: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-4jwlgb-ipv6
INFO: Creating the workload cluster with name "capz-e2e-4jwlgb-ipv6" using the "ipv6" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
STEP: Fetching activity logs took 688.214597ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-4jwlgb" namespace
STEP: Deleting all clusters in the capz-e2e-4jwlgb namespace
STEP: Deleting cluster capz-e2e-4jwlgb-ipv6
INFO: Waiting for the Cluster capz-e2e-4jwlgb/capz-e2e-4jwlgb-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-4jwlgb-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-4jwlgb-ipv6-control-plane-nt4hg, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-4jwlgb-ipv6-control-plane-mztwh, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-4jwlgb-ipv6-control-plane-mztwh, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-4jwlgb-ipv6-control-plane-nt4hg, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-htbgb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-4jwlgb-ipv6-control-plane-mztwh, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-7bppp, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-4jwlgb-ipv6-control-plane-nt4hg, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-4jwlgb-ipv6-control-plane-v66bg, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-wj4fk, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-j4rw9, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-4jwlgb-ipv6-control-plane-nt4hg, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-vcksg, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-t25fn, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-jndrg, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-dglxq, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-dtcqb, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-v427m, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vqm2j, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-4jwlgb-ipv6-control-plane-mztwh, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-4jwlgb-ipv6-control-plane-v66bg, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-4jwlgb-ipv6-control-plane-v66bg, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-4jwlgb-ipv6-control-plane-v66bg, container etcd: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-4jwlgb
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 20m46s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a single control plane node and an AzureMachinePool with 2 nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:315

INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" started at Tue, 03 May 2022 20:07:10 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-3u03lh" for hosting the cluster
May  3 20:07:10.586: INFO: starting to create namespace for hosting the "capz-e2e-3u03lh" test spec
2022/05/03 20:07:10 failed trying to get namespace (capz-e2e-3u03lh):namespaces "capz-e2e-3u03lh" not found
INFO: Creating namespace capz-e2e-3u03lh
INFO: Creating event watcher for namespace "capz-e2e-3u03lh"
May  3 20:07:10.623: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-3u03lh-vmss
INFO: Creating the workload cluster with name "capz-e2e-3u03lh-vmss" using the "machine-pool" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 128 lines ...
  With 3 control-plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:203

INFO: "With 3 control-plane nodes and 2 worker nodes" started at Tue, 03 May 2022 19:46:24 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-a0ou2c" for hosting the cluster
May  3 19:46:24.911: INFO: starting to create namespace for hosting the "capz-e2e-a0ou2c" test spec
2022/05/03 19:46:24 failed trying to get namespace (capz-e2e-a0ou2c):namespaces "capz-e2e-a0ou2c" not found
INFO: Creating namespace capz-e2e-a0ou2c
INFO: Creating event watcher for namespace "capz-e2e-a0ou2c"
May  3 19:46:24.979: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-a0ou2c-ha
INFO: Creating the workload cluster with name "capz-e2e-a0ou2c-ha" using the "(default)" template (Kubernetes v1.22.1, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 65 lines ...
May  3 19:56:33.170: INFO: starting to delete external LB service webyqpsis-elb
May  3 19:56:33.231: INFO: starting to delete deployment webyqpsis
May  3 19:56:33.250: INFO: starting to delete job curl-to-elb-jobx0j7hzt29c3
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
May  3 19:56:33.320: INFO: starting to create dev deployment namespace
2022/05/03 19:56:33 failed trying to get namespace (development):namespaces "development" not found
2022/05/03 19:56:33 namespace development does not exist, creating...
STEP: Creating production namespace
May  3 19:56:33.385: INFO: starting to create prod deployment namespace
2022/05/03 19:56:33 failed trying to get namespace (production):namespaces "production" not found
2022/05/03 19:56:33 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
May  3 19:56:33.439: INFO: starting to create frontend-prod deployments
May  3 19:56:33.467: INFO: starting to create frontend-dev deployments
May  3 19:56:33.506: INFO: starting to create backend deployments
May  3 19:56:33.525: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
May  3 19:56:55.666: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.238.195 port 80: Connection timed out

STEP: Cleaning up after ourselves
May  3 19:59:06.874: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
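The backend-deny-ingress manifest itself is not printed in the log, but the names and labels in the messages above imply a NetworkPolicy of roughly this shape. This is a hypothetical reconstruction (selectors and labels assumed from the log text, not taken from the e2e suite's actual template):

```yaml
# Hypothetical sketch; the actual manifest used by the suite is not in this log.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-deny-ingress
  namespace: development
spec:
  podSelector:
    matchLabels:
      app: webapp
      role: backend
  policyTypes:
    - Ingress   # no ingress rules listed, so all ingress to matching pods is denied
```

A policy of this shape would explain the observed behavior: once applied, curl from the network-policy pods to the backend pod IP (192.168.238.195:80) times out, which is exactly what the "Ensuring we no longer have ingress access" step expects.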
STEP: Applying a network policy to deny egress access in development namespace
May  3 19:59:07.035: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.238.195 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.238.195 port 80: Connection timed out

STEP: Cleaning up after ourselves
May  3 20:03:29.001: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
May  3 20:03:29.134: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.40.67 port 80: Connection timed out

STEP: Cleaning up after ourselves
May  3 20:05:40.404: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
May  3 20:05:40.533: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.40.66 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.40.67 port 80: Connection timed out

STEP: Cleaning up after ourselves
May  3 20:10:02.546: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
May  3 20:10:02.684: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.238.195 port 80: Connection timed out

STEP: Cleaning up after ourselves
May  3 20:12:13.289: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
May  3 20:12:13.395: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.238.195 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: Dumping logs from the "capz-e2e-a0ou2c-ha" workload cluster
STEP: Dumping workload cluster capz-e2e-a0ou2c/capz-e2e-a0ou2c-ha logs
May  3 20:14:24.701: INFO: INFO: Collecting logs for node capz-e2e-a0ou2c-ha-control-plane-gq5qh in cluster capz-e2e-a0ou2c-ha in namespace capz-e2e-a0ou2c

May  3 20:14:36.281: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-a0ou2c-ha-control-plane-gq5qh
... skipping 39 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-bn9pn, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-m57v5, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-vx4ws, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-a0ou2c-ha-control-plane-gq5qh, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-a0ou2c-ha-control-plane-sphg5, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-a0ou2c-ha-control-plane-d4d42, container kube-scheduler
STEP: Got error while iterating over activity logs for resource group capz-e2e-a0ou2c-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.00099061s
STEP: Dumping all the Cluster API resources in the "capz-e2e-a0ou2c" namespace
STEP: Deleting all clusters in the capz-e2e-a0ou2c namespace
STEP: Deleting cluster capz-e2e-a0ou2c-ha
INFO: Waiting for the Cluster capz-e2e-a0ou2c/capz-e2e-a0ou2c-ha to be deleted
STEP: Waiting for cluster capz-e2e-a0ou2c-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-a0ou2c-ha-control-plane-gq5qh, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-mdsps, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-bn9pn, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-a0ou2c-ha-control-plane-gq5qh, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-jdmp2, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-zz6bc, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-s2kt9, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-lg5vg, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-mpcwf, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-a0ou2c-ha-control-plane-gq5qh, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-a0ou2c-ha-control-plane-d4d42, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-a0ou2c-ha-control-plane-d4d42, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-bgwx8, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-a0ou2c-ha-control-plane-gq5qh, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-a0ou2c-ha-control-plane-d4d42, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-a0ou2c-ha-control-plane-d4d42, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-m57v5, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-a0ou2c
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 worker nodes" ran for 46m28s on Ginkgo node 3 of 3

... skipping 8 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:141

INFO: "Creates a public management cluster in the same vnet" started at Tue, 03 May 2022 19:46:24 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-tlzjtq" for hosting the cluster
May  3 19:46:24.872: INFO: starting to create namespace for hosting the "capz-e2e-tlzjtq" test spec
2022/05/03 19:46:24 failed trying to get namespace (capz-e2e-tlzjtq):namespaces "capz-e2e-tlzjtq" not found
INFO: Creating namespace capz-e2e-tlzjtq
INFO: Creating event watcher for namespace "capz-e2e-tlzjtq"
May  3 19:46:24.910: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-tlzjtq-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-ct5vd, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-fjxtb, container coredns
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-tlzjtq-public-custom-vnet-control-plane-hrfct, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-tlzjtq-public-custom-vnet-control-plane-hrfct, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-tlzjtq-public-custom-vnet-control-plane-hrfct, container kube-scheduler
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-v2nwc, container coredns
STEP: Got error while iterating over activity logs for resource group capz-e2e-tlzjtq-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001026401s
STEP: Dumping all the Cluster API resources in the "capz-e2e-tlzjtq" namespace
STEP: Deleting all clusters in the capz-e2e-tlzjtq namespace
STEP: Deleting cluster capz-e2e-tlzjtq-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-tlzjtq/capz-e2e-tlzjtq-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-tlzjtq-public-custom-vnet to be deleted
W0503 20:32:47.218959   24176 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I0503 20:33:18.317476   24176 trace.go:205] Trace[269488513]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (03-May-2022 20:32:48.316) (total time: 30000ms):
Trace[269488513]: [30.000678639s] [30.000678639s] END
E0503 20:33:18.317562   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp 20.25.237.208:6443: i/o timeout
I0503 20:33:50.644545   24176 trace.go:205] Trace[1837500405]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (03-May-2022 20:33:20.643) (total time: 30001ms):
Trace[1837500405]: [30.001097837s] [30.001097837s] END
E0503 20:33:50.644618   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp 20.25.237.208:6443: i/o timeout
I0503 20:34:26.971234   24176 trace.go:205] Trace[1303651311]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (03-May-2022 20:33:56.969) (total time: 30001ms):
Trace[1303651311]: [30.001335621s] [30.001335621s] END
E0503 20:34:26.971306   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp 20.25.237.208:6443: i/o timeout
I0503 20:35:04.135550   24176 trace.go:205] Trace[1550082099]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (03-May-2022 20:34:34.134) (total time: 30000ms):
Trace[1550082099]: [30.000617936s] [30.000617936s] END
E0503 20:35:04.135618   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp 20.25.237.208:6443: i/o timeout
I0503 20:35:53.999304   24176 trace.go:205] Trace[1137903146]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (03-May-2022 20:35:23.998) (total time: 30001ms):
Trace[1137903146]: [30.001198362s] [30.001198362s] END
E0503 20:35:53.999372   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp 20.25.237.208:6443: i/o timeout
I0503 20:37:14.852569   24176 trace.go:205] Trace[1413573525]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (03-May-2022 20:36:44.851) (total time: 30001ms):
Trace[1413573525]: [30.001193135s] [30.001193135s] END
E0503 20:37:14.852641   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp 20.25.237.208:6443: i/o timeout
E0503 20:37:47.178201   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp: lookup capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-tlzjtq
STEP: Running additional cleanup for the "create-workload-cluster" test spec
May  3 20:37:56.244: INFO: deleting an existing virtual network "custom-vnet"
May  3 20:38:06.689: INFO: deleting an existing route table "node-routetable"
May  3 20:38:08.923: INFO: deleting an existing network security group "node-nsg"
May  3 20:38:19.173: INFO: deleting an existing network security group "control-plane-nsg"
May  3 20:38:29.399: INFO: verifying the existing resource group "capz-e2e-tlzjtq-public-custom-vnet" is empty
May  3 20:38:29.492: INFO: deleting the existing resource group "capz-e2e-tlzjtq-public-custom-vnet"
E0503 20:38:31.421496   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp: lookup capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0503 20:39:21.798008   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp: lookup capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E0503 20:40:06.071028   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp: lookup capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 54m5s on Ginkgo node 1 of 3


• [SLOW TEST:3244.678 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:377

INFO: "with a single control plane node and 1 node" started at Tue, 03 May 2022 20:23:34 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-7c961i" for hosting the cluster
May  3 20:23:34.208: INFO: starting to create namespace for hosting the "capz-e2e-7c961i" test spec
2022/05/03 20:23:34 failed trying to get namespace (capz-e2e-7c961i):namespaces "capz-e2e-7c961i" not found
INFO: Creating namespace capz-e2e-7c961i
INFO: Creating event watcher for namespace "capz-e2e-7c961i"
May  3 20:23:34.251: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-7c961i-gpu
INFO: Creating the workload cluster with name "capz-e2e-7c961i-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 53 lines ...
STEP: Fetching activity logs took 1.140969055s
STEP: Dumping all the Cluster API resources in the "capz-e2e-7c961i" namespace
STEP: Deleting all clusters in the capz-e2e-7c961i namespace
STEP: Deleting cluster capz-e2e-7c961i-gpu
INFO: Waiting for the Cluster capz-e2e-7c961i/capz-e2e-7c961i-gpu to be deleted
STEP: Waiting for cluster capz-e2e-7c961i-gpu to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-kh2cw, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-7c961i-gpu-control-plane-t2zdl, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-j7zh2, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-lzpvb, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-hjp2d, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-7c961i-gpu-control-plane-t2zdl, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-7c961i-gpu-control-plane-t2zdl, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-7c961i-gpu-control-plane-t2zdl, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-5vmhw, container calico-kube-controllers: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-7c961i
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and 1 node" ran for 34m2s on Ginkgo node 2 of 3

... skipping 57 lines ...
  with a 1 control plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:419

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Tue, 03 May 2022 20:32:53 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-crk3lk" for hosting the cluster
May  3 20:32:53.251: INFO: starting to create namespace for hosting the "capz-e2e-crk3lk" test spec
2022/05/03 20:32:53 failed trying to get namespace (capz-e2e-crk3lk):namespaces "capz-e2e-crk3lk" not found
INFO: Creating namespace capz-e2e-crk3lk
INFO: Creating event watcher for namespace "capz-e2e-crk3lk"
May  3 20:32:53.289: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-crk3lk-oot
INFO: Creating the workload cluster with name "capz-e2e-crk3lk-oot" using the "external-cloud-provider" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 98 lines ...
STEP: Fetching activity logs took 988.11935ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-crk3lk" namespace
STEP: Deleting all clusters in the capz-e2e-crk3lk namespace
STEP: Deleting cluster capz-e2e-crk3lk-oot
INFO: Waiting for the Cluster capz-e2e-crk3lk/capz-e2e-crk3lk-oot to be deleted
STEP: Waiting for cluster capz-e2e-crk3lk-oot to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-x6jhv, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-v4db6, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-w7bbq, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-crk3lk
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 25m30s on Ginkgo node 3 of 3

... skipping 10 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454

INFO: "with a single control plane node and 1 node" started at Tue, 03 May 2022 20:40:29 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-x5ar5w" for hosting the cluster
May  3 20:40:29.554: INFO: starting to create namespace for hosting the "capz-e2e-x5ar5w" test spec
2022/05/03 20:40:29 failed trying to get namespace (capz-e2e-x5ar5w):namespaces "capz-e2e-x5ar5w" not found
INFO: Creating namespace capz-e2e-x5ar5w
INFO: Creating event watcher for namespace "capz-e2e-x5ar5w"
May  3 20:40:29.591: INFO: Creating cluster identity secret
INFO: Cluster name is capz-e2e-x5ar5w-aks
INFO: Creating the workload cluster with name "capz-e2e-x5ar5w-aks" using the "aks-multi-tenancy" template (Kubernetes v1.22.6, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 7 lines ...
machinepool.cluster.x-k8s.io/agentpool1 created
azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E0503 20:40:48.707668   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp: lookup capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0503 20:41:31.250227   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp: lookup capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0503 20:42:08.011319   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp: lookup capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0503 20:42:43.496889   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp: lookup capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0503 20:43:14.665099   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp: lookup capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0503 20:44:07.034437   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp: lookup capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0503 20:44:59.160277   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp: lookup capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0503 20:45:44.386318   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp: lookup capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0503 20:46:14.757311   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp: lookup capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0503 20:46:59.707745   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp: lookup capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0503 20:47:38.491000   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp: lookup capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
May  3 20:48:00.940: INFO: Waiting for the first control plane machine managed by capz-e2e-x5ar5w/capz-e2e-x5ar5w-aks to be provisioned
STEP: Waiting for at least one control plane node to exist
E0503 20:48:08.955531   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp: lookup capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0503 20:49:05.113458   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp: lookup capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0503 20:49:49.858396   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp: lookup capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0503 20:50:35.976026   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp: lookup capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0503 20:51:35.204267   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp: lookup capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be ready
May  3 20:51:51.229: INFO: Waiting for the first control plane machine managed by capz-e2e-x5ar5w/capz-e2e-x5ar5w-aks to be provisioned
STEP: Waiting for all control plane nodes to exist
INFO: Waiting for the machine deployments to be provisioned
INFO: Waiting for the machine pools to be provisioned
STEP: Waiting for the machine pool workload nodes to exist
... skipping 10 lines ...
STEP: time sync OK for host aks-agentpool1-18835011-vmss000000
STEP: time sync OK for host aks-agentpool1-18835011-vmss000000
STEP: Dumping logs from the "capz-e2e-x5ar5w-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-x5ar5w/capz-e2e-x5ar5w-aks logs
May  3 20:51:57.475: INFO: INFO: Collecting logs for node aks-agentpool1-18835011-vmss000000 in cluster capz-e2e-x5ar5w-aks in namespace capz-e2e-x5ar5w

E0503 20:52:25.077678   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp: lookup capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0503 20:52:58.703100   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp: lookup capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0503 20:53:32.159587   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp: lookup capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
May  3 20:54:06.842: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool0, cluster capz-e2e-x5ar5w/capz-e2e-x5ar5w-aks: [dialing public load balancer at capz-e2e-x5ar5w-aks-8c0108a7.hcp.northcentralus.azmk8s.io: dial tcp 20.98.9.112:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
May  3 20:54:07.357: INFO: INFO: Collecting logs for node aks-agentpool1-18835011-vmss000000 in cluster capz-e2e-x5ar5w-aks in namespace capz-e2e-x5ar5w

E0503 20:54:20.302282   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp: lookup capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0503 20:55:07.291337   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp: lookup capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0503 20:55:43.820516   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp: lookup capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0503 20:56:15.069265   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp: lookup capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
May  3 20:56:17.918: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool1, cluster capz-e2e-x5ar5w/capz-e2e-x5ar5w-aks: [dialing public load balancer at capz-e2e-x5ar5w-aks-8c0108a7.hcp.northcentralus.azmk8s.io: dial tcp 20.98.9.112:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
STEP: Dumping workload cluster capz-e2e-x5ar5w/capz-e2e-x5ar5w-aks kube-system pod logs
STEP: Creating log watcher for controller kube-system/cloud-node-manager-v6lnq, container cloud-node-manager
STEP: Creating log watcher for controller kube-system/coredns-69c47794-h9fzx, container coredns
STEP: Creating log watcher for controller kube-system/coredns-69c47794-ltsvm, container coredns
STEP: Creating log watcher for controller kube-system/azure-ip-masq-agent-j6kbj, container azure-ip-masq-agent
STEP: Creating log watcher for controller kube-system/tunnelfront-8d65c995b-nx5cs, container tunnel-front
... skipping 20 lines ...
STEP: Fetching activity logs took 505.533614ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-x5ar5w" namespace
STEP: Deleting all clusters in the capz-e2e-x5ar5w namespace
STEP: Deleting cluster capz-e2e-x5ar5w-aks
INFO: Waiting for the Cluster capz-e2e-x5ar5w/capz-e2e-x5ar5w-aks to be deleted
STEP: Waiting for cluster capz-e2e-x5ar5w-aks to be deleted
E0503 20:57:01.791574   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp: lookup capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0503 20:57:56.564651   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp: lookup capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0503 20:58:51.211717   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp: lookup capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0503 20:59:45.811689   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp: lookup capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0503 21:00:40.841830   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp: lookup capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0503 21:01:36.484777   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp: lookup capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0503 21:02:27.812634   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp: lookup capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-x5ar5w
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E0503 21:03:16.043745   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp: lookup capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0503 21:04:11.171876   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp: lookup capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "with a single control plane node and 1 node" ran for 23m42s on Ginkgo node 1 of 3


• [SLOW TEST:1421.832 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 8 lines ...
  with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543

INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Tue, 03 May 2022 20:58:23 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-rggd29" for hosting the cluster
May  3 20:58:23.222: INFO: starting to create namespace for hosting the "capz-e2e-rggd29" test spec
2022/05/03 20:58:23 failed trying to get namespace (capz-e2e-rggd29):namespaces "capz-e2e-rggd29" not found
INFO: Creating namespace capz-e2e-rggd29
INFO: Creating event watcher for namespace "capz-e2e-rggd29"
May  3 20:58:23.266: INFO: Creating cluster identity secret
INFO: Cluster name is capz-e2e-rggd29-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-rggd29-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 89 lines ...
STEP: waiting for job default/curl-to-elb-jobjlge9njvy18 to be complete
May  3 21:15:26.568: INFO: waiting for job default/curl-to-elb-jobjlge9njvy18 to be complete
May  3 21:15:36.605: INFO: job default/curl-to-elb-jobjlge9njvy18 is complete, took 10.036923265s
STEP: connecting directly to the external LB service
May  3 21:15:36.605: INFO: starting attempts to connect directly to the external LB service
2022/05/03 21:15:36 [DEBUG] GET http://20.25.233.213
2022/05/03 21:16:06 [ERR] GET http://20.25.233.213 request failed: Get "http://20.25.233.213": dial tcp 20.25.233.213:80: i/o timeout
2022/05/03 21:16:06 [DEBUG] GET http://20.25.233.213: retrying in 1s (4 left)
2022/05/03 21:16:20 [ERR] GET http://20.25.233.213 request failed: Get "http://20.25.233.213": dial tcp 20.25.233.213:80: connect: connection refused
2022/05/03 21:16:20 [DEBUG] GET http://20.25.233.213: retrying in 2s (3 left)
May  3 21:16:22.489: INFO: successfully connected to the external LB service
STEP: deleting the test resources
May  3 21:16:22.489: INFO: starting to delete external LB service web-windowsfaobrm-elb
May  3 21:16:22.540: INFO: starting to delete deployment web-windowsfaobrm
May  3 21:16:22.559: INFO: starting to delete job curl-to-elb-jobjlge9njvy18
... skipping 23 lines ...
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-rggd29-win-vmss-control-plane-7p5jz, container kube-controller-manager
STEP: Dumping workload cluster capz-e2e-rggd29/capz-e2e-rggd29-win-vmss Azure activity log
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-rggd29-win-vmss-control-plane-7p5jz, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-windows-amd64-rkrc4, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-proxy-2flhf, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-bv2kp, container kube-proxy
STEP: Got error while iterating over activity logs for resource group capz-e2e-rggd29-win-vmss: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.00036777s
STEP: Dumping all the Cluster API resources in the "capz-e2e-rggd29" namespace
STEP: Deleting all clusters in the capz-e2e-rggd29 namespace
STEP: Deleting cluster capz-e2e-rggd29-win-vmss
INFO: Waiting for the Cluster capz-e2e-rggd29/capz-e2e-rggd29-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-rggd29-win-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-w78rs, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-rggd29
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 34m11s on Ginkgo node 3 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Tue, 03 May 2022 20:57:36 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-abi8h1" for hosting the cluster
May  3 20:57:36.597: INFO: starting to create namespace for hosting the "capz-e2e-abi8h1" test spec
2022/05/03 20:57:36 failed trying to get namespace (capz-e2e-abi8h1):namespaces "capz-e2e-abi8h1" not found
INFO: Creating namespace capz-e2e-abi8h1
INFO: Creating event watcher for namespace "capz-e2e-abi8h1"
May  3 20:57:36.643: INFO: Creating cluster identity secret
INFO: Cluster name is capz-e2e-abi8h1-win-ha
INFO: Creating the workload cluster with name "capz-e2e-abi8h1-win-ha" using the "windows" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 55 lines ...
STEP: waiting for job default/curl-to-elb-jobmtuf2xezc41 to be complete
May  3 21:10:42.200: INFO: waiting for job default/curl-to-elb-jobmtuf2xezc41 to be complete
May  3 21:10:52.242: INFO: job default/curl-to-elb-jobmtuf2xezc41 is complete, took 10.041882334s
STEP: connecting directly to the external LB service
May  3 21:10:52.242: INFO: starting attempts to connect directly to the external LB service
2022/05/03 21:10:52 [DEBUG] GET http://52.159.103.112
2022/05/03 21:11:22 [ERR] GET http://52.159.103.112 request failed: Get "http://52.159.103.112": dial tcp 52.159.103.112:80: i/o timeout
2022/05/03 21:11:22 [DEBUG] GET http://52.159.103.112: retrying in 1s (4 left)
May  3 21:11:26.290: INFO: successfully connected to the external LB service
STEP: deleting the test resources
May  3 21:11:26.290: INFO: starting to delete external LB service webdg28r2-elb
May  3 21:11:26.377: INFO: starting to delete deployment webdg28r2
May  3 21:11:26.403: INFO: starting to delete job curl-to-elb-jobmtuf2xezc41
... skipping 79 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-hkgqw, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-7rdbg, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-abi8h1-win-ha-control-plane-r5vqm, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-8sjcf, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-abi8h1-win-ha-control-plane-wnjmn, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-abi8h1-win-ha-control-plane-5v6mb, container kube-apiserver
STEP: Got error while iterating over activity logs for resource group capz-e2e-abi8h1-win-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000410842s
STEP: Dumping all the Cluster API resources in the "capz-e2e-abi8h1" namespace
STEP: Deleting all clusters in the capz-e2e-abi8h1 namespace
STEP: Deleting cluster capz-e2e-abi8h1-win-ha
INFO: Waiting for the Cluster capz-e2e-abi8h1/capz-e2e-abi8h1-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-abi8h1-win-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-abi8h1-win-ha-control-plane-5v6mb, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-8d7kz, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-abi8h1-win-ha-control-plane-r5vqm, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-abi8h1-win-ha-control-plane-5v6mb, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-abi8h1-win-ha-control-plane-r5vqm, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-abi8h1-win-ha-control-plane-5v6mb, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-lx7hl, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-n2s9x, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-cx2rk, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-abi8h1-win-ha-control-plane-r5vqm, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-abi8h1-win-ha-control-plane-r5vqm, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-hkgqw, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-abi8h1-win-ha-control-plane-5v6mb, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-cm5lg, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-abi8h1
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 36m0s on Ginkgo node 2 of 3

... skipping 3 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating a Windows Enabled cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:494
    With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496
------------------------------
E0503 21:05:06.036179   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp: lookup capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 36 lines (repeated reflector errors: dial tcp lookup capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host) ...
E0503 21:33:04.648208   24176 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-tlzjtq/events?resourceVersion=8685": dial tcp: lookup capz-e2e-tlzjtq-public-custom-vnet-fcd4d29.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Workload cluster creation Creating a GPU-enabled cluster [It] with a single control plane node and 1 node 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:76

Ran 9 of 22 Specs in 6551.107 seconds
FAIL! -- 8 Passed | 1 Failed | 0 Pending | 13 Skipped


Ginkgo ran 1 suite in 1h50m36.714328146s
Test Suite Failed
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...