Result: FAILURE
Tests: 1 failed / 8 succeeded
Started: 2022-04-27 19:37
Elapsed: 2h5m
Revision: release-0.5

Test Failures


capz-e2e Workload cluster creation Creating an AKS cluster with a single control plane node and 1 node (38m37s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\san\sAKS\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454
Timed out after 1200.000s.
System machine pools not ready
Expected
    <bool>: false
to equal
    <bool>: true
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/aks.go:216
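For context, the "Timed out after 1200.000s" and "Expected <bool>: false to equal <bool>: true" output is what Gomega prints when a polled boolean never becomes true before the deadline. The snippet below is only a sketch of that pattern, not the actual code at aks.go:216; systemMachinePoolsReady is a hypothetical stand-in for however the suite checks AKS system machine pool readiness.

package main

import (
	"time"

	. "github.com/onsi/gomega"
)

// systemMachinePoolsReady is a hypothetical placeholder for the suite's real
// check that every AKS system machine pool has provisioned and reports ready.
func systemMachinePoolsReady() bool {
	// ...query MachinePool / AzureManagedMachinePool status here...
	return false
}

func main() {
	RegisterFailHandler(func(message string, _ ...int) { panic(message) })

	// Polling a bool and asserting Equal(true) is what yields the
	// "Expected <bool>: false to equal <bool>: true" failure above once the
	// 1200s (20 minute) timeout elapses without the pools becoming ready.
	Eventually(func() bool {
		return systemMachinePoolsReady()
	}, 20*time.Minute, 30*time.Second).Should(Equal(true), "System machine pools not ready")
}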
Full stdout/stderr: junit.e2e_suite.1.xml



8 Passed Tests

13 Skipped Tests

Error lines from build-log.txt

... skipping 432 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:269

INFO: "With ipv6 worker node" started at Wed, 27 Apr 2022 19:45:15 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-g7onm1" for hosting the cluster
Apr 27 19:45:15.072: INFO: starting to create namespace for hosting the "capz-e2e-g7onm1" test spec
2022/04/27 19:45:15 failed trying to get namespace (capz-e2e-g7onm1):namespaces "capz-e2e-g7onm1" not found
INFO: Creating namespace capz-e2e-g7onm1
INFO: Creating event watcher for namespace "capz-e2e-g7onm1"
Apr 27 19:45:15.154: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-g7onm1-ipv6
INFO: Creating the workload cluster with name "capz-e2e-g7onm1-ipv6" using the "ipv6" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
STEP: Fetching activity logs took 591.581788ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-g7onm1" namespace
STEP: Deleting all clusters in the capz-e2e-g7onm1 namespace
STEP: Deleting cluster capz-e2e-g7onm1-ipv6
INFO: Waiting for the Cluster capz-e2e-g7onm1/capz-e2e-g7onm1-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-g7onm1-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-77srd, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-g7onm1-ipv6-control-plane-bxvkr, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-g7onm1-ipv6-control-plane-bxvkr, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-2wgj4, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-g7onm1-ipv6-control-plane-bxvkr, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-8tbw5, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-8pkfd, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-g7onm1-ipv6-control-plane-bxvkr, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-thsbv, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-g7onm1-ipv6-control-plane-d8nsh, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-g7onm1-ipv6-control-plane-d8nsh, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-g7onm1-ipv6-control-plane-d8nsh, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-l22sj, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-g7onm1-ipv6-control-plane-d8nsh, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-b54lq, container calico-kube-controllers: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-g7onm1
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 18m3s on Ginkgo node 2 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:203

INFO: "With 3 control-plane nodes and 2 worker nodes" started at Wed, 27 Apr 2022 19:45:15 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-clanif" for hosting the cluster
Apr 27 19:45:15.072: INFO: starting to create namespace for hosting the "capz-e2e-clanif" test spec
2022/04/27 19:45:15 failed trying to get namespace (capz-e2e-clanif):namespaces "capz-e2e-clanif" not found
INFO: Creating namespace capz-e2e-clanif
INFO: Creating event watcher for namespace "capz-e2e-clanif"
Apr 27 19:45:15.151: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-clanif-ha
INFO: Creating the workload cluster with name "capz-e2e-clanif-ha" using the "(default)" template (Kubernetes v1.22.1, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 67 lines ...
Apr 27 19:56:02.582: INFO: starting to delete external LB service webu1cx5y-elb
Apr 27 19:56:02.745: INFO: starting to delete deployment webu1cx5y
Apr 27 19:56:02.855: INFO: starting to delete job curl-to-elb-jobwvzxbapz9dm
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Apr 27 19:56:03.014: INFO: starting to create dev deployment namespace
2022/04/27 19:56:03 failed trying to get namespace (development):namespaces "development" not found
2022/04/27 19:56:03 namespace development does not exist, creating...
STEP: Creating production namespace
Apr 27 19:56:03.232: INFO: starting to create prod deployment namespace
2022/04/27 19:56:03 failed trying to get namespace (production):namespaces "production" not found
2022/04/27 19:56:03 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
Apr 27 19:56:03.449: INFO: starting to create frontend-prod deployments
Apr 27 19:56:03.578: INFO: starting to create frontend-dev deployments
Apr 27 19:56:03.711: INFO: starting to create backend deployments
Apr 27 19:56:03.828: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Apr 27 19:56:30.275: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.60.131 port 80: Connection timed out

STEP: Cleaning up after ourselves
Apr 27 19:58:42.194: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
STEP: Applying a network policy to deny egress access in development namespace
Apr 27 19:58:42.607: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.60.131 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.60.131 port 80: Connection timed out

STEP: Cleaning up after ourselves
Apr 27 20:03:04.340: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Apr 27 20:03:04.718: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.60.132 port 80: Connection timed out

STEP: Cleaning up after ourselves
Apr 27 20:05:17.457: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
Apr 27 20:05:18.112: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.60.129 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.60.132 port 80: Connection timed out

STEP: Cleaning up after ourselves
Apr 27 20:09:41.648: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Apr 27 20:09:42.037: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.60.131 port 80: Connection timed out

STEP: Cleaning up after ourselves
Apr 27 20:11:54.768: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
Apr 27 20:11:55.144: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.60.131 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
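The network-policy steps above apply named policies such as development/backend-deny-ingress and then verify connectivity with curl. As an illustration only (an assumed, minimal shape, not the suite's helper code), a deny-all-ingress policy for the app: webapp, role: backend pods in the development namespace could be built with the standard k8s.io/api/networking/v1 types like this:

package main

import (
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Name, namespace, and labels are taken from the log lines above; the rest
	// is an assumed, minimal shape for a deny-all-ingress policy.
	denyIngress := &networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "backend-deny-ingress",
			Namespace: "development",
		},
		Spec: networkingv1.NetworkPolicySpec{
			// Select the backend pods the test curls from the network-policy pods.
			PodSelector: metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "webapp", "role": "backend"},
			},
			// Declaring the Ingress policy type with no Ingress rules denies all
			// ingress to the selected pods, which is why the curl above times out.
			PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
		},
	}
	fmt.Printf("would apply NetworkPolicy %s/%s\n", denyIngress.Namespace, denyIngress.Name)
}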
STEP: Dumping logs from the "capz-e2e-clanif-ha" workload cluster
STEP: Dumping workload cluster capz-e2e-clanif/capz-e2e-clanif-ha logs
Apr 27 20:14:06.692: INFO: INFO: Collecting logs for node capz-e2e-clanif-ha-control-plane-r8lkk in cluster capz-e2e-clanif-ha in namespace capz-e2e-clanif

Apr 27 20:14:19.491: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-clanif-ha-control-plane-r8lkk
... skipping 39 lines ...
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-clanif-ha-control-plane-zrdtd, container etcd
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-gdlk8, container coredns
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-clanif-ha-control-plane-r8lkk, container etcd
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-clanif-ha-control-plane-wgjzt, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-2lrp2, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-proxy-mbtbj, container kube-proxy
STEP: Got error while iterating over activity logs for resource group capz-e2e-clanif-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.00073289s
STEP: Dumping all the Cluster API resources in the "capz-e2e-clanif" namespace
STEP: Deleting all clusters in the capz-e2e-clanif namespace
STEP: Deleting cluster capz-e2e-clanif-ha
INFO: Waiting for the Cluster capz-e2e-clanif/capz-e2e-clanif-ha to be deleted
STEP: Waiting for cluster capz-e2e-clanif-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-clanif-ha-control-plane-r8lkk, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-bzn7s, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-2lrp2, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-clanif-ha-control-plane-r8lkk, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-v7pxb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-clanif-ha-control-plane-r8lkk, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-gdlk8, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-clanif-ha-control-plane-r8lkk, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-8v44g, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-clanif
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 worker nodes" ran for 37m35s on Ginkgo node 3 of 3

... skipping 8 lines ...
  with a single control plane node and an AzureMachinePool with 2 nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:315

INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" started at Wed, 27 Apr 2022 20:03:17 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-m55nro" for hosting the cluster
Apr 27 20:03:17.972: INFO: starting to create namespace for hosting the "capz-e2e-m55nro" test spec
2022/04/27 20:03:17 failed trying to get namespace (capz-e2e-m55nro):namespaces "capz-e2e-m55nro" not found
INFO: Creating namespace capz-e2e-m55nro
INFO: Creating event watcher for namespace "capz-e2e-m55nro"
Apr 27 20:03:18.016: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-m55nro-vmss
INFO: Creating the workload cluster with name "capz-e2e-m55nro-vmss" using the "machine-pool" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 106 lines ...
STEP: Fetching activity logs took 671.935597ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-m55nro" namespace
STEP: Deleting all clusters in the capz-e2e-m55nro namespace
STEP: Deleting cluster capz-e2e-m55nro-vmss
INFO: Waiting for the Cluster capz-e2e-m55nro/capz-e2e-m55nro-vmss to be deleted
STEP: Waiting for cluster capz-e2e-m55nro-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-4jblq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-4x5xw, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-r88qw, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-nzsq2, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-m55nro
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" ran for 21m24s on Ginkgo node 2 of 3

... skipping 10 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:141

INFO: "Creates a public management cluster in the same vnet" started at Wed, 27 Apr 2022 19:45:15 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-v29gtz" for hosting the cluster
Apr 27 19:45:15.060: INFO: starting to create namespace for hosting the "capz-e2e-v29gtz" test spec
2022/04/27 19:45:15 failed trying to get namespace (capz-e2e-v29gtz):namespaces "capz-e2e-v29gtz" not found
INFO: Creating namespace capz-e2e-v29gtz
INFO: Creating event watcher for namespace "capz-e2e-v29gtz"
Apr 27 19:45:15.107: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-v29gtz-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-v29gtz-public-custom-vnet-control-plane-6rpv4, container kube-apiserver
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-gpqsz, container coredns
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-l8fnd, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-n6gcw, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-v29gtz-public-custom-vnet-control-plane-6rpv4, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-v29gtz-public-custom-vnet-control-plane-6rpv4, container kube-scheduler
STEP: Got error while iterating over activity logs for resource group capz-e2e-v29gtz-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000391951s
STEP: Dumping all the Cluster API resources in the "capz-e2e-v29gtz" namespace
STEP: Deleting all clusters in the capz-e2e-v29gtz namespace
STEP: Deleting cluster capz-e2e-v29gtz-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-v29gtz/capz-e2e-v29gtz-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-v29gtz-public-custom-vnet to be deleted
W0427 20:34:32.379306   24223 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I0427 20:35:03.844625   24223 trace.go:205] Trace[117788073]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (27-Apr-2022 20:34:33.842) (total time: 30001ms):
Trace[117788073]: [30.001632117s] [30.001632117s] END
E0427 20:35:03.844716   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp 20.93.47.135:6443: i/o timeout
I0427 20:35:37.007728   24223 trace.go:205] Trace[140523964]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (27-Apr-2022 20:35:07.006) (total time: 30000ms):
Trace[140523964]: [30.000741121s] [30.000741121s] END
E0427 20:35:37.007810   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp 20.93.47.135:6443: i/o timeout
I0427 20:36:13.292064   24223 trace.go:205] Trace[507076829]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (27-Apr-2022 20:35:43.291) (total time: 30000ms):
Trace[507076829]: [30.000899508s] [30.000899508s] END
E0427 20:36:13.292127   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp 20.93.47.135:6443: i/o timeout
I0427 20:36:53.620084   24223 trace.go:205] Trace[1044108094]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (27-Apr-2022 20:36:23.618) (total time: 30001ms):
Trace[1044108094]: [30.00155628s] [30.00155628s] END
E0427 20:36:53.620155   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp 20.93.47.135:6443: i/o timeout
I0427 20:37:40.377640   24223 trace.go:205] Trace[246867517]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (27-Apr-2022 20:37:10.376) (total time: 30000ms):
Trace[246867517]: [30.000693694s] [30.000693694s] END
E0427 20:37:40.377708   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp 20.93.47.135:6443: i/o timeout
I0427 20:38:59.060044   24223 trace.go:205] Trace[837785742]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (27-Apr-2022 20:38:29.058) (total time: 30001ms):
Trace[837785742]: [30.00148222s] [30.00148222s] END
E0427 20:38:59.060112   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp 20.93.47.135:6443: i/o timeout
I0427 20:39:59.927990   24223 trace.go:205] Trace[583144235]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (27-Apr-2022 20:39:29.926) (total time: 30001ms):
Trace[583144235]: [30.001152715s] [30.001152715s] END
E0427 20:39:59.928067   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp 20.93.47.135:6443: i/o timeout
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-v29gtz
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Apr 27 20:40:01.596: INFO: deleting an existing virtual network "custom-vnet"
Apr 27 20:40:12.601: INFO: deleting an existing route table "node-routetable"
Apr 27 20:40:15.397: INFO: deleting an existing network security group "node-nsg"
Apr 27 20:40:27.021: INFO: deleting an existing network security group "control-plane-nsg"
Apr 27 20:40:37.610: INFO: verifying the existing resource group "capz-e2e-v29gtz-public-custom-vnet" is empty
Apr 27 20:40:37.742: INFO: deleting the existing resource group "capz-e2e-v29gtz-public-custom-vnet"
E0427 20:40:49.113577   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0427 20:41:41.611027   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E0427 20:42:32.507676   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 58m3s on Ginkgo node 1 of 3


• [SLOW TEST:3482.644 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a 1 control plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:419

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Wed, 27 Apr 2022 20:24:41 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-x9didm" for hosting the cluster
Apr 27 20:24:41.945: INFO: starting to create namespace for hosting the "capz-e2e-x9didm" test spec
2022/04/27 20:24:41 failed trying to get namespace (capz-e2e-x9didm):namespaces "capz-e2e-x9didm" not found
INFO: Creating namespace capz-e2e-x9didm
INFO: Creating event watcher for namespace "capz-e2e-x9didm"
Apr 27 20:24:41.987: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-x9didm-oot
INFO: Creating the workload cluster with name "capz-e2e-x9didm-oot" using the "external-cloud-provider" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 98 lines ...
STEP: Fetching activity logs took 606.655209ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-x9didm" namespace
STEP: Deleting all clusters in the capz-e2e-x9didm namespace
STEP: Deleting cluster capz-e2e-x9didm-oot
INFO: Waiting for the Cluster capz-e2e-x9didm/capz-e2e-x9didm-oot to be deleted
STEP: Waiting for cluster capz-e2e-x9didm-oot to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-x9didm-oot-control-plane-tzn8k, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-x9didm-oot-control-plane-tzn8k, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-7lgv2, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-fxtm6, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-csk2n, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-fld9d, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-zqggp, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-x9didm-oot-control-plane-tzn8k, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-controller-manager, container cloud-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-2hznw, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-x9didm-oot-control-plane-tzn8k, container etcd: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-x9didm
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 23m49s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:377

INFO: "with a single control plane node and 1 node" started at Wed, 27 Apr 2022 20:22:49 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-d40d3z" for hosting the cluster
Apr 27 20:22:49.608: INFO: starting to create namespace for hosting the "capz-e2e-d40d3z" test spec
2022/04/27 20:22:49 failed trying to get namespace (capz-e2e-d40d3z):namespaces "capz-e2e-d40d3z" not found
INFO: Creating namespace capz-e2e-d40d3z
INFO: Creating event watcher for namespace "capz-e2e-d40d3z"
Apr 27 20:22:49.651: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-d40d3z-gpu
INFO: Creating the workload cluster with name "capz-e2e-d40d3z-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 58 lines ...
STEP: Fetching activity logs took 538.13533ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-d40d3z" namespace
STEP: Deleting all clusters in the capz-e2e-d40d3z namespace
STEP: Deleting cluster capz-e2e-d40d3z-gpu
INFO: Waiting for the Cluster capz-e2e-d40d3z/capz-e2e-d40d3z-gpu to be deleted
STEP: Waiting for cluster capz-e2e-d40d3z-gpu to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-9qt29, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-fvfcg, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-d40d3z
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and 1 node" ran for 25m42s on Ginkgo node 3 of 3

... skipping 10 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454

INFO: "with a single control plane node and 1 node" started at Wed, 27 Apr 2022 20:43:17 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-ea7b7l" for hosting the cluster
Apr 27 20:43:17.707: INFO: starting to create namespace for hosting the "capz-e2e-ea7b7l" test spec
2022/04/27 20:43:17 failed trying to get namespace (capz-e2e-ea7b7l):namespaces "capz-e2e-ea7b7l" not found
INFO: Creating namespace capz-e2e-ea7b7l
INFO: Creating event watcher for namespace "capz-e2e-ea7b7l"
Apr 27 20:43:17.753: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-ea7b7l-aks
INFO: Creating the workload cluster with name "capz-e2e-ea7b7l-aks" using the "aks-multi-tenancy" template (Kubernetes v1.22.6, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 7 lines ...
machinepool.cluster.x-k8s.io/agentpool1 created
azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E0427 20:43:25.273149   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0427 20:44:19.561253   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0427 20:44:51.316847   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0427 20:45:21.616584   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0427 20:46:04.519016   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0427 20:46:45.903237   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0427 20:47:19.555130   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0427 20:47:51.929242   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
Apr 27 20:48:00.049: INFO: Waiting for the first control plane machine managed by capz-e2e-ea7b7l/capz-e2e-ea7b7l-aks to be provisioned
STEP: Waiting for atleast one control plane node to exist
E0427 20:48:49.202489   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0427 20:49:34.278655   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0427 20:50:29.113523   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0427 20:51:13.821466   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0427 20:51:55.062711   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0427 20:52:29.570474   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0427 20:53:27.643458   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0427 20:53:58.216998   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0427 20:54:47.803598   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0427 20:55:24.742702   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0427 20:56:07.074718   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0427 20:56:54.400366   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0427 20:57:45.257243   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0427 20:58:16.820377   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0427 20:59:07.955266   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0427 20:59:59.777635   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0427 21:00:58.307297   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0427 21:01:41.615616   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0427 21:02:34.534780   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0427 21:03:19.020483   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0427 21:04:09.175078   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0427 21:05:00.101301   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0427 21:05:55.585980   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0427 21:06:45.043516   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0427 21:07:21.445074   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Dumping logs from the "capz-e2e-ea7b7l-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-ea7b7l/capz-e2e-ea7b7l-aks logs
Apr 27 21:08:00.143: INFO: INFO: Collecting logs for node aks-agentpool1-39366494-vmss000000 in cluster capz-e2e-ea7b7l-aks in namespace capz-e2e-ea7b7l

E0427 21:08:13.258182   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0427 21:08:47.744361   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0427 21:09:23.696887   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Apr 27 21:10:09.769: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool1, cluster capz-e2e-ea7b7l/capz-e2e-ea7b7l-aks: [dialing public load balancer at capz-e2e-ea7b7l-aks-d1260284.hcp.northeurope.azmk8s.io: dial tcp 40.127.250.39:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
STEP: Dumping workload cluster capz-e2e-ea7b7l/capz-e2e-ea7b7l-aks kube-system pod logs
STEP: Fetching kube-system pod logs took 1.065856891s
STEP: Dumping workload cluster capz-e2e-ea7b7l/capz-e2e-ea7b7l-aks Azure activity log
STEP: Creating log watcher for controller kube-system/csi-azuredisk-node-kv29z, container liveness-probe
STEP: Creating log watcher for controller kube-system/csi-azuredisk-node-xcqh9, container azuredisk
STEP: Creating log watcher for controller kube-system/csi-azurefile-node-w82q7, container azurefile
... skipping 20 lines ...
STEP: Fetching activity logs took 563.238541ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-ea7b7l" namespace
STEP: Deleting all clusters in the capz-e2e-ea7b7l namespace
STEP: Deleting cluster capz-e2e-ea7b7l-aks
INFO: Waiting for the Cluster capz-e2e-ea7b7l/capz-e2e-ea7b7l-aks to be deleted
STEP: Waiting for cluster capz-e2e-ea7b7l-aks to be deleted
E0427 21:10:21.898774   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0427 21:11:06.015122   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0427 21:11:38.920848   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0427 21:12:20.870033   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 9 near-identical reflector errors (21:13:19 to 21:19:15, same DNS lookup failure) ...
E0427 21:20:06.436245   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
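The reflector errors above (and the similar bursts later during teardown) come from the event watcher created for the capz-e2e-v29gtz namespace: once that workload cluster and its public DNS record are gone, every list/watch request fails at name resolution, and client-go keeps retrying until the watcher is shut down, so the messages are noisy but expected. The hypothetical Go snippet below only reproduces the underlying "no such host" condition with a plain DNS lookup; it is not part of the e2e suite, and the host name is simply copied from the log lines above.

package main

import (
	"errors"
	"fmt"
	"net"
)

func main() {
	// The API server FQDN the event watcher keeps polling; after the workload
	// cluster is deleted this record no longer exists in DNS.
	host := "capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com"

	_, err := net.LookupHost(host)
	var dnsErr *net.DNSError
	if errors.As(err, &dnsErr) && dnsErr.IsNotFound {
		// This is the same condition reported above as "dial tcp: lookup ...: no such host".
		fmt.Printf("lookup %s: no such host\n", host)
		return
	}
	fmt.Println("lookup succeeded or failed for another reason:", err)
}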
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-ea7b7l
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E0427 21:20:42.145151   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0427 21:21:42.134258   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "with a single control plane node and 1 node" ran for 38m38s on Ginkgo node 1 of 3


• Failure [2317.691 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 57 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Wed, 27 Apr 2022 20:48:31 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-pip8m1" for hosting the cluster
Apr 27 20:48:31.277: INFO: starting to create namespace for hosting the "capz-e2e-pip8m1" test spec
2022/04/27 20:48:31 failed trying to get namespace (capz-e2e-pip8m1):namespaces "capz-e2e-pip8m1" not found
INFO: Creating namespace capz-e2e-pip8m1
INFO: Creating event watcher for namespace "capz-e2e-pip8m1"
Apr 27 20:48:31.329: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-pip8m1-win-ha
INFO: Creating the workload cluster with name "capz-e2e-pip8m1-win-ha" using the "windows" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 55 lines ...
STEP: waiting for job default/curl-to-elb-job7ecu24rtovz to be complete
Apr 27 20:59:17.284: INFO: waiting for job default/curl-to-elb-job7ecu24rtovz to be complete
Apr 27 20:59:27.493: INFO: job default/curl-to-elb-job7ecu24rtovz is complete, took 10.209425374s
STEP: connecting directly to the external LB service
Apr 27 20:59:27.493: INFO: starting attempts to connect directly to the external LB service
2022/04/27 20:59:27 [DEBUG] GET http://20.123.126.177
2022/04/27 20:59:57 [ERR] GET http://20.123.126.177 request failed: Get "http://20.123.126.177": dial tcp 20.123.126.177:80: i/o timeout
2022/04/27 20:59:57 [DEBUG] GET http://20.123.126.177: retrying in 1s (4 left)
Apr 27 20:59:58.697: INFO: successfully connected to the external LB service
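The "[DEBUG] GET http://20.123.126.177" / "[ERR] ... retrying in 1s (4 left)" lines come from a retrying HTTP client probing the external load balancer until it responds. Below is a rough, self-contained Go sketch of that pattern using only net/http with a fixed per-attempt timeout and retry budget; it illustrates the behaviour seen in the log, it is not the helper the test framework actually uses, and the address and timings are just copied from the output above.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// getWithRetries probes url until any HTTP response arrives or the retry
// budget runs out; each attempt has its own timeout, mirroring the
// "i/o timeout" followed by "retrying in 1s (4 left)" sequence above.
func getWithRetries(url string, retries int, perAttempt, wait time.Duration) error {
	client := &http.Client{Timeout: perAttempt}
	var lastErr error
	for attempt := 0; attempt <= retries; attempt++ {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			fmt.Printf("connected to %s (status %s)\n", url, resp.Status)
			return nil
		}
		lastErr = err
		if attempt < retries {
			fmt.Printf("GET %s failed: %v; retrying in %s (%d left)\n", url, err, wait, retries-attempt)
			time.Sleep(wait)
		}
	}
	return fmt.Errorf("giving up on %s: %w", url, lastErr)
}

func main() {
	// Address, per-attempt timeout and backoff roughly match the log above.
	_ = getWithRetries("http://20.123.126.177", 4, 30*time.Second, time.Second)
}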
STEP: deleting the test resources
Apr 27 20:59:58.698: INFO: starting to delete external LB service web2i2g56-elb
Apr 27 20:59:58.857: INFO: starting to delete deployment web2i2g56
Apr 27 20:59:58.966: INFO: starting to delete job curl-to-elb-job7ecu24rtovz
... skipping 79 lines ...
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-mh7lq, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-pip8m1-win-ha-control-plane-jrv4b, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-pip8m1-win-ha-control-plane-fqq4j, container kube-apiserver
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-pip8m1-win-ha-control-plane-p9stk, container etcd
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-nwjsh, container kube-flannel
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-f2r58, container coredns
STEP: Got error while iterating over activity logs for resource group capz-e2e-pip8m1-win-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000508156s
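The 30.000508156s figure together with the "context deadline exceeded" error suggests the activity-log collection is capped by a roughly 30-second context deadline, so a slow insights.ActivityLogsClient page fetch is cut off instead of stalling teardown. Below is a minimal sketch of that bounding pattern, assuming context.WithTimeout wraps a paginated call; listPages is a stand-in that just blocks, not the actual Azure SDK iterator.

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// listPages stands in for iterating activity-log pages; here it simply blocks
// until the supplied context expires, to show how the deadline surfaces.
func listPages(ctx context.Context) error {
	select {
	case <-time.After(2 * time.Minute): // pretend the API is slow
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	// Bound the whole fetch so a slow page cannot hold up cluster deletion.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	start := time.Now()
	if err := listPages(ctx); errors.Is(err, context.DeadlineExceeded) {
		fmt.Printf("fetching activity logs took %s: %v\n", time.Since(start), err)
	}
}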
STEP: Dumping all the Cluster API resources in the "capz-e2e-pip8m1" namespace
STEP: Deleting all clusters in the capz-e2e-pip8m1 namespace
STEP: Deleting cluster capz-e2e-pip8m1-win-ha
INFO: Waiting for the Cluster capz-e2e-pip8m1/capz-e2e-pip8m1-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-pip8m1-win-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-pip8m1-win-ha-control-plane-fqq4j, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-wsfjs, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-pip8m1-win-ha-control-plane-p9stk, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-9tplc, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-gx6jz, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-gnj6p, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-mh7lq, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-pip8m1-win-ha-control-plane-p9stk, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-8q79s, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-pip8m1-win-ha-control-plane-p9stk, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-pip8m1-win-ha-control-plane-fqq4j, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-nwjsh, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-n25j2, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-pip8m1-win-ha-control-plane-fqq4j, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-f2r58, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-pip8m1-win-ha-control-plane-fqq4j, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-ss4rs, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-pip8m1-win-ha-control-plane-p9stk, container kube-scheduler: http2: client connection lost
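The "http2: client connection lost" messages are the per-pod log watchers' follow-mode streams being severed as the control-plane and worker VMs behind them are deleted; they are teardown noise rather than test failures. The sketch below shows such a follow-mode stream with client-go; the kubeconfig source, namespace, pod and container names are placeholders taken from the log, and this is not the e2e framework's own log-watcher code.

package main

import (
	"context"
	"fmt"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder inputs; in the e2e suite these come from the workload
	// cluster's generated kubeconfig and the dumped pod list.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	req := clientset.CoreV1().Pods("kube-system").GetLogs("kube-proxy-gx6jz", &corev1.PodLogOptions{
		Container: "kube-proxy",
		Follow:    true, // keep streaming until the stream or connection ends
	})
	stream, err := req.Stream(context.Background())
	if err != nil {
		panic(err)
	}
	defer stream.Close()

	// When the node is deleted mid-stream, the copy ends with an error such as
	// "http2: client connection lost", which the suite reports as a STEP line.
	if _, err := io.Copy(os.Stdout, stream); err != nil {
		fmt.Println("log stream ended:", err)
	}
}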
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-pip8m1
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 35m29s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543

INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Wed, 27 Apr 2022 20:48:31 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-lowyaf" for hosting the cluster
Apr 27 20:48:31.573: INFO: starting to create namespace for hosting the "capz-e2e-lowyaf" test spec
2022/04/27 20:48:31 failed trying to get namespace (capz-e2e-lowyaf):namespaces "capz-e2e-lowyaf" not found
INFO: Creating namespace capz-e2e-lowyaf
INFO: Creating event watcher for namespace "capz-e2e-lowyaf"
Apr 27 20:48:31.627: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-lowyaf-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-lowyaf-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 123 lines ...
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-lowyaf-win-vmss-control-plane-zw9sm, container etcd
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-lowyaf-win-vmss-control-plane-zw9sm, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-lowyaf-win-vmss-control-plane-zw9sm, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-6wgx7, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-lowyaf-win-vmss-control-plane-zw9sm, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-s84zs, container kube-flannel
STEP: Got error while iterating over activity logs for resource group capz-e2e-lowyaf-win-vmss: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000635554s
STEP: Dumping all the Cluster API resources in the "capz-e2e-lowyaf" namespace
STEP: Deleting all clusters in the capz-e2e-lowyaf namespace
STEP: Deleting cluster capz-e2e-lowyaf-win-vmss
INFO: Waiting for the Cluster capz-e2e-lowyaf/capz-e2e-lowyaf-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-lowyaf-win-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-tfz5w, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-m9zgn, container kube-flannel: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-lowyaf
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 53m6s on Ginkgo node 3 of 3

... skipping 3 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating a Windows enabled VMSS cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:542
    with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543
------------------------------
E0427 21:22:29.492508   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 25 near-identical reflector errors (21:23:10 to 21:40:54, same DNS lookup failure) ...
E0427 21:41:35.592481   24223 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-v29gtz/events?resourceVersion=8734": dial tcp: lookup capz-e2e-v29gtz-public-custom-vnet-ce2c7177.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Workload cluster creation Creating an AKS cluster [It] with a single control plane node and 1 node 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/aks.go:216

Ran 9 of 22 Specs in 7104.658 seconds
FAIL! -- 8 Passed | 1 Failed | 0 Pending | 13 Skipped


Ginkgo ran 1 suite in 1h59m48.86283024s
Test Suite Failed
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...