Result: FAILURE
Tests: 0 failed / 3 succeeded
Started: 2021-12-02 18:38
Elapsed: 2h15m
Revision: release-0.5

No Test Failures!


Passed tests: 3
Skipped tests: 13

Error lines from build-log.txt

... skipping 437 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:269

INFO: "With ipv6 worker node" started at Thu, 02 Dec 2021 18:45:15 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-ogct2t" for hosting the cluster
Dec  2 18:45:15.698: INFO: starting to create namespace for hosting the "capz-e2e-ogct2t" test spec
2021/12/02 18:45:15 failed trying to get namespace (capz-e2e-ogct2t):namespaces "capz-e2e-ogct2t" not found
INFO: Creating namespace capz-e2e-ogct2t
INFO: Creating event watcher for namespace "capz-e2e-ogct2t"
Dec  2 18:45:15.782: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-ogct2t-ipv6
INFO: Creating the workload cluster with name "capz-e2e-ogct2t-ipv6" using the "ipv6" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
STEP: Fetching activity logs took 605.587323ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-ogct2t" namespace
STEP: Deleting all clusters in the capz-e2e-ogct2t namespace
STEP: Deleting cluster capz-e2e-ogct2t-ipv6
INFO: Waiting for the Cluster capz-e2e-ogct2t/capz-e2e-ogct2t-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-ogct2t-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-ogct2t-ipv6-control-plane-4vkdt, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-t5jxg, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-ogct2t-ipv6-control-plane-r6dxc, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-ogct2t-ipv6-control-plane-4vkdt, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-ogct2t-ipv6-control-plane-4vkdt, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-dkjfc, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-7pk25, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-ogct2t-ipv6-control-plane-4vkdt, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-sf4jf, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-c84xt, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-7srmn, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-ogct2t-ipv6-control-plane-r6dxc, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-5wmjj, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-8ld69, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-ogct2t-ipv6-control-plane-r6dxc, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-8kqx5, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-ogct2t-ipv6-control-plane-r6dxc, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-ogct2t
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 17m9s on Ginkgo node 3 of 3
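The "Creating the workload cluster ... using the 'ipv6' template" step above corresponds to rendering a clusterctl flavor template. A minimal sketch of the equivalent manual invocation, assuming a clusterctl v1.x-era CLI and a management cluster with the CAPZ provider installed (the environment variables shown are illustrative, not taken from this log):

```shell
# Sketch only: requires a kubeconfig for a CAPI management cluster and
# Azure credentials/environment (e.g. AZURE_LOCATION) already configured.
clusterctl generate cluster capz-e2e-ogct2t-ipv6 \
  --flavor ipv6 \
  --kubernetes-version v1.22.1 \
  --control-plane-machine-count=3 \
  --worker-machine-count=1 > cluster.yaml

kubectl apply -f cluster.yaml
```

The e2e suite drives the same template programmatically via the Cluster API test framework rather than shelling out to clusterctl.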

... skipping 10 lines ...
  with a single control plane node and an AzureMachinePool with 2 nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:315

INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" started at Thu, 02 Dec 2021 19:02:24 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-vv9qb5" for hosting the cluster
Dec  2 19:02:24.979: INFO: starting to create namespace for hosting the "capz-e2e-vv9qb5" test spec
2021/12/02 19:02:24 failed trying to get namespace (capz-e2e-vv9qb5):namespaces "capz-e2e-vv9qb5" not found
INFO: Creating namespace capz-e2e-vv9qb5
INFO: Creating event watcher for namespace "capz-e2e-vv9qb5"
Dec  2 19:02:25.009: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-vv9qb5-vmss
INFO: Creating the workload cluster with name "capz-e2e-vv9qb5-vmss" using the "machine-pool" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 128 lines ...
  With 3 control-plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:203

INFO: "With 3 control-plane nodes and 2 worker nodes" started at Thu, 02 Dec 2021 18:45:15 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-satf3l" for hosting the cluster
Dec  2 18:45:15.684: INFO: starting to create namespace for hosting the "capz-e2e-satf3l" test spec
2021/12/02 18:45:15 failed trying to get namespace (capz-e2e-satf3l):namespaces "capz-e2e-satf3l" not found
INFO: Creating namespace capz-e2e-satf3l
INFO: Creating event watcher for namespace "capz-e2e-satf3l"
Dec  2 18:45:15.752: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-satf3l-ha
INFO: Creating the workload cluster with name "capz-e2e-satf3l-ha" using the "(default)" template (Kubernetes v1.22.1, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 67 lines ...
Dec  2 18:55:40.403: INFO: starting to delete external LB service webmgga9p-elb
Dec  2 18:55:40.549: INFO: starting to delete deployment webmgga9p
Dec  2 18:55:40.661: INFO: starting to delete job curl-to-elb-jobmfmwg2x3gcn
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Dec  2 18:55:40.815: INFO: starting to create dev deployment namespace
2021/12/02 18:55:40 failed trying to get namespace (development):namespaces "development" not found
2021/12/02 18:55:40 namespace development does not exist, creating...
STEP: Creating production namespace
Dec  2 18:55:41.035: INFO: starting to create prod deployment namespace
2021/12/02 18:55:41 failed trying to get namespace (production):namespaces "production" not found
2021/12/02 18:55:41 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
Dec  2 18:55:41.254: INFO: starting to create frontend-prod deployments
Dec  2 18:55:41.362: INFO: starting to create frontend-dev deployments
Dec  2 18:55:41.469: INFO: starting to create backend deployments
Dec  2 18:55:41.576: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Dec  2 18:56:08.107: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.69.131 port 80: Connection timed out

STEP: Cleaning up after ourselves
Dec  2 18:58:18.358: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
STEP: Applying a network policy to deny egress access in development namespace
Dec  2 18:58:18.733: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.69.131 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.69.131 port 80: Connection timed out

STEP: Cleaning up after ourselves
Dec  2 19:02:39.953: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Dec  2 19:02:40.368: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.243.196 port 80: Connection timed out

STEP: Cleaning up after ourselves
Dec  2 19:04:53.074: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
Dec  2 19:04:53.454: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.243.195 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.243.196 port 80: Connection timed out

STEP: Cleaning up after ourselves
Dec  2 19:09:17.267: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Dec  2 19:09:17.645: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.69.131 port 80: Connection timed out

STEP: Cleaning up after ourselves
Dec  2 19:11:28.886: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
Dec  2 19:11:29.294: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.69.131 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
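The policy steps above (e.g. development/backend-deny-ingress at 18:56:08, after which curl to the backend pod times out) follow the standard NetworkPolicy pattern: selecting pods and listing a policyType with no rules denies all traffic of that type. A minimal sketch of such a manifest, using the labels reported in the log; the exact manifest the test applies is an assumption:

```yaml
# Hypothetical reconstruction of development/backend-deny-ingress.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-deny-ingress
  namespace: development
spec:
  podSelector:
    matchLabels:
      app: webapp
      role: backend
  policyTypes:
    - Ingress   # Ingress listed with no ingress rules => all ingress to matched pods is denied
```

This explains the subsequent `curl: (7) ... Connection timed out` lines: with Calico enforcing the policy, connections to the backend pod IP are silently dropped.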
STEP: Dumping logs from the "capz-e2e-satf3l-ha" workload cluster
STEP: Dumping workload cluster capz-e2e-satf3l/capz-e2e-satf3l-ha logs
Dec  2 19:13:40.856: INFO: INFO: Collecting logs for node capz-e2e-satf3l-ha-control-plane-qpjt9 in cluster capz-e2e-satf3l-ha in namespace capz-e2e-satf3l

Dec  2 19:13:52.644: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-satf3l-ha-control-plane-qpjt9
... skipping 39 lines ...
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-satf3l-ha-control-plane-qpjt9, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-node-47dkq, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-hqccv, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-r8fxx, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-proxy-zjbqj, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-satf3l-ha-control-plane-prn5z, container kube-apiserver
STEP: Got error while iterating over activity logs for resource group capz-e2e-satf3l-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001222134s
STEP: Dumping all the Cluster API resources in the "capz-e2e-satf3l" namespace
STEP: Deleting all clusters in the capz-e2e-satf3l namespace
STEP: Deleting cluster capz-e2e-satf3l-ha
INFO: Waiting for the Cluster capz-e2e-satf3l/capz-e2e-satf3l-ha to be deleted
STEP: Waiting for cluster capz-e2e-satf3l-ha to be deleted
... skipping 14 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:377

INFO: "with a single control plane node and 1 node" started at Thu, 02 Dec 2021 19:22:40 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-v5iqer" for hosting the cluster
Dec  2 19:22:40.710: INFO: starting to create namespace for hosting the "capz-e2e-v5iqer" test spec
2021/12/02 19:22:40 failed trying to get namespace (capz-e2e-v5iqer):namespaces "capz-e2e-v5iqer" not found
INFO: Creating namespace capz-e2e-v5iqer
INFO: Creating event watcher for namespace "capz-e2e-v5iqer"
Dec  2 19:22:40.749: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-v5iqer-gpu
INFO: Creating the workload cluster with name "capz-e2e-v5iqer-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 58 lines ...
STEP: Fetching activity logs took 572.810514ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-v5iqer" namespace
STEP: Deleting all clusters in the capz-e2e-v5iqer namespace
STEP: Deleting cluster capz-e2e-v5iqer-gpu
INFO: Waiting for the Cluster capz-e2e-v5iqer/capz-e2e-v5iqer-gpu to be deleted
STEP: Waiting for cluster capz-e2e-v5iqer-gpu to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-t28ln, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-sl8tq, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-v5iqer
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and 1 node" ran for 22m44s on Ginkgo node 3 of 3

... skipping 10 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:141

INFO: "Creates a public management cluster in the same vnet" started at Thu, 02 Dec 2021 18:45:15 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-1ltt9u" for hosting the cluster
Dec  2 18:45:15.679: INFO: starting to create namespace for hosting the "capz-e2e-1ltt9u" test spec
2021/12/02 18:45:15 failed trying to get namespace (capz-e2e-1ltt9u):namespaces "capz-e2e-1ltt9u" not found
INFO: Creating namespace capz-e2e-1ltt9u
INFO: Creating event watcher for namespace "capz-e2e-1ltt9u"
Dec  2 18:45:15.723: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-1ltt9u-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-kw7f6, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-ww97f, container calico-node
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-1ltt9u-public-custom-vnet-control-plane-gkcg5, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-spcdj, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-proxy-wdvww, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-4856k, container calico-node
STEP: Got error while iterating over activity logs for resource group capz-e2e-1ltt9u-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000282084s
STEP: Dumping all the Cluster API resources in the "capz-e2e-1ltt9u" namespace
STEP: Deleting all clusters in the capz-e2e-1ltt9u namespace
STEP: Deleting cluster capz-e2e-1ltt9u-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-1ltt9u/capz-e2e-1ltt9u-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-1ltt9u-public-custom-vnet to be deleted
W1202 19:39:33.866794   24243 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I1202 19:40:05.440880   24243 trace.go:205] Trace[894358155]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (02-Dec-2021 19:39:35.439) (total time: 30001ms):
Trace[894358155]: [30.001663981s] [30.001663981s] END
E1202 19:40:05.440971   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp 20.93.52.148:6443: i/o timeout
I1202 19:40:38.132416   24243 trace.go:205] Trace[1505086334]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (02-Dec-2021 19:40:08.131) (total time: 30001ms):
Trace[1505086334]: [30.001262211s] [30.001262211s] END
E1202 19:40:38.132520   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp 20.93.52.148:6443: i/o timeout
I1202 19:41:12.946220   24243 trace.go:205] Trace[1526192561]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (02-Dec-2021 19:40:42.944) (total time: 30001ms):
Trace[1526192561]: [30.001193903s] [30.001193903s] END
E1202 19:41:12.946293   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp 20.93.52.148:6443: i/o timeout
I1202 19:41:51.679524   24243 trace.go:205] Trace[972378880]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (02-Dec-2021 19:41:21.678) (total time: 30001ms):
Trace[972378880]: [30.001351487s] [30.001351487s] END
E1202 19:41:51.679611   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp 20.93.52.148:6443: i/o timeout
I1202 19:42:39.653858   24243 trace.go:205] Trace[582500945]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (02-Dec-2021 19:42:09.653) (total time: 30000ms):
Trace[582500945]: [30.000770253s] [30.000770253s] END
E1202 19:42:39.653956   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp 20.93.52.148:6443: i/o timeout
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-1ltt9u
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Dec  2 19:42:54.931: INFO: deleting an existing virtual network "custom-vnet"
Dec  2 19:43:06.142: INFO: deleting an existing route table "node-routetable"
E1202 19:43:09.853787   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Dec  2 19:43:16.683: INFO: deleting an existing network security group "node-nsg"
Dec  2 19:43:27.213: INFO: deleting an existing network security group "control-plane-nsg"
Dec  2 19:43:39.891: INFO: verifying the existing resource group "capz-e2e-1ltt9u-public-custom-vnet" is empty
Dec  2 19:43:40.311: INFO: deleting the existing resource group "capz-e2e-1ltt9u-public-custom-vnet"
E1202 19:43:59.602489   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1202 19:44:44.178745   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1202 19:45:30.143754   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1202 19:46:21.756671   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 1h1m24s on Ginkgo node 1 of 3


• [SLOW TEST:3684.172 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a 1 control plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:419

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Thu, 02 Dec 2021 19:31:19 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-3kc6j7" for hosting the cluster
Dec  2 19:31:19.762: INFO: starting to create namespace for hosting the "capz-e2e-3kc6j7" test spec
2021/12/02 19:31:19 failed trying to get namespace (capz-e2e-3kc6j7):namespaces "capz-e2e-3kc6j7" not found
INFO: Creating namespace capz-e2e-3kc6j7
INFO: Creating event watcher for namespace "capz-e2e-3kc6j7"
Dec  2 19:31:19.815: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-3kc6j7-oot
INFO: Creating the workload cluster with name "capz-e2e-3kc6j7-oot" using the "external-cloud-provider" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 98 lines ...
STEP: Fetching activity logs took 626.648827ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-3kc6j7" namespace
STEP: Deleting all clusters in the capz-e2e-3kc6j7 namespace
STEP: Deleting cluster capz-e2e-3kc6j7-oot
INFO: Waiting for the Cluster capz-e2e-3kc6j7/capz-e2e-3kc6j7-oot to be deleted
STEP: Waiting for cluster capz-e2e-3kc6j7-oot to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-3kc6j7-oot-control-plane-kkptl, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-3kc6j7-oot-control-plane-kkptl, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-dr9gd, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-3kc6j7-oot-control-plane-kkptl, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-3kc6j7-oot-control-plane-kkptl, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-jwwvh, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-4trc2, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-controller-manager, container cloud-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-bqhv2, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-kglnl, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-qd8s4, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-3kc6j7
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 20m51s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543

INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Thu, 02 Dec 2021 19:52:11 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-aqxx6o" for hosting the cluster
Dec  2 19:52:11.239: INFO: starting to create namespace for hosting the "capz-e2e-aqxx6o" test spec
2021/12/02 19:52:11 failed trying to get namespace (capz-e2e-aqxx6o):namespaces "capz-e2e-aqxx6o" not found
INFO: Creating namespace capz-e2e-aqxx6o
INFO: Creating event watcher for namespace "capz-e2e-aqxx6o"
Dec  2 19:52:11.279: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-aqxx6o-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-aqxx6o-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 123 lines ...
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-45628, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-fcd7q, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-aqxx6o-win-vmss-control-plane-5p6b9, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-7lf5m, container coredns
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-aqxx6o-win-vmss-control-plane-5p6b9, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-windows-amd64-wnnw4, container kube-flannel
STEP: Got error while iterating over activity logs for resource group capz-e2e-aqxx6o-win-vmss: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001063773s
STEP: Dumping all the Cluster API resources in the "capz-e2e-aqxx6o" namespace
STEP: Deleting all clusters in the capz-e2e-aqxx6o namespace
STEP: Deleting cluster capz-e2e-aqxx6o-win-vmss
INFO: Waiting for the Cluster capz-e2e-aqxx6o/capz-e2e-aqxx6o-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-aqxx6o-win-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-fcd7q, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-46fbr, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-86xk8, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-wnnw4, container kube-flannel: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-aqxx6o
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 32m25s on Ginkgo node 2 of 3

... skipping 12 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Thu, 02 Dec 2021 19:46:39 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-e33zcz" for hosting the cluster
Dec  2 19:46:39.854: INFO: starting to create namespace for hosting the "capz-e2e-e33zcz" test spec
2021/12/02 19:46:39 failed trying to get namespace (capz-e2e-e33zcz):namespaces "capz-e2e-e33zcz" not found
INFO: Creating namespace capz-e2e-e33zcz
INFO: Creating event watcher for namespace "capz-e2e-e33zcz"
Dec  2 19:46:39.904: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-e33zcz-win-ha
INFO: Creating the workload cluster with name "capz-e2e-e33zcz-win-ha" using the "windows" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 12 lines ...
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created
clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-e33zcz-win-ha-flannel created
configmap/cni-capz-e2e-e33zcz-win-ha-flannel created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1202 19:46:59.251929   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1202 19:47:51.262551   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by capz-e2e-e33zcz/capz-e2e-e33zcz-win-ha-control-plane to be provisioned
STEP: Waiting for one control plane node to exist
E1202 19:48:22.821404   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1202 19:48:57.141312   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1202 19:49:27.601594   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1202 19:50:26.800646   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1202 19:51:22.876066   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be ready
INFO: Waiting for the remaining control plane machines managed by capz-e2e-e33zcz/capz-e2e-e33zcz-win-ha-control-plane to be provisioned
STEP: Waiting for all control plane nodes to exist
E1202 19:51:55.009163   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1202 19:52:31.952156   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1202 19:53:21.682103   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1202 19:53:52.587098   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1202 19:54:36.670323   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1202 19:55:30.595690   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1202 19:56:09.922065   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane capz-e2e-e33zcz/capz-e2e-e33zcz-win-ha-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Waiting for the workload nodes to exist
INFO: Waiting for the machine pools to be provisioned
... skipping 3 lines ...
Dec  2 19:56:22.681: INFO: starting to wait for deployment to become available
Dec  2 19:56:43.018: INFO: Deployment default/web5pr1hr is now available, took 20.337615243s
STEP: creating an internal Load Balancer service
Dec  2 19:56:43.018: INFO: starting to create an internal Load Balancer service
STEP: waiting for service default/web5pr1hr-ilb to be available
Dec  2 19:56:43.183: INFO: waiting for service default/web5pr1hr-ilb to be available
E1202 19:56:55.188696   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1202 19:57:30.059891   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Dec  2 19:58:04.137: INFO: service default/web5pr1hr-ilb is available, took 1m20.954355078s
STEP: connecting to the internal LB service from a curl pod
Dec  2 19:58:04.242: INFO: starting to create a curl to ilb job
STEP: waiting for job default/curl-to-ilb-job1qwoo to be complete
Dec  2 19:58:04.505: INFO: waiting for job default/curl-to-ilb-job1qwoo to be complete
E1202 19:58:07.585926   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Dec  2 19:58:14.715: INFO: job default/curl-to-ilb-job1qwoo is complete, took 10.209985059s
STEP: deleting the ilb test resources
Dec  2 19:58:14.715: INFO: deleting the ilb service: web5pr1hr-ilb
Dec  2 19:58:14.895: INFO: deleting the ilb job: curl-to-ilb-job1qwoo
STEP: creating an external Load Balancer service
Dec  2 19:58:15.005: INFO: starting to create an external Load Balancer service
... skipping 5 lines ...
STEP: waiting for job default/curl-to-elb-jobq8zp4nrahm9 to be complete
Dec  2 19:58:35.748: INFO: waiting for job default/curl-to-elb-jobq8zp4nrahm9 to be complete
Dec  2 19:58:45.957: INFO: job default/curl-to-elb-jobq8zp4nrahm9 is complete, took 10.209300594s
STEP: connecting directly to the external LB service
Dec  2 19:58:45.957: INFO: starting attempts to connect directly to the external LB service
2021/12/02 19:58:45 [DEBUG] GET http://20.93.53.154
E1202 19:58:50.234003   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
2021/12/02 19:59:15 [ERR] GET http://20.93.53.154 request failed: Get "http://20.93.53.154": dial tcp 20.93.53.154:80: i/o timeout
2021/12/02 19:59:15 [DEBUG] GET http://20.93.53.154: retrying in 1s (4 left)
E1202 19:59:22.762824   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Dec  2 19:59:32.467: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Dec  2 19:59:32.467: INFO: starting to delete external LB service web5pr1hr-elb
Dec  2 19:59:32.630: INFO: starting to delete deployment web5pr1hr
Dec  2 19:59:32.740: INFO: starting to delete job curl-to-elb-jobq8zp4nrahm9
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web-windowsc1zwu8 to be available
Dec  2 19:59:33.109: INFO: starting to wait for deployment to become available
E1202 19:59:55.989796   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1202 20:00:31.874347   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1202 20:01:24.533793   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1202 20:02:02.916103   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1202 20:02:45.047606   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Dec  2 20:03:15.706: INFO: Deployment default/web-windowsc1zwu8 is now available, took 3m42.597917665s
STEP: creating an internal Load Balancer service
Dec  2 20:03:15.706: INFO: starting to create an internal Load Balancer service
STEP: waiting for service default/web-windowsc1zwu8-ilb to be available
Dec  2 20:03:15.863: INFO: waiting for service default/web-windowsc1zwu8-ilb to be available
E1202 20:03:35.639046   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Dec  2 20:03:36.196: INFO: service default/web-windowsc1zwu8-ilb is available, took 20.332668059s
STEP: connecting to the internal LB service from a curl pod
Dec  2 20:03:36.300: INFO: starting to create a curl to ilb job
STEP: waiting for job default/curl-to-ilb-jobk4gzt to be complete
Dec  2 20:03:36.410: INFO: waiting for job default/curl-to-ilb-jobk4gzt to be complete
Dec  2 20:03:46.619: INFO: job default/curl-to-ilb-jobk4gzt is complete, took 10.208777949s
STEP: deleting the ilb test resources
Dec  2 20:03:46.619: INFO: deleting the ilb service: web-windowsc1zwu8-ilb
Dec  2 20:03:46.774: INFO: deleting the ilb job: curl-to-ilb-jobk4gzt
STEP: creating an external Load Balancer service
Dec  2 20:03:46.888: INFO: starting to create an external Load Balancer service
STEP: waiting for service default/web-windowsc1zwu8-elb to be available
Dec  2 20:03:47.044: INFO: waiting for service default/web-windowsc1zwu8-elb to be available
E1202 20:04:17.056775   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Dec  2 20:04:17.465: INFO: service default/web-windowsc1zwu8-elb is available, took 30.421398202s
STEP: connecting to the external LB service from a curl pod
Dec  2 20:04:17.576: INFO: starting to create curl-to-elb job
STEP: waiting for job default/curl-to-elb-jobi3afa79gdrt to be complete
Dec  2 20:04:17.689: INFO: waiting for job default/curl-to-elb-jobi3afa79gdrt to be complete
Dec  2 20:04:27.899: INFO: job default/curl-to-elb-jobi3afa79gdrt is complete, took 10.209189404s
STEP: connecting directly to the external LB service
Dec  2 20:04:27.899: INFO: starting attempts to connect directly to the external LB service
2021/12/02 20:04:27 [DEBUG] GET http://20.105.41.73
E1202 20:04:55.869802   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
2021/12/02 20:04:57 [ERR] GET http://20.105.41.73 request failed: Get "http://20.105.41.73": dial tcp 20.105.41.73:80: i/o timeout
2021/12/02 20:04:57 [DEBUG] GET http://20.105.41.73: retrying in 1s (4 left)
Dec  2 20:04:59.105: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Dec  2 20:04:59.105: INFO: starting to delete external LB service web-windowsc1zwu8-elb
Dec  2 20:04:59.259: INFO: starting to delete deployment web-windowsc1zwu8
Dec  2 20:04:59.369: INFO: starting to delete job curl-to-elb-jobi3afa79gdrt
... skipping 6 lines ...
Dec  2 20:05:14.407: INFO: INFO: Collecting logs for node capz-e2e-e33zcz-win-ha-control-plane-9mf5w in cluster capz-e2e-e33zcz-win-ha in namespace capz-e2e-e33zcz

Dec  2 20:05:24.407: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-e33zcz-win-ha-control-plane-9mf5w

Dec  2 20:05:24.885: INFO: INFO: Collecting logs for node capz-e2e-e33zcz-win-ha-control-plane-r68w5 in cluster capz-e2e-e33zcz-win-ha in namespace capz-e2e-e33zcz

E1202 20:05:27.714128   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Dec  2 20:05:36.566: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-e33zcz-win-ha-control-plane-r68w5

Dec  2 20:05:37.068: INFO: INFO: Collecting logs for node capz-e2e-e33zcz-win-ha-md-0-2tc55 in cluster capz-e2e-e33zcz-win-ha in namespace capz-e2e-e33zcz

Dec  2 20:05:47.906: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-e33zcz-win-ha-md-0-2tc55

Dec  2 20:05:48.301: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster capz-e2e-e33zcz-win-ha in namespace capz-e2e-e33zcz

E1202 20:06:13.340185   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Dec  2 20:06:30.913: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-e33zcz-win-ha-md-win-xvk7h

STEP: Dumping workload cluster capz-e2e-e33zcz/capz-e2e-e33zcz-win-ha kube-system pod logs
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-e33zcz-win-ha-control-plane-9mf5w, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-lwq92, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-s6tg7, container kube-flannel
... skipping 17 lines ...
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-windows-amd64-b7n54, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-e33zcz-win-ha-control-plane-9mf5w, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-e33zcz-win-ha-control-plane-fwb86, container kube-scheduler
STEP: Fetching kube-system pod logs took 834.021233ms
STEP: Dumping workload cluster capz-e2e-e33zcz/capz-e2e-e33zcz-win-ha Azure activity log
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-e33zcz-win-ha-control-plane-r68w5, container kube-scheduler
STEP: Got error while iterating over activity logs for resource group capz-e2e-e33zcz-win-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000444546s
STEP: Dumping all the Cluster API resources in the "capz-e2e-e33zcz" namespace
STEP: Deleting all clusters in the capz-e2e-e33zcz namespace
STEP: Deleting cluster capz-e2e-e33zcz-win-ha
INFO: Waiting for the Cluster capz-e2e-e33zcz/capz-e2e-e33zcz-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-e33zcz-win-ha to be deleted
E1202 20:07:12.940938   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1202 20:07:53.340097   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1202 20:08:47.513217   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1202 20:09:17.766131   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1202 20:09:52.559154   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1202 20:10:26.364997   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1202 20:11:01.563577   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1202 20:11:53.732971   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1202 20:12:29.045897   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1202 20:13:02.918302   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1202 20:13:58.777473   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1202 20:14:33.281284   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1202 20:15:22.239860   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1202 20:16:06.196321   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1202 20:16:41.749639   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1202 20:17:41.684461   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1202 20:18:15.678880   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-e33zcz-win-ha-control-plane-fwb86, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-lwq92, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-e33zcz-win-ha-control-plane-9mf5w, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-e33zcz-win-ha-control-plane-9mf5w, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-e33zcz-win-ha-control-plane-fwb86, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-qqmc5, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-e33zcz-win-ha-control-plane-9mf5w, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-e33zcz-win-ha-control-plane-9mf5w, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-e33zcz-win-ha-control-plane-fwb86, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-9pvj8, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-xj5zz, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-7vlkc, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-e33zcz-win-ha-control-plane-fwb86, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-6xslm, container kube-proxy: http2: client connection lost
E1202 20:19:03.204794   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
... skipping 9 lines ...
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-e33zcz
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1202 20:26:56.720799   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1202 20:27:55.216310   24243 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1ltt9u/events?resourceVersion=10006": dial tcp: lookup capz-e2e-1ltt9u-public-custom-vnet-312a8d69.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 41m33s on Ginkgo node 1 of 3


• [SLOW TEST:2493.264 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating a Windows Enabled cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:494
    With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496
------------------------------
{"component":"entrypoint","file":"prow/entrypoint/run.go:165","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","severity":"error","time":"2021-12-02T20:38:48Z"}
++ early_exit_handler
++ '[' -n 161 ']'
++ kill -TERM 161
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 12 lines ...
Cleaning up after docker
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Stopping Docker: dockerProgram process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.
All sensitive variables are redacted
{"component":"entrypoint","file":"prow/entrypoint/run.go:255","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15m0s grace period","severity":"error","time":"2021-12-02T20:53:48Z"}
{"component":"entrypoint","error":"os: process already finished","file":"prow/entrypoint/run.go:257","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2021-12-02T20:53:48Z"}