Result   | FAILURE
Tests    | 1 failed / 7 succeeded
Started  |
Elapsed  | 1h42m
Revision | release-0.5
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\scluster\sthat\suses\sthe\sexternal\scloud\sprovider\swith\sa\s1\scontrol\splane\snodes\sand\s2\sworker\snodes$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:419
Timed out after 1200.002s.
Expected
    <bool>: false
to be true
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.8-0.20220215165403-0234afe87ffe/framework/controlplane_helpers.go:145
from junit.e2e_suite.3.xml
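The assertion above is the standard Gomega polling-timeout message: a condition function kept returning false until the 1200s (20 minute) budget ran out while waiting for the first control plane machine of capz-e2e-22ppl5-oot-control-plane to appear. Below is a minimal, hedged sketch of that kind of wait, assuming Gomega's `Eventually` and a controller-runtime client; the function name, label keys, and intervals are illustrative assumptions, not the actual code at controlplane_helpers.go:145.

```go
// Hedged sketch, not the actual cluster-api framework code: it only illustrates
// the Gomega "Eventually(...).Should(BeTrue())" pattern that produces
// "Timed out after 1200.002s. Expected <bool>: false to be true".
// The function name, label keys, and intervals below are assumptions.
package e2e

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// waitForFirstControlPlaneMachine polls the management cluster until at least
// one control plane Machine exists for the given workload cluster. If none
// appears before the timeout, Gomega fails with the message quoted above.
func waitForFirstControlPlaneMachine(ctx context.Context, mgmt client.Client, namespace, clusterName string) {
	Eventually(func() bool {
		machines := &clusterv1.MachineList{}
		if err := mgmt.List(ctx, machines,
			client.InNamespace(namespace),
			client.MatchingLabels{
				"cluster.x-k8s.io/cluster-name":  clusterName,
				"cluster.x-k8s.io/control-plane": "", // convention: control plane Machines carry this label
			}); err != nil {
			return false // transient list errors just mean "not ready yet"
		}
		return len(machines.Items) > 0
	}, 20*time.Minute, 10*time.Second).Should(BeTrue())
}
```

Note that the message itself only records that the boolean never flipped to true; the reason the machine never showed up has to be read out of the controller and node boot logs collected in the excerpt below.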
INFO: "with a 1 control plane nodes and 2 worker nodes" started at Sat, 14 May 2022 20:25:52 UTC on Ginkgo node 3 of 3 �[1mSTEP�[0m: Creating namespace "capz-e2e-22ppl5" for hosting the cluster May 14 20:25:52.790: INFO: starting to create namespace for hosting the "capz-e2e-22ppl5" test spec 2022/05/14 20:25:52 failed trying to get namespace (capz-e2e-22ppl5):namespaces "capz-e2e-22ppl5" not found INFO: Creating namespace capz-e2e-22ppl5 INFO: Creating event watcher for namespace "capz-e2e-22ppl5" May 14 20:25:52.827: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-22ppl5-oot INFO: Creating the workload cluster with name "capz-e2e-22ppl5-oot" using the "external-cloud-provider" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster capz-e2e-22ppl5-oot --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 1 --worker-machine-count 2 --flavor external-cloud-provider INFO: Applying the cluster template yaml to the cluster cluster.cluster.x-k8s.io/capz-e2e-22ppl5-oot created azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-22ppl5-oot created kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-22ppl5-oot-control-plane created azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-22ppl5-oot-control-plane created machinedeployment.cluster.x-k8s.io/capz-e2e-22ppl5-oot-md-0 created azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-22ppl5-oot-md-0 created kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capz-e2e-22ppl5-oot-md-0 created azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created clusterresourceset.addons.cluster.x-k8s.io/crs-ccm created clusterresourceset.addons.cluster.x-k8s.io/crs-node-manager created configmap/cloud-controller-manager-addon created configmap/cloud-node-manager-addon created clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-22ppl5-oot-calico created configmap/cni-capz-e2e-22ppl5-oot-calico created INFO: Waiting for the cluster infrastructure to be provisioned �[1mSTEP�[0m: Waiting for cluster to enter the provisioned phase INFO: Waiting for control plane to be initialized INFO: Waiting for the first control plane machine managed by capz-e2e-22ppl5/capz-e2e-22ppl5-oot-control-plane to be provisioned �[1mSTEP�[0m: Waiting for one control plane node to exist �[1mSTEP�[0m: Dumping logs from the "capz-e2e-22ppl5-oot" workload cluster �[1mSTEP�[0m: Dumping workload cluster capz-e2e-22ppl5/capz-e2e-22ppl5-oot logs May 14 20:46:54.195: INFO: INFO: Collecting logs for node capz-e2e-22ppl5-oot-control-plane-zjsx8 in cluster capz-e2e-22ppl5-oot in namespace capz-e2e-22ppl5 May 14 20:47:03.508: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-22ppl5-oot-control-plane-zjsx8 May 14 20:47:04.292: INFO: INFO: Collecting logs for node capz-e2e-22ppl5-oot-md-0-bq6sp in cluster capz-e2e-22ppl5-oot in namespace capz-e2e-22ppl5 May 14 20:47:07.369: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-22ppl5-oot-md-0-bq6sp �[1mSTEP�[0m: Redacting sensitive information from logs
capz-e2e Workload cluster creation Creating a VMSS cluster with a single control plane node and an AzureMachinePool with 2 nodes
capz-e2e Workload cluster creation Creating a Windows Enabled cluster With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
capz-e2e Workload cluster creation Creating a Windows enabled VMSS cluster with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
capz-e2e Workload cluster creation Creating a ipv6 control-plane cluster With ipv6 worker node
capz-e2e Workload cluster creation Creating a private cluster Creates a public management cluster in the same vnet
capz-e2e Workload cluster creation Creating an AKS cluster with a single control plane node and 1 node
capz-e2e Workload cluster creation With 3 control-plane nodes and 2 worker nodes
capz-e2e Conformance Tests conformance-tests
capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec in a HA cluster Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd
capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec in a HA cluster using scale in rollout Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd
capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec in a single control plane cluster Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd
capz-e2e Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capz-e2e Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e Running the Cluster API E2E tests Should adopt up-to-date control plane Machines without modification Should adopt up-to-date control plane Machines without modification
capz-e2e Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e Workload cluster creation Creating a GPU-enabled cluster with a single control plane node and 1 node
... skipping 437 lines ... [1mWith ipv6 worker node[0m [37m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:269[0m INFO: "With ipv6 worker node" started at Sat, 14 May 2022 19:50:25 UTC on Ginkgo node 3 of 3 [1mSTEP[0m: Creating namespace "capz-e2e-yh5yg7" for hosting the cluster May 14 19:50:25.950: INFO: starting to create namespace for hosting the "capz-e2e-yh5yg7" test spec 2022/05/14 19:50:25 failed trying to get namespace (capz-e2e-yh5yg7):namespaces "capz-e2e-yh5yg7" not found INFO: Creating namespace capz-e2e-yh5yg7 INFO: Creating event watcher for namespace "capz-e2e-yh5yg7" May 14 19:50:26.025: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-yh5yg7-ipv6 INFO: Creating the workload cluster with name "capz-e2e-yh5yg7-ipv6" using the "ipv6" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml ... skipping 93 lines ... [1mSTEP[0m: Fetching activity logs took 537.701101ms [1mSTEP[0m: Dumping all the Cluster API resources in the "capz-e2e-yh5yg7" namespace [1mSTEP[0m: Deleting all clusters in the capz-e2e-yh5yg7 namespace [1mSTEP[0m: Deleting cluster capz-e2e-yh5yg7-ipv6 INFO: Waiting for the Cluster capz-e2e-yh5yg7/capz-e2e-yh5yg7-ipv6 to be deleted [1mSTEP[0m: Waiting for cluster capz-e2e-yh5yg7-ipv6 to be deleted [1mSTEP[0m: Got error while streaming logs for pod kube-system/etcd-capz-e2e-yh5yg7-ipv6-control-plane-2nlcw, container etcd: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/coredns-78fcd69978-9b52m, container coredns: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-node-4xj8f, container calico-node: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-yh5yg7-ipv6-control-plane-dd68g, container kube-scheduler: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-flf6r, container calico-kube-controllers: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-node-l7mnb, container calico-node: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-proxy-jcvjs, container kube-proxy: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-yh5yg7-ipv6-control-plane-2nlcw, container kube-apiserver: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-yh5yg7-ipv6-control-plane-btmnx, container kube-scheduler: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-yh5yg7-ipv6-control-plane-dd68g, container kube-apiserver: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-yh5yg7-ipv6-control-plane-btmnx, container kube-controller-manager: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/etcd-capz-e2e-yh5yg7-ipv6-control-plane-dd68g, container etcd: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-proxy-qz54v, container kube-proxy: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod 
kube-system/etcd-capz-e2e-yh5yg7-ipv6-control-plane-btmnx, container etcd: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/coredns-78fcd69978-689ns, container coredns: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-yh5yg7-ipv6-control-plane-dd68g, container kube-controller-manager: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-yh5yg7-ipv6-control-plane-2nlcw, container kube-controller-manager: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-node-g4rm2, container calico-node: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-yh5yg7-ipv6-control-plane-btmnx, container kube-apiserver: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-proxy-22nvs, container kube-proxy: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-yh5yg7-ipv6-control-plane-2nlcw, container kube-scheduler: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-proxy-82lx5, container kube-proxy: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-node-m9vlw, container calico-node: http2: client connection lost [1mSTEP[0m: Deleting namespace used for hosting the "create-workload-cluster" test spec INFO: Deleting namespace capz-e2e-yh5yg7 [1mSTEP[0m: Checking if any resources are left over in Azure for spec "create-workload-cluster" [1mSTEP[0m: Redacting sensitive information from logs INFO: "With ipv6 worker node" ran for 16m22s on Ginkgo node 3 of 3 ... skipping 10 lines ... [1mwith a single control plane node and an AzureMachinePool with 2 nodes[0m [37m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:315[0m INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" started at Sat, 14 May 2022 20:06:48 UTC on Ginkgo node 3 of 3 [1mSTEP[0m: Creating namespace "capz-e2e-s49l83" for hosting the cluster May 14 20:06:48.014: INFO: starting to create namespace for hosting the "capz-e2e-s49l83" test spec 2022/05/14 20:06:48 failed trying to get namespace (capz-e2e-s49l83):namespaces "capz-e2e-s49l83" not found INFO: Creating namespace capz-e2e-s49l83 INFO: Creating event watcher for namespace "capz-e2e-s49l83" May 14 20:06:48.056: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-s49l83-vmss INFO: Creating the workload cluster with name "capz-e2e-s49l83-vmss" using the "machine-pool" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines) INFO: Getting the cluster template yaml ... skipping 130 lines ... 
[1mWith 3 control-plane nodes and 2 worker nodes[0m [37m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:203[0m INFO: "With 3 control-plane nodes and 2 worker nodes" started at Sat, 14 May 2022 19:50:25 UTC on Ginkgo node 2 of 3 [1mSTEP[0m: Creating namespace "capz-e2e-xv2je0" for hosting the cluster May 14 19:50:25.918: INFO: starting to create namespace for hosting the "capz-e2e-xv2je0" test spec 2022/05/14 19:50:25 failed trying to get namespace (capz-e2e-xv2je0):namespaces "capz-e2e-xv2je0" not found INFO: Creating namespace capz-e2e-xv2je0 INFO: Creating event watcher for namespace "capz-e2e-xv2je0" May 14 19:50:25.999: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-xv2je0-ha INFO: Creating the workload cluster with name "capz-e2e-xv2je0-ha" using the "(default)" template (Kubernetes v1.22.1, 3 control-plane machines, 2 worker machines) INFO: Getting the cluster template yaml ... skipping 59 lines ... [1mSTEP[0m: waiting for job default/curl-to-elb-jobqkzkw1r1dg2 to be complete May 14 20:00:21.528: INFO: waiting for job default/curl-to-elb-jobqkzkw1r1dg2 to be complete May 14 20:00:31.600: INFO: job default/curl-to-elb-jobqkzkw1r1dg2 is complete, took 10.071371785s [1mSTEP[0m: connecting directly to the external LB service May 14 20:00:31.600: INFO: starting attempts to connect directly to the external LB service 2022/05/14 20:00:31 [DEBUG] GET http://20.88.124.37 2022/05/14 20:01:01 [ERR] GET http://20.88.124.37 request failed: Get "http://20.88.124.37": dial tcp 20.88.124.37:80: i/o timeout 2022/05/14 20:01:01 [DEBUG] GET http://20.88.124.37: retrying in 1s (4 left) May 14 20:01:18.013: INFO: successfully connected to the external LB service [1mSTEP[0m: deleting the test resources May 14 20:01:18.013: INFO: starting to delete external LB service web77oofd-elb May 14 20:01:18.109: INFO: starting to delete deployment web77oofd May 14 20:01:18.146: INFO: starting to delete job curl-to-elb-jobqkzkw1r1dg2 [1mSTEP[0m: creating a Kubernetes client to the workload cluster [1mSTEP[0m: Creating development namespace May 14 20:01:18.233: INFO: starting to create dev deployment namespace 2022/05/14 20:01:18 failed trying to get namespace (development):namespaces "development" not found 2022/05/14 20:01:18 namespace development does not exist, creating... [1mSTEP[0m: Creating production namespace May 14 20:01:18.323: INFO: starting to create prod deployment namespace 2022/05/14 20:01:18 failed trying to get namespace (production):namespaces "production" not found 2022/05/14 20:01:18 namespace production does not exist, creating... [1mSTEP[0m: Creating frontendProd, backend and network-policy pod deployments May 14 20:01:18.401: INFO: starting to create frontend-prod deployments May 14 20:01:18.443: INFO: starting to create frontend-dev deployments May 14 20:01:18.487: INFO: starting to create backend deployments May 14 20:01:18.524: INFO: starting to create network-policy deployments ... skipping 11 lines ... 
[1mSTEP[0m: Ensuring we have outbound internet access from the network-policy pods [1mSTEP[0m: Ensuring we have connectivity from network-policy pods to frontend-prod pods [1mSTEP[0m: Ensuring we have connectivity from network-policy pods to backend pods [1mSTEP[0m: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace May 14 20:01:41.358: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace [1mSTEP[0m: Ensuring we no longer have ingress access from the network-policy pods to backend pods curl: (7) Failed to connect to 192.168.161.131 port 80: Connection timed out [1mSTEP[0m: Cleaning up after ourselves May 14 20:03:51.209: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves [1mSTEP[0m: Applying a network policy to deny egress access in development namespace May 14 20:03:51.392: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace [1mSTEP[0m: Ensuring we no longer have egress access from the network-policy pods to backend pods curl: (7) Failed to connect to 192.168.161.131 port 80: Connection timed out curl: (7) Failed to connect to 192.168.161.131 port 80: Connection timed out [1mSTEP[0m: Cleaning up after ourselves May 14 20:08:13.074: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves [1mSTEP[0m: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace May 14 20:08:13.266: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace [1mSTEP[0m: Ensuring we have egress access from pods with matching labels [1mSTEP[0m: Ensuring we don't have ingress access from pods without matching labels curl: (7) Failed to connect to 192.168.161.132 port 80: Connection timed out [1mSTEP[0m: Cleaning up after ourselves May 14 20:10:24.425: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves [1mSTEP[0m: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace May 14 20:10:24.610: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace [1mSTEP[0m: Ensuring we have egress access from pods with matching labels [1mSTEP[0m: Ensuring we don't have ingress access from pods without matching labels curl: (7) Failed to connect to 192.168.161.130 port 80: Connection timed out curl: (7) Failed to connect to 192.168.161.132 port 80: Connection timed out [1mSTEP[0m: Cleaning up after ourselves May 14 20:14:46.572: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves [1mSTEP[0m: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels May 14 20:14:46.758: INFO: starting to applying a network policy 
development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels [1mSTEP[0m: Ensuring we have ingress access from pods with matching labels [1mSTEP[0m: Ensuring we don't have ingress access from pods without matching labels curl: (7) Failed to connect to 192.168.161.131 port 80: Connection timed out [1mSTEP[0m: Cleaning up after ourselves May 14 20:16:57.641: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves [1mSTEP[0m: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development May 14 20:16:57.823: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development [1mSTEP[0m: Ensuring we don't have ingress access from role:frontend pods in production namespace curl: (7) Failed to connect to 192.168.161.131 port 80: Connection timed out [1mSTEP[0m: Ensuring we have ingress access from role:frontend pods in development namespace [1mSTEP[0m: Dumping logs from the "capz-e2e-xv2je0-ha" workload cluster [1mSTEP[0m: Dumping workload cluster capz-e2e-xv2je0/capz-e2e-xv2je0-ha logs May 14 20:19:09.126: INFO: INFO: Collecting logs for node capz-e2e-xv2je0-ha-control-plane-j54qq in cluster capz-e2e-xv2je0-ha in namespace capz-e2e-xv2je0 May 14 20:19:24.021: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-xv2je0-ha-control-plane-j54qq ... skipping 39 lines ... 
[1mSTEP[0m: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-xv2je0-ha-control-plane-vjcjn, container kube-apiserver [1mSTEP[0m: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-xv2je0-ha-control-plane-dq79c, container kube-controller-manager [1mSTEP[0m: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-crq7q, container calico-kube-controllers [1mSTEP[0m: Creating log watcher for controller kube-system/kube-proxy-m82nv, container kube-proxy [1mSTEP[0m: Creating log watcher for controller kube-system/kube-proxy-dzj8r, container kube-proxy [1mSTEP[0m: Creating log watcher for controller kube-system/etcd-capz-e2e-xv2je0-ha-control-plane-vjcjn, container etcd [1mSTEP[0m: Got error while iterating over activity logs for resource group capz-e2e-xv2je0-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded [1mSTEP[0m: Fetching activity logs took 30.000917819s [1mSTEP[0m: Dumping all the Cluster API resources in the "capz-e2e-xv2je0" namespace [1mSTEP[0m: Deleting all clusters in the capz-e2e-xv2je0 namespace [1mSTEP[0m: Deleting cluster capz-e2e-xv2je0-ha INFO: Waiting for the Cluster capz-e2e-xv2je0/capz-e2e-xv2je0-ha to be deleted [1mSTEP[0m: Waiting for cluster capz-e2e-xv2je0-ha to be deleted [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-xv2je0-ha-control-plane-j54qq, container kube-apiserver: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-xv2je0-ha-control-plane-vjcjn, container kube-scheduler: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-xv2je0-ha-control-plane-j54qq, container kube-scheduler: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-xv2je0-ha-control-plane-j54qq, container kube-controller-manager: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/coredns-78fcd69978-z4l2k, container coredns: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-node-ssptv, container calico-node: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/etcd-capz-e2e-xv2je0-ha-control-plane-j54qq, container etcd: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-xv2je0-ha-control-plane-vjcjn, container kube-controller-manager: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-proxy-jtqxf, container kube-proxy: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-proxy-x2zdv, container kube-proxy: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/etcd-capz-e2e-xv2je0-ha-control-plane-vjcjn, container etcd: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-xv2je0-ha-control-plane-vjcjn, container kube-apiserver: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-node-v9j4k, container calico-node: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/coredns-78fcd69978-qhwch, container 
coredns: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-crq7q, container calico-kube-controllers: http2: client connection lost [1mSTEP[0m: Deleting namespace used for hosting the "create-workload-cluster" test spec INFO: Deleting namespace capz-e2e-xv2je0 [1mSTEP[0m: Checking if any resources are left over in Azure for spec "create-workload-cluster" [1mSTEP[0m: Redacting sensitive information from logs INFO: "With 3 control-plane nodes and 2 worker nodes" ran for 36m28s on Ginkgo node 2 of 3 ... skipping 8 lines ... [1mwith a single control plane node and 1 node[0m [37m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454[0m INFO: "with a single control plane node and 1 node" started at Sat, 14 May 2022 20:26:54 UTC on Ginkgo node 2 of 3 [1mSTEP[0m: Creating namespace "capz-e2e-coslcd" for hosting the cluster May 14 20:26:54.045: INFO: starting to create namespace for hosting the "capz-e2e-coslcd" test spec 2022/05/14 20:26:54 failed trying to get namespace (capz-e2e-coslcd):namespaces "capz-e2e-coslcd" not found INFO: Creating namespace capz-e2e-coslcd INFO: Creating event watcher for namespace "capz-e2e-coslcd" May 14 20:26:54.085: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-coslcd-aks INFO: Creating the workload cluster with name "capz-e2e-coslcd-aks" using the "aks-multi-tenancy" template (Kubernetes v1.22.6, 1 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml ... skipping 34 lines ... [1mSTEP[0m: Dumping logs from the "capz-e2e-coslcd-aks" workload cluster [1mSTEP[0m: Dumping workload cluster capz-e2e-coslcd/capz-e2e-coslcd-aks logs May 14 20:34:42.180: INFO: INFO: Collecting logs for node aks-agentpool1-28365680-vmss000000 in cluster capz-e2e-coslcd-aks in namespace capz-e2e-coslcd May 14 20:36:52.225: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0 Failed to get logs for machine pool agentpool0, cluster capz-e2e-coslcd/capz-e2e-coslcd-aks: [dialing public load balancer at capz-e2e-coslcd-aks-25a44157.hcp.eastus2.azmk8s.io: dial tcp 20.96.53.91:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."] May 14 20:36:52.766: INFO: INFO: Collecting logs for node aks-agentpool1-28365680-vmss000000 in cluster capz-e2e-coslcd-aks in namespace capz-e2e-coslcd May 14 20:39:03.296: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0 Failed to get logs for machine pool agentpool1, cluster capz-e2e-coslcd/capz-e2e-coslcd-aks: [dialing public load balancer at capz-e2e-coslcd-aks-25a44157.hcp.eastus2.azmk8s.io: dial tcp 20.96.53.91:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. 
Parent resource '0' not found."] [1mSTEP[0m: Dumping workload cluster capz-e2e-coslcd/capz-e2e-coslcd-aks kube-system pod logs [1mSTEP[0m: Fetching kube-system pod logs took 428.799282ms [1mSTEP[0m: Dumping workload cluster capz-e2e-coslcd/capz-e2e-coslcd-aks Azure activity log [1mSTEP[0m: Creating log watcher for controller kube-system/azure-ip-masq-agent-whqkr, container azure-ip-masq-agent [1mSTEP[0m: Creating log watcher for controller kube-system/csi-azurefile-node-zxnjh, container node-driver-registrar [1mSTEP[0m: Creating log watcher for controller kube-system/csi-azurefile-node-zxnjh, container azurefile ... skipping 42 lines ... [1mCreates a public management cluster in the same vnet[0m [37m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:141[0m INFO: "Creates a public management cluster in the same vnet" started at Sat, 14 May 2022 19:50:25 UTC on Ginkgo node 1 of 3 [1mSTEP[0m: Creating namespace "capz-e2e-wivrd7" for hosting the cluster May 14 19:50:25.910: INFO: starting to create namespace for hosting the "capz-e2e-wivrd7" test spec 2022/05/14 19:50:25 failed trying to get namespace (capz-e2e-wivrd7):namespaces "capz-e2e-wivrd7" not found INFO: Creating namespace capz-e2e-wivrd7 INFO: Creating event watcher for namespace "capz-e2e-wivrd7" May 14 19:50:25.955: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-wivrd7-public-custom-vnet [1mSTEP[0m: creating Azure clients with the workload cluster's subscription [1mSTEP[0m: creating a resource group ... skipping 100 lines ... [1mSTEP[0m: Creating log watcher for controller kube-system/kube-proxy-rwspm, container kube-proxy [1mSTEP[0m: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-wivrd7-public-custom-vnet-control-plane-4dlzh, container kube-apiserver [1mSTEP[0m: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-wivrd7-public-custom-vnet-control-plane-4dlzh, container kube-scheduler [1mSTEP[0m: Creating log watcher for controller kube-system/kube-proxy-5hnqv, container kube-proxy [1mSTEP[0m: Creating log watcher for controller kube-system/calico-node-7ztlf, container calico-node [1mSTEP[0m: Creating log watcher for controller kube-system/coredns-78fcd69978-54xvw, container coredns [1mSTEP[0m: Got error while iterating over activity logs for resource group capz-e2e-wivrd7-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded [1mSTEP[0m: Fetching activity logs took 30.000872575s [1mSTEP[0m: Dumping all the Cluster API resources in the "capz-e2e-wivrd7" namespace [1mSTEP[0m: Deleting all clusters in the capz-e2e-wivrd7 namespace [1mSTEP[0m: Deleting cluster capz-e2e-wivrd7-public-custom-vnet INFO: Waiting for the Cluster capz-e2e-wivrd7/capz-e2e-wivrd7-public-custom-vnet to be deleted [1mSTEP[0m: Waiting for cluster capz-e2e-wivrd7-public-custom-vnet to be deleted W0514 20:39:06.324714 24160 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding I0514 20:39:37.864307 24160 trace.go:205] Trace[784646361]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (14-May-2022 20:39:07.863) (total time: 30001ms): Trace[784646361]: [30.001217253s] 
[30.001217253s] END E0514 20:39:37.864397 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp 20.22.33.163:6443: i/o timeout I0514 20:40:10.920954 24160 trace.go:205] Trace[507218320]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (14-May-2022 20:39:40.919) (total time: 30001ms): Trace[507218320]: [30.00102254s] [30.00102254s] END E0514 20:40:10.921023 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp 20.22.33.163:6443: i/o timeout I0514 20:40:45.366172 24160 trace.go:205] Trace[1550106395]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (14-May-2022 20:40:15.364) (total time: 30001ms): Trace[1550106395]: [30.001589403s] [30.001589403s] END E0514 20:40:45.366260 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp 20.22.33.163:6443: i/o timeout I0514 20:41:22.396766 24160 trace.go:205] Trace[589534845]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (14-May-2022 20:40:52.395) (total time: 30001ms): Trace[589534845]: [30.001150639s] [30.001150639s] END E0514 20:41:22.396843 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp 20.22.33.163:6443: i/o timeout I0514 20:42:17.758462 24160 trace.go:205] Trace[185374530]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (14-May-2022 20:41:47.757) (total time: 30001ms): Trace[185374530]: [30.001240442s] [30.001240442s] END E0514 20:42:17.758528 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp 20.22.33.163:6443: i/o timeout I0514 20:43:18.285059 24160 trace.go:205] Trace[1106792788]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (14-May-2022 20:42:48.283) (total time: 30001ms): Trace[1106792788]: [30.001614295s] [30.001614295s] END E0514 20:43:18.285129 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp 20.22.33.163:6443: i/o timeout E0514 20:44:05.074045 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to 
watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host [1mSTEP[0m: Deleting namespace used for hosting the "create-workload-cluster" test spec INFO: Deleting namespace capz-e2e-wivrd7 [1mSTEP[0m: Running additional cleanup for the "create-workload-cluster" test spec May 14 20:44:28.695: INFO: deleting an existing virtual network "custom-vnet" May 14 20:44:39.269: INFO: deleting an existing route table "node-routetable" May 14 20:44:41.576: INFO: deleting an existing network security group "node-nsg" E0514 20:44:48.514842 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host May 14 20:44:51.896: INFO: deleting an existing network security group "control-plane-nsg" May 14 20:45:02.361: INFO: verifying the existing resource group "capz-e2e-wivrd7-public-custom-vnet" is empty May 14 20:45:02.430: INFO: deleting the existing resource group "capz-e2e-wivrd7-public-custom-vnet" [1mSTEP[0m: Checking if any resources are left over in Azure for spec "create-workload-cluster" [1mSTEP[0m: Redacting sensitive information from logs E0514 20:45:43.449215 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host INFO: "Creates a public management cluster in the same vnet" ran for 55m35s on Ginkgo node 1 of 3 [32m• [SLOW TEST:3335.416 seconds][0m Workload cluster creation [90m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43[0m ... skipping 6 lines ... [1mwith a 1 control plane nodes and 2 worker nodes[0m [37m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:419[0m INFO: "with a 1 control plane nodes and 2 worker nodes" started at Sat, 14 May 2022 20:25:52 UTC on Ginkgo node 3 of 3 [1mSTEP[0m: Creating namespace "capz-e2e-22ppl5" for hosting the cluster May 14 20:25:52.790: INFO: starting to create namespace for hosting the "capz-e2e-22ppl5" test spec 2022/05/14 20:25:52 failed trying to get namespace (capz-e2e-22ppl5):namespaces "capz-e2e-22ppl5" not found INFO: Creating namespace capz-e2e-22ppl5 INFO: Creating event watcher for namespace "capz-e2e-22ppl5" May 14 20:25:52.827: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-22ppl5-oot INFO: Creating the workload cluster with name "capz-e2e-22ppl5-oot" using the "external-cloud-provider" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines) INFO: Getting the cluster template yaml ... skipping 93 lines ... 
[1mwith a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node[0m [37m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543[0m INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Sat, 14 May 2022 20:46:01 UTC on Ginkgo node 1 of 3 [1mSTEP[0m: Creating namespace "capz-e2e-lv43jj" for hosting the cluster May 14 20:46:01.330: INFO: starting to create namespace for hosting the "capz-e2e-lv43jj" test spec 2022/05/14 20:46:01 failed trying to get namespace (capz-e2e-lv43jj):namespaces "capz-e2e-lv43jj" not found INFO: Creating namespace capz-e2e-lv43jj INFO: Creating event watcher for namespace "capz-e2e-lv43jj" May 14 20:46:01.367: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-lv43jj-win-vmss INFO: Creating the workload cluster with name "capz-e2e-lv43jj-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml ... skipping 12 lines ... kubeadmconfig.bootstrap.cluster.x-k8s.io/capz-e2e-lv43jj-win-vmss-mp-win created clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-lv43jj-win-vmss-flannel created configmap/cni-capz-e2e-lv43jj-win-vmss-flannel created INFO: Waiting for the cluster infrastructure to be provisioned [1mSTEP[0m: Waiting for cluster to enter the provisioned phase E0514 20:46:24.391353 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host INFO: Waiting for control plane to be initialized INFO: Waiting for the first control plane machine managed by capz-e2e-lv43jj/capz-e2e-lv43jj-win-vmss-control-plane to be provisioned [1mSTEP[0m: Waiting for one control plane node to exist E0514 20:47:04.225836 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0514 20:47:45.391781 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0514 20:48:29.446384 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host INFO: Waiting for control plane to be ready 
INFO: Waiting for control plane capz-e2e-lv43jj/capz-e2e-lv43jj-win-vmss-control-plane to be ready (implies underlying nodes to be ready as well) [1mSTEP[0m: Waiting for the control plane to be ready E0514 20:49:12.338268 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host INFO: Waiting for the machine deployments to be provisioned INFO: Waiting for the machine pools to be provisioned [1mSTEP[0m: Waiting for the machine pool workload nodes to exist E0514 20:49:42.906823 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0514 20:50:33.521299 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host [1mSTEP[0m: Waiting for the machine pool workload nodes to exist E0514 20:51:14.242415 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0514 20:52:09.580423 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0514 20:52:55.077742 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host [1mSTEP[0m: creating a Kubernetes client to the workload cluster [1mSTEP[0m: creating an HTTP deployment [1mSTEP[0m: waiting for deployment default/webwwvxih to be available May 14 20:53:22.755: INFO: starting to wait for deployment to become available E0514 20:53:26.682865 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get 
"https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host May 14 20:53:42.867: INFO: Deployment default/webwwvxih is now available, took 20.112213204s [1mSTEP[0m: creating an internal Load Balancer service May 14 20:53:42.867: INFO: starting to create an internal Load Balancer service [1mSTEP[0m: waiting for service default/webwwvxih-ilb to be available May 14 20:53:42.925: INFO: waiting for service default/webwwvxih-ilb to be available E0514 20:54:05.644460 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host May 14 20:54:23.097: INFO: service default/webwwvxih-ilb is available, took 40.17117224s [1mSTEP[0m: connecting to the internal LB service from a curl pod May 14 20:54:23.130: INFO: starting to create a curl to ilb job [1mSTEP[0m: waiting for job default/curl-to-ilb-jobhxlxw to be complete May 14 20:54:23.179: INFO: waiting for job default/curl-to-ilb-jobhxlxw to be complete May 14 20:54:33.258: INFO: job default/curl-to-ilb-jobhxlxw is complete, took 10.078597292s [1mSTEP[0m: deleting the ilb test resources May 14 20:54:33.258: INFO: deleting the ilb service: webwwvxih-ilb May 14 20:54:33.313: INFO: deleting the ilb job: curl-to-ilb-jobhxlxw [1mSTEP[0m: creating an external Load Balancer service May 14 20:54:33.347: INFO: starting to create an external Load Balancer service [1mSTEP[0m: waiting for service default/webwwvxih-elb to be available May 14 20:54:33.404: INFO: waiting for service default/webwwvxih-elb to be available E0514 20:54:53.451263 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0514 20:55:49.721814 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host May 14 20:55:53.712: INFO: service default/webwwvxih-elb is available, took 1m20.307905951s [1mSTEP[0m: connecting to the external LB service from a curl pod May 14 20:55:53.745: INFO: starting to create curl-to-elb job [1mSTEP[0m: waiting for job default/curl-to-elb-jobg09x9tibcac to be complete May 14 20:55:53.781: INFO: waiting for job default/curl-to-elb-jobg09x9tibcac to be complete May 14 20:56:03.847: INFO: job default/curl-to-elb-jobg09x9tibcac is complete, took 10.066077141s [1mSTEP[0m: connecting directly to the external LB service May 14 20:56:03.847: INFO: starting attempts to connect directly to the external LB service 2022/05/14 
20:56:03 [DEBUG] GET http://20.22.8.213 E0514 20:56:27.561815 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host 2022/05/14 20:56:33 [ERR] GET http://20.22.8.213 request failed: Get "http://20.22.8.213": dial tcp 20.22.8.213:80: i/o timeout 2022/05/14 20:56:33 [DEBUG] GET http://20.22.8.213: retrying in 1s (4 left) May 14 20:56:50.367: INFO: successfully connected to the external LB service [1mSTEP[0m: deleting the test resources May 14 20:56:50.367: INFO: starting to delete external LB service webwwvxih-elb May 14 20:56:50.428: INFO: starting to delete deployment webwwvxih May 14 20:56:50.463: INFO: starting to delete job curl-to-elb-jobg09x9tibcac [1mSTEP[0m: creating a Kubernetes client to the workload cluster [1mSTEP[0m: creating an HTTP deployment [1mSTEP[0m: waiting for deployment default/web-windowsd9if4y to be available May 14 20:56:50.591: INFO: starting to wait for deployment to become available E0514 20:57:21.966751 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0514 20:58:05.635904 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host May 14 20:58:30.995: INFO: Deployment default/web-windowsd9if4y is now available, took 1m40.404717189s [1mSTEP[0m: creating an internal Load Balancer service May 14 20:58:30.995: INFO: starting to create an internal Load Balancer service [1mSTEP[0m: waiting for service default/web-windowsd9if4y-ilb to be available May 14 20:58:31.045: INFO: waiting for service default/web-windowsd9if4y-ilb to be available E0514 20:58:53.345852 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host May 14 20:59:11.214: INFO: service default/web-windowsd9if4y-ilb is available, took 40.168668229s [1mSTEP[0m: connecting to the internal LB service from a curl pod May 14 20:59:11.246: INFO: starting to create a curl to ilb job [1mSTEP[0m: waiting for job default/curl-to-ilb-jobhsr4h to be complete May 14 20:59:11.282: INFO: waiting for job default/curl-to-ilb-jobhsr4h to be complete May 14 20:59:21.358: INFO: job default/curl-to-ilb-jobhsr4h is complete, took 10.0763273s [1mSTEP[0m: deleting the ilb test resources May 14 20:59:21.358: INFO: 
deleting the ilb service: web-windowsd9if4y-ilb May 14 20:59:21.414: INFO: deleting the ilb job: curl-to-ilb-jobhsr4h [1mSTEP[0m: creating an external Load Balancer service May 14 20:59:21.448: INFO: starting to create an external Load Balancer service [1mSTEP[0m: waiting for service default/web-windowsd9if4y-elb to be available May 14 20:59:21.506: INFO: waiting for service default/web-windowsd9if4y-elb to be available E0514 20:59:29.024230 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0514 20:59:59.214066 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host May 14 21:00:31.775: INFO: service default/web-windowsd9if4y-elb is available, took 1m10.26946725s [1mSTEP[0m: connecting to the external LB service from a curl pod May 14 21:00:31.808: INFO: starting to create curl-to-elb job [1mSTEP[0m: waiting for job default/curl-to-elb-jobm46eepxtph2 to be complete May 14 21:00:31.844: INFO: waiting for job default/curl-to-elb-jobm46eepxtph2 to be complete May 14 21:00:41.910: INFO: job default/curl-to-elb-jobm46eepxtph2 is complete, took 10.066454741s [1mSTEP[0m: connecting directly to the external LB service May 14 21:00:41.910: INFO: starting attempts to connect directly to the external LB service 2022/05/14 21:00:41 [DEBUG] GET http://20.22.9.209 E0514 21:00:57.852831 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host 2022/05/14 21:01:11 [ERR] GET http://20.22.9.209 request failed: Get "http://20.22.9.209": dial tcp 20.22.9.209:80: i/o timeout 2022/05/14 21:01:11 [DEBUG] GET http://20.22.9.209: retrying in 1s (4 left) May 14 21:01:13.993: INFO: successfully connected to the external LB service [1mSTEP[0m: deleting the test resources May 14 21:01:13.993: INFO: starting to delete external LB service web-windowsd9if4y-elb May 14 21:01:14.054: INFO: starting to delete deployment web-windowsd9if4y May 14 21:01:14.088: INFO: starting to delete job curl-to-elb-jobm46eepxtph2 ... skipping 6 lines ... 
May 14 21:01:25.583: INFO: INFO: Collecting logs for node capz-e2e-lv43jj-win-vmss-mp-0000000 in cluster capz-e2e-lv43jj-win-vmss in namespace capz-e2e-lv43jj
May 14 21:01:37.137: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-e2e-lv43jj-win-vmss-mp-0
May 14 21:01:37.479: INFO: INFO: Collecting logs for node win-p-win000000 in cluster capz-e2e-lv43jj-win-vmss in namespace capz-e2e-lv43jj
E0514 21:01:50.865743 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0514 21:02:31.804301 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
May 14 21:02:34.938: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set win-p-win
STEP: Dumping workload cluster capz-e2e-lv43jj/capz-e2e-lv43jj-win-vmss kube-system pod logs
STEP: Fetching kube-system pod logs took 359.586655ms
STEP: Dumping workload cluster capz-e2e-lv43jj/capz-e2e-lv43jj-win-vmss Azure activity log
STEP: Creating log watcher for controller kube-system/kube-proxy-g8h4m, container kube-proxy
... skipping 11 lines ...
STEP: Fetching activity logs took 983.744388ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-lv43jj" namespace
STEP: Deleting all clusters in the capz-e2e-lv43jj namespace
STEP: Deleting cluster capz-e2e-lv43jj-win-vmss
INFO: Waiting for the Cluster capz-e2e-lv43jj/capz-e2e-lv43jj-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-lv43jj-win-vmss to be deleted
E0514 21:03:21.125455 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0514 21:03:51.751375 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0514 21:04:34.072735 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-4lkvb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-x7vrd, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-g8h4m, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-q4tcz, container kube-flannel: http2: client connection lost
E0514 21:05:15.230251 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0514 21:05:48.125799 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0514 21:06:26.769478 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0514 21:07:10.229502 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0514 21:08:06.804906 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0514 21:08:37.735733 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0514 21:09:14.738567 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0514 21:09:48.051114 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0514 21:10:37.170356 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0514 21:11:17.000473 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0514 21:12:14.280258 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0514 21:12:51.767492 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0514 21:13:42.177093 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0514 21:14:26.445147 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0514 21:15:01.589487 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0514 21:15:38.761283 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-lv43jj
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E0514 21:16:31.474321 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0514 21:17:09.044481 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 31m18s on Ginkgo node 1 of 3
• [SLOW TEST:1877.732 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Sat, 14 May 2022 20:44:57 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-9pbmbj" for hosting the cluster
May 14 20:44:57.876: INFO: starting to create namespace for hosting the "capz-e2e-9pbmbj" test spec
2022/05/14 20:44:57 failed trying to get namespace (capz-e2e-9pbmbj):namespaces "capz-e2e-9pbmbj" not found
INFO: Creating namespace capz-e2e-9pbmbj
INFO: Creating event watcher for namespace "capz-e2e-9pbmbj"
May 14 20:44:57.917: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-9pbmbj-win-ha
INFO: Creating the workload cluster with name "capz-e2e-9pbmbj-win-ha" using the "windows" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 151 lines ...
STEP: Fetching activity logs took 1.008229121s
STEP: Dumping all the Cluster API resources in the "capz-e2e-9pbmbj" namespace
STEP: Deleting all clusters in the capz-e2e-9pbmbj namespace
STEP: Deleting cluster capz-e2e-9pbmbj-win-ha
INFO: Waiting for the Cluster capz-e2e-9pbmbj/capz-e2e-9pbmbj-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-9pbmbj-win-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-6mxxh, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-h47mw, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-9pbmbj-win-ha-control-plane-xgj76, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-9pbmbj-win-ha-control-plane-sfb82, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-pg9j4, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-9pbmbj-win-ha-control-plane-sfb82, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-9pbmbj-win-ha-control-plane-xgj76, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-9pbmbj-win-ha-control-plane-sfb82, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-9pbmbj-win-ha-control-plane-xgj76, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-6brcl, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-9pbmbj-win-ha-control-plane-xgj76, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-n8ltw, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-6rrrw, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-9pbmbj-win-ha-control-plane-sfb82, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-zjf86, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-np84z, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-9pbmbj
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 39m51s on Ginkgo node 2 of 3
... skipping 3 lines ...
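The recurring "E0514 ... reflector.go:138] ... Failed to watch *v1.Event ... no such host" lines throughout this run come from the per-namespace event watcher created for an earlier spec (capz-e2e-wivrd7): once that workload cluster and its public DNS record were torn down, the watcher's client-go reflector could no longer resolve the API server name and kept retrying on every backoff interval. Below is a rough sketch of such a namespace-scoped Event watcher built on client-go informers; the kubeconfig path, namespace, and resync period are placeholders, not the harness's actual code.

// Rough sketch, assuming a client-go informer sits behind the per-namespace
// event watcher; the reflector inside the informer is what logs
// "Failed to watch *v1.Event ... no such host" once the API DNS name is gone.
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func watchEvents(kubeconfig, namespace string, stop <-chan struct{}) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}

	// Namespace-scoped informer on v1 Events; list/watch failures are retried forever.
	factory := informers.NewSharedInformerFactoryWithOptions(cs, 30*time.Second,
		informers.WithNamespace(namespace))
	factory.Core().V1().Events().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			if ev, ok := obj.(*corev1.Event); ok {
				fmt.Printf("%s %s/%s: %s\n", ev.Type, ev.Namespace, ev.Name, ev.Message)
			}
		},
	})
	factory.Start(stop)
	return nil
}

func main() {
	stop := make(chan struct{})
	defer close(stop)
	// Placeholder arguments, for illustration only.
	if err := watchEvents("capz-e2e-wivrd7.kubeconfig", "capz-e2e-wivrd7", stop); err != nil {
		panic(err)
	}
	time.Sleep(time.Minute) // let the watcher run; list/watch errors surface via klog, as above
}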
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
Creating a Windows Enabled cluster
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:494
With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496
------------------------------
E0514 21:17:46.698774 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0514 21:18:32.683009 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0514 21:19:22.821869 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0514 21:20:03.962653 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0514 21:21:03.278380 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0514 21:21:53.385512 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0514 21:22:52.905270 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0514 21:23:50.015799 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0514 21:24:41.618201 24160 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-wivrd7/events?resourceVersion=8661": dial tcp: lookup capz-e2e-wivrd7-public-custom-vnet-cd1f2315.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Tearing down the management cluster

Summarizing 1 Failure:

[Fail] Workload cluster creation Creating a cluster that uses the external cloud provider [It] with a 1 control plane nodes and 2 worker nodes
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.8-0.20220215165403-0234afe87ffe/framework/controlplane_helpers.go:145

Ran 8 of 22 Specs in 5785.731 seconds
FAIL! -- 7 Passed | 1 Failed | 0 Pending | 14 Skipped

Ginkgo ran 1 suite in 1h37m49.764055601s
Test Suite Failed
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...
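The single failure summarized above is the 20-minute timeout at controlplane_helpers.go:145 while waiting for the first control-plane machine of the external-cloud-provider cluster to be provisioned. That wait amounts to a Gomega Eventually poll over the cluster's Machines; the sketch below is an illustrative approximation under that assumption, not the CAPI test framework's actual implementation, and the function name and intervals are hypothetical.

// Illustrative approximation of the wait that timed out: poll until at least
// one control-plane Machine of the cluster reports a NodeRef, within a
// 20-minute window (matching the 1200s timeout seen in the failure).
package e2e

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func waitForOneControlPlaneMachine(ctx context.Context, c client.Client, clusterName, namespace string) {
	Eventually(func() int {
		machines := &clusterv1.MachineList{}
		// Machines owned by this cluster that carry the control-plane label.
		if err := c.List(ctx, machines,
			client.InNamespace(namespace),
			client.MatchingLabels{
				"cluster.x-k8s.io/cluster-name":  clusterName,
				"cluster.x-k8s.io/control-plane": "",
			}); err != nil {
			return 0 // treat transient list errors as "not ready yet"
		}
		ready := 0
		for _, m := range machines.Items {
			if m.Status.NodeRef != nil {
				ready++
			}
		}
		return ready
	}, 20*time.Minute, 10*time.Second).Should(BeNumerically(">=", 1),
		"timed out waiting for a control-plane Machine to get a NodeRef")
}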