Result   | FAILURE
Tests    | 1 failed / 7 succeeded
Started  |
Elapsed  | 1h44m
Revision | release-0.5
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\san\sAKS\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454
Timed out after 1200.000s.
Expected
    <string>: Provisioning
to equal
    <string>: Provisioned
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.8-0.20220215165403-0234afe87ffe/framework/cluster_helpers.go:134
from junit.e2e_suite.3.xml
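The assertion above comes from the cluster-api test framework's wait for the Cluster object to reach the "Provisioned" phase. The following is a minimal sketch of that kind of poll, not the framework's actual code: `getClusterPhase` is a hypothetical stand-in for the real lookup in framework/cluster_helpers.go, and the 20-minute budget mirrors the 1200s timeout that expired in this run.

```go
package e2e_sketch

import (
	"testing"
	"time"

	. "github.com/onsi/gomega"
)

// getClusterPhase is a hypothetical stand-in for reading Cluster.Status.Phase
// from the management cluster; the real lookup lives in the cluster-api test
// framework (framework/cluster_helpers.go).
func getClusterPhase() string {
	return "Provisioning" // the AKS cluster never left this phase in the run above
}

// TestClusterReachesProvisioned polls until the phase flips to "Provisioned"
// or the 20-minute (1200s) budget is exhausted — exhausting it is what
// produces a "Timed out after 1200.000s" failure like the one reported here.
func TestClusterReachesProvisioned(t *testing.T) {
	g := NewWithT(t)
	g.Eventually(getClusterPhase, 20*time.Minute, 10*time.Second).
		Should(Equal("Provisioned"))
}
```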
INFO: "with a single control plane node and 1 node" started at Tue, 10 May 2022 20:31:57 UTC on Ginkgo node 3 of 3 �[1mSTEP�[0m: Creating namespace "capz-e2e-ij5hsx" for hosting the cluster May 10 20:31:57.234: INFO: starting to create namespace for hosting the "capz-e2e-ij5hsx" test spec 2022/05/10 20:31:57 failed trying to get namespace (capz-e2e-ij5hsx):namespaces "capz-e2e-ij5hsx" not found INFO: Creating namespace capz-e2e-ij5hsx INFO: Creating event watcher for namespace "capz-e2e-ij5hsx" May 10 20:31:57.293: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-ij5hsx-aks INFO: Creating the workload cluster with name "capz-e2e-ij5hsx-aks" using the "aks-multi-tenancy" template (Kubernetes v1.22.6, 1 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster capz-e2e-ij5hsx-aks --infrastructure (default) --kubernetes-version v1.22.6 --control-plane-machine-count 1 --worker-machine-count 1 --flavor aks-multi-tenancy INFO: Applying the cluster template yaml to the cluster cluster.cluster.x-k8s.io/capz-e2e-ij5hsx-aks created azuremanagedcontrolplane.infrastructure.cluster.x-k8s.io/capz-e2e-ij5hsx-aks created azuremanagedcluster.infrastructure.cluster.x-k8s.io/capz-e2e-ij5hsx-aks created machinepool.cluster.x-k8s.io/agentpool0 created azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool0 created machinepool.cluster.x-k8s.io/agentpool1 created azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created INFO: Waiting for the cluster infrastructure to be provisioned �[1mSTEP�[0m: Waiting for cluster to enter the provisioned phase �[1mSTEP�[0m: Unable to dump workload cluster logs as the cluster is nil �[1mSTEP�[0m: Dumping all the Cluster API resources in the "capz-e2e-ij5hsx" namespace �[1mSTEP�[0m: Deleting all clusters in the capz-e2e-ij5hsx namespace �[1mSTEP�[0m: Deleting cluster capz-e2e-ij5hsx-aks INFO: Waiting for the Cluster capz-e2e-ij5hsx/capz-e2e-ij5hsx-aks to be deleted �[1mSTEP�[0m: Waiting for cluster capz-e2e-ij5hsx-aks to be deleted �[1mSTEP�[0m: Deleting namespace used for hosting the "create-workload-cluster" test spec INFO: Deleting namespace capz-e2e-ij5hsx �[1mSTEP�[0m: Checking if any resources are left over in Azure for spec "create-workload-cluster" �[1mSTEP�[0m: Redacting sensitive information from logs INFO: "with a single control plane node and 1 node" ran for 29m18s on Ginkgo node 3 of 3
capz-e2e Workload cluster creation Creating a VMSS cluster with a single control plane node and an AzureMachinePool with 2 nodes
capz-e2e Workload cluster creation Creating a Windows Enabled cluster With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
capz-e2e Workload cluster creation Creating a Windows enabled VMSS cluster with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
capz-e2e Workload cluster creation Creating a cluster that uses the external cloud provider with a 1 control plane nodes and 2 worker nodes
capz-e2e Workload cluster creation Creating a ipv6 control-plane cluster With ipv6 worker node
capz-e2e Workload cluster creation Creating a private cluster Creates a public management cluster in the same vnet
capz-e2e Workload cluster creation With 3 control-plane nodes and 2 worker nodes
capz-e2e Conformance Tests conformance-tests
capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec in a HA cluster Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd
capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec in a HA cluster using scale in rollout Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd
capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec in a single control plane cluster Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd
capz-e2e Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capz-e2e Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e Running the Cluster API E2E tests Should adopt up-to-date control plane Machines without modification Should adopt up-to-date control plane Machines without modification
capz-e2e Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e Workload cluster creation Creating a GPU-enabled cluster with a single control plane node and 1 node
... skipping 431 lines ... [1mWith ipv6 worker node[0m [37m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:269[0m INFO: "With ipv6 worker node" started at Tue, 10 May 2022 19:49:27 UTC on Ginkgo node 2 of 3 [1mSTEP[0m: Creating namespace "capz-e2e-3jq2t5" for hosting the cluster May 10 19:49:27.549: INFO: starting to create namespace for hosting the "capz-e2e-3jq2t5" test spec 2022/05/10 19:49:27 failed trying to get namespace (capz-e2e-3jq2t5):namespaces "capz-e2e-3jq2t5" not found INFO: Creating namespace capz-e2e-3jq2t5 INFO: Creating event watcher for namespace "capz-e2e-3jq2t5" May 10 19:49:27.625: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-3jq2t5-ipv6 INFO: Creating the workload cluster with name "capz-e2e-3jq2t5-ipv6" using the "ipv6" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml ... skipping 93 lines ... [1mSTEP[0m: Fetching activity logs took 621.506348ms [1mSTEP[0m: Dumping all the Cluster API resources in the "capz-e2e-3jq2t5" namespace [1mSTEP[0m: Deleting all clusters in the capz-e2e-3jq2t5 namespace [1mSTEP[0m: Deleting cluster capz-e2e-3jq2t5-ipv6 INFO: Waiting for the Cluster capz-e2e-3jq2t5/capz-e2e-3jq2t5-ipv6 to be deleted [1mSTEP[0m: Waiting for cluster capz-e2e-3jq2t5-ipv6 to be deleted [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-proxy-bk5n4, container kube-proxy: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-proxy-4bdfv, container kube-proxy: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/etcd-capz-e2e-3jq2t5-ipv6-control-plane-mpzg9, container etcd: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-node-c4j6d, container calico-node: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-node-nm8p2, container calico-node: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/etcd-capz-e2e-3jq2t5-ipv6-control-plane-l4s6l, container etcd: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-qvt9j, container calico-kube-controllers: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-node-nsrc6, container calico-node: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-3jq2t5-ipv6-control-plane-mpzg9, container kube-apiserver: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-3jq2t5-ipv6-control-plane-mpzg9, container kube-scheduler: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-3jq2t5-ipv6-control-plane-6bj9r, container kube-scheduler: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-3jq2t5-ipv6-control-plane-6bj9r, container kube-controller-manager: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-3jq2t5-ipv6-control-plane-mpzg9, container kube-controller-manager: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod 
kube-system/kube-scheduler-capz-e2e-3jq2t5-ipv6-control-plane-l4s6l, container kube-scheduler: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/coredns-78fcd69978-h5952, container coredns: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-3jq2t5-ipv6-control-plane-6bj9r, container kube-apiserver: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-3jq2t5-ipv6-control-plane-l4s6l, container kube-apiserver: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-3jq2t5-ipv6-control-plane-l4s6l, container kube-controller-manager: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/etcd-capz-e2e-3jq2t5-ipv6-control-plane-6bj9r, container etcd: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-node-d4zwc, container calico-node: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-proxy-2fl9j, container kube-proxy: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/coredns-78fcd69978-6fchh, container coredns: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-proxy-25gpt, container kube-proxy: http2: client connection lost [1mSTEP[0m: Deleting namespace used for hosting the "create-workload-cluster" test spec INFO: Deleting namespace capz-e2e-3jq2t5 [1mSTEP[0m: Checking if any resources are left over in Azure for spec "create-workload-cluster" [1mSTEP[0m: Redacting sensitive information from logs INFO: "With ipv6 worker node" ran for 18m13s on Ginkgo node 2 of 3 ... skipping 10 lines ... [1mwith a single control plane node and an AzureMachinePool with 2 nodes[0m [37m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:315[0m INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" started at Tue, 10 May 2022 20:07:40 UTC on Ginkgo node 2 of 3 [1mSTEP[0m: Creating namespace "capz-e2e-gwwh4n" for hosting the cluster May 10 20:07:40.077: INFO: starting to create namespace for hosting the "capz-e2e-gwwh4n" test spec 2022/05/10 20:07:40 failed trying to get namespace (capz-e2e-gwwh4n):namespaces "capz-e2e-gwwh4n" not found INFO: Creating namespace capz-e2e-gwwh4n INFO: Creating event watcher for namespace "capz-e2e-gwwh4n" May 10 20:07:40.114: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-gwwh4n-vmss INFO: Creating the workload cluster with name "capz-e2e-gwwh4n-vmss" using the "machine-pool" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines) INFO: Getting the cluster template yaml ... skipping 52 lines ... 
[1mSTEP[0m: waiting for job default/curl-to-elb-job29hpadkwfsy to be complete May 10 20:16:11.319: INFO: waiting for job default/curl-to-elb-job29hpadkwfsy to be complete May 10 20:16:21.393: INFO: job default/curl-to-elb-job29hpadkwfsy is complete, took 10.074321182s [1mSTEP[0m: connecting directly to the external LB service May 10 20:16:21.393: INFO: starting attempts to connect directly to the external LB service 2022/05/10 20:16:21 [DEBUG] GET http://20.121.255.240 2022/05/10 20:16:51 [ERR] GET http://20.121.255.240 request failed: Get "http://20.121.255.240": dial tcp 20.121.255.240:80: i/o timeout 2022/05/10 20:16:51 [DEBUG] GET http://20.121.255.240: retrying in 1s (4 left) May 10 20:16:52.461: INFO: successfully connected to the external LB service [1mSTEP[0m: deleting the test resources May 10 20:16:52.461: INFO: starting to delete external LB service webvfnedj-elb May 10 20:16:52.517: INFO: starting to delete deployment webvfnedj May 10 20:16:52.546: INFO: starting to delete job curl-to-elb-job29hpadkwfsy ... skipping 43 lines ... [1mSTEP[0m: Fetching activity logs took 758.758084ms [1mSTEP[0m: Dumping all the Cluster API resources in the "capz-e2e-gwwh4n" namespace [1mSTEP[0m: Deleting all clusters in the capz-e2e-gwwh4n namespace [1mSTEP[0m: Deleting cluster capz-e2e-gwwh4n-vmss INFO: Waiting for the Cluster capz-e2e-gwwh4n/capz-e2e-gwwh4n-vmss to be deleted [1mSTEP[0m: Waiting for cluster capz-e2e-gwwh4n-vmss to be deleted [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-proxy-wn5q2, container kube-proxy: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-proxy-cppsw, container kube-proxy: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-node-s5jw2, container calico-node: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-node-mvmdn, container calico-node: http2: client connection lost [1mSTEP[0m: Deleting namespace used for hosting the "create-workload-cluster" test spec INFO: Deleting namespace capz-e2e-gwwh4n [1mSTEP[0m: Checking if any resources are left over in Azure for spec "create-workload-cluster" [1mSTEP[0m: Redacting sensitive information from logs INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" ran for 20m46s on Ginkgo node 2 of 3 ... skipping 12 lines ... [1mWith 3 control-plane nodes and 2 worker nodes[0m [37m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:203[0m INFO: "With 3 control-plane nodes and 2 worker nodes" started at Tue, 10 May 2022 19:49:27 UTC on Ginkgo node 3 of 3 [1mSTEP[0m: Creating namespace "capz-e2e-oegsik" for hosting the cluster May 10 19:49:27.549: INFO: starting to create namespace for hosting the "capz-e2e-oegsik" test spec 2022/05/10 19:49:27 failed trying to get namespace (capz-e2e-oegsik):namespaces "capz-e2e-oegsik" not found INFO: Creating namespace capz-e2e-oegsik INFO: Creating event watcher for namespace "capz-e2e-oegsik" May 10 19:49:27.616: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-oegsik-ha INFO: Creating the workload cluster with name "capz-e2e-oegsik-ha" using the "(default)" template (Kubernetes v1.22.1, 3 control-plane machines, 2 worker machines) INFO: Getting the cluster template yaml ... skipping 67 lines ... 
May 10 19:58:53.222: INFO: starting to delete external LB service webbh04sp-elb May 10 19:58:53.306: INFO: starting to delete deployment webbh04sp May 10 19:58:53.342: INFO: starting to delete job curl-to-elb-jobm7j8vmnwotd [1mSTEP[0m: creating a Kubernetes client to the workload cluster [1mSTEP[0m: Creating development namespace May 10 19:58:53.449: INFO: starting to create dev deployment namespace 2022/05/10 19:58:53 failed trying to get namespace (development):namespaces "development" not found 2022/05/10 19:58:53 namespace development does not exist, creating... [1mSTEP[0m: Creating production namespace May 10 19:58:53.519: INFO: starting to create prod deployment namespace 2022/05/10 19:58:53 failed trying to get namespace (production):namespaces "production" not found 2022/05/10 19:58:53 namespace production does not exist, creating... [1mSTEP[0m: Creating frontendProd, backend and network-policy pod deployments May 10 19:58:53.606: INFO: starting to create frontend-prod deployments May 10 19:58:53.646: INFO: starting to create frontend-dev deployments May 10 19:58:53.686: INFO: starting to create backend deployments May 10 19:58:53.745: INFO: starting to create network-policy deployments ... skipping 11 lines ... [1mSTEP[0m: Ensuring we have outbound internet access from the network-policy pods [1mSTEP[0m: Ensuring we have connectivity from network-policy pods to frontend-prod pods [1mSTEP[0m: Ensuring we have connectivity from network-policy pods to backend pods [1mSTEP[0m: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace May 10 19:59:16.623: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace [1mSTEP[0m: Ensuring we no longer have ingress access from the network-policy pods to backend pods curl: (7) Failed to connect to 192.168.84.66 port 80: Connection timed out [1mSTEP[0m: Cleaning up after ourselves May 10 20:01:28.144: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves [1mSTEP[0m: Applying a network policy to deny egress access in development namespace May 10 20:01:28.303: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace [1mSTEP[0m: Ensuring we no longer have egress access from the network-policy pods to backend pods curl: (7) Failed to connect to 192.168.84.66 port 80: Connection timed out curl: (7) Failed to connect to 192.168.84.66 port 80: Connection timed out [1mSTEP[0m: Cleaning up after ourselves May 10 20:05:50.290: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves [1mSTEP[0m: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace May 10 20:05:50.471: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace [1mSTEP[0m: Ensuring we have egress access from pods with matching labels [1mSTEP[0m: Ensuring we don't have ingress access from pods without matching labels curl: (7) Failed to connect to 192.168.7.133 port 80: Connection timed out [1mSTEP[0m: Cleaning up after ourselves May 10 20:08:01.548: INFO: starting to cleaning up network policy 
development/backend-allow-egress-pod-label after ourselves [1mSTEP[0m: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace May 10 20:08:01.727: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace [1mSTEP[0m: Ensuring we have egress access from pods with matching labels [1mSTEP[0m: Ensuring we don't have ingress access from pods without matching labels curl: (7) Failed to connect to 192.168.84.65 port 80: Connection timed out curl: (7) Failed to connect to 192.168.7.133 port 80: Connection timed out [1mSTEP[0m: Cleaning up after ourselves May 10 20:12:23.692: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves [1mSTEP[0m: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels May 10 20:12:23.916: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels [1mSTEP[0m: Ensuring we have ingress access from pods with matching labels [1mSTEP[0m: Ensuring we don't have ingress access from pods without matching labels curl: (7) Failed to connect to 192.168.84.66 port 80: Connection timed out [1mSTEP[0m: Cleaning up after ourselves May 10 20:14:34.576: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves [1mSTEP[0m: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development May 10 20:14:34.761: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development [1mSTEP[0m: Ensuring we don't have ingress access from role:frontend pods in production namespace curl: (7) Failed to connect to 192.168.84.66 port 80: Connection timed out [1mSTEP[0m: Ensuring we have ingress access from role:frontend pods in development namespace [1mSTEP[0m: Dumping logs from the "capz-e2e-oegsik-ha" workload cluster [1mSTEP[0m: Dumping workload cluster capz-e2e-oegsik/capz-e2e-oegsik-ha logs May 10 20:16:46.243: INFO: INFO: Collecting logs for node capz-e2e-oegsik-ha-control-plane-9fvrp in cluster capz-e2e-oegsik-ha in namespace capz-e2e-oegsik May 10 20:16:56.972: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-oegsik-ha-control-plane-9fvrp ... skipping 39 lines ... 
[1mSTEP[0m: Creating log watcher for controller kube-system/calico-node-dqgmb, container calico-node [1mSTEP[0m: Creating log watcher for controller kube-system/calico-node-2hsgr, container calico-node [1mSTEP[0m: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-oegsik-ha-control-plane-9sjkn, container kube-controller-manager [1mSTEP[0m: Creating log watcher for controller kube-system/kube-proxy-5r7g8, container kube-proxy [1mSTEP[0m: Creating log watcher for controller kube-system/kube-proxy-4csqf, container kube-proxy [1mSTEP[0m: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-oegsik-ha-control-plane-9fvrp, container kube-apiserver [1mSTEP[0m: Got error while iterating over activity logs for resource group capz-e2e-oegsik-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded [1mSTEP[0m: Fetching activity logs took 30.001192848s [1mSTEP[0m: Dumping all the Cluster API resources in the "capz-e2e-oegsik" namespace [1mSTEP[0m: Deleting all clusters in the capz-e2e-oegsik namespace [1mSTEP[0m: Deleting cluster capz-e2e-oegsik-ha INFO: Waiting for the Cluster capz-e2e-oegsik/capz-e2e-oegsik-ha to be deleted [1mSTEP[0m: Waiting for cluster capz-e2e-oegsik-ha to be deleted [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-oegsik-ha-control-plane-6pffz, container kube-controller-manager: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/etcd-capz-e2e-oegsik-ha-control-plane-6pffz, container etcd: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-oegsik-ha-control-plane-6pffz, container kube-scheduler: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-node-2hsgr, container calico-node: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-node-dqgmb, container calico-node: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-oegsik-ha-control-plane-9fvrp, container kube-controller-manager: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-proxy-lhcg7, container kube-proxy: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-oegsik-ha-control-plane-9fvrp, container kube-scheduler: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/etcd-capz-e2e-oegsik-ha-control-plane-9fvrp, container etcd: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-84s5n, container calico-kube-controllers: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-node-htfrw, container calico-node: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-oegsik-ha-control-plane-6pffz, container kube-apiserver: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-node-5226l, container calico-node: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-proxy-5r7g8, container kube-proxy: http2: client connection lost [1mSTEP[0m: Got error while 
streaming logs for pod kube-system/kube-apiserver-capz-e2e-oegsik-ha-control-plane-9fvrp, container kube-apiserver: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-proxy-rpscc, container kube-proxy: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-proxy-4csqf, container kube-proxy: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/coredns-78fcd69978-g5dbl, container coredns: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/coredns-78fcd69978-59jbs, container coredns: http2: client connection lost [1mSTEP[0m: Deleting namespace used for hosting the "create-workload-cluster" test spec INFO: Deleting namespace capz-e2e-oegsik [1mSTEP[0m: Checking if any resources are left over in Azure for spec "create-workload-cluster" [1mSTEP[0m: Redacting sensitive information from logs INFO: "With 3 control-plane nodes and 2 worker nodes" ran for 42m30s on Ginkgo node 3 of 3 ... skipping 8 lines ... [1mCreates a public management cluster in the same vnet[0m [37m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:141[0m INFO: "Creates a public management cluster in the same vnet" started at Tue, 10 May 2022 19:49:27 UTC on Ginkgo node 1 of 3 [1mSTEP[0m: Creating namespace "capz-e2e-zv3jxp" for hosting the cluster May 10 19:49:27.526: INFO: starting to create namespace for hosting the "capz-e2e-zv3jxp" test spec 2022/05/10 19:49:27 failed trying to get namespace (capz-e2e-zv3jxp):namespaces "capz-e2e-zv3jxp" not found INFO: Creating namespace capz-e2e-zv3jxp INFO: Creating event watcher for namespace "capz-e2e-zv3jxp" May 10 19:49:27.566: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-zv3jxp-public-custom-vnet [1mSTEP[0m: creating Azure clients with the workload cluster's subscription [1mSTEP[0m: creating a resource group ... skipping 100 lines ... 
[1mSTEP[0m: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-2b26s, container calico-kube-controllers [1mSTEP[0m: Creating log watcher for controller kube-system/etcd-capz-e2e-zv3jxp-public-custom-vnet-control-plane-7m2vt, container etcd [1mSTEP[0m: Creating log watcher for controller kube-system/kube-proxy-fwggf, container kube-proxy [1mSTEP[0m: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-zv3jxp-public-custom-vnet-control-plane-7m2vt, container kube-apiserver [1mSTEP[0m: Creating log watcher for controller kube-system/kube-proxy-rzrf9, container kube-proxy [1mSTEP[0m: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-zv3jxp-public-custom-vnet-control-plane-7m2vt, container kube-scheduler [1mSTEP[0m: Got error while iterating over activity logs for resource group capz-e2e-zv3jxp-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded [1mSTEP[0m: Fetching activity logs took 30.000623473s [1mSTEP[0m: Dumping all the Cluster API resources in the "capz-e2e-zv3jxp" namespace [1mSTEP[0m: Deleting all clusters in the capz-e2e-zv3jxp namespace [1mSTEP[0m: Deleting cluster capz-e2e-zv3jxp-public-custom-vnet INFO: Waiting for the Cluster capz-e2e-zv3jxp/capz-e2e-zv3jxp-public-custom-vnet to be deleted [1mSTEP[0m: Waiting for cluster capz-e2e-zv3jxp-public-custom-vnet to be deleted W0510 20:38:01.238859 24216 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding I0510 20:38:32.355045 24216 trace.go:205] Trace[2116768800]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (10-May-2022 20:38:02.354) (total time: 30000ms): Trace[2116768800]: [30.000942484s] [30.000942484s] END E0510 20:38:32.355111 24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp 20.124.45.9:6443: i/o timeout I0510 20:39:04.873660 24216 trace.go:205] Trace[2049821915]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (10-May-2022 20:38:34.872) (total time: 30001ms): Trace[2049821915]: [30.001479282s] [30.001479282s] END E0510 20:39:04.873741 24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp 20.124.45.9:6443: i/o timeout I0510 20:39:38.957291 24216 trace.go:205] Trace[572719943]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (10-May-2022 20:39:08.955) (total time: 30002ms): Trace[572719943]: [30.002142032s] [30.002142032s] END E0510 20:39:38.957387 24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get 
"https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp 20.124.45.9:6443: i/o timeout I0510 20:40:17.935027 24216 trace.go:205] Trace[22700151]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (10-May-2022 20:39:47.934) (total time: 30000ms): Trace[22700151]: [30.000776655s] [30.000776655s] END E0510 20:40:17.935198 24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp 20.124.45.9:6443: i/o timeout I0510 20:41:09.548827 24216 trace.go:205] Trace[2046701567]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (10-May-2022 20:40:39.547) (total time: 30000ms): Trace[2046701567]: [30.000796551s] [30.000796551s] END E0510 20:41:09.548898 24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp 20.124.45.9:6443: i/o timeout I0510 20:42:18.424441 24216 trace.go:205] Trace[92833028]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (10-May-2022 20:41:48.422) (total time: 30001ms): Trace[92833028]: [30.001476696s] [30.001476696s] END E0510 20:42:18.424504 24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp 20.124.45.9:6443: i/o timeout [1mSTEP[0m: Deleting namespace used for hosting the "create-workload-cluster" test spec INFO: Deleting namespace capz-e2e-zv3jxp [1mSTEP[0m: Running additional cleanup for the "create-workload-cluster" test spec May 10 20:43:21.927: INFO: deleting an existing virtual network "custom-vnet" May 10 20:43:32.355: INFO: deleting an existing route table "node-routetable" May 10 20:43:34.690: INFO: deleting an existing network security group "node-nsg" I0510 20:43:44.009529 24216 trace.go:205] Trace[821921973]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (10-May-2022 20:43:14.007) (total time: 30001ms): Trace[821921973]: [30.001608636s] [30.001608636s] END E0510 20:43:44.009602 24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp 20.124.45.9:6443: i/o timeout May 10 20:43:44.970: INFO: deleting an existing network security group "control-plane-nsg" May 10 20:43:55.284: INFO: verifying the existing resource group "capz-e2e-zv3jxp-public-custom-vnet" is empty May 10 20:43:55.346: INFO: deleting the existing resource group "capz-e2e-zv3jxp-public-custom-vnet" E0510 20:44:25.482062 24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get 
"https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host E0510 20:45:05.248544 24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host [1mSTEP[0m: Checking if any resources are left over in Azure for spec "create-workload-cluster" [1mSTEP[0m: Redacting sensitive information from logs E0510 20:46:02.990288 24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host INFO: "Creates a public management cluster in the same vnet" ran for 56m40s on Ginkgo node 1 of 3 [32m• [SLOW TEST:3400.228 seconds][0m Workload cluster creation [90m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43[0m ... skipping 6 lines ... [1mwith a 1 control plane nodes and 2 worker nodes[0m [37m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:419[0m INFO: "with a 1 control plane nodes and 2 worker nodes" started at Tue, 10 May 2022 20:28:26 UTC on Ginkgo node 2 of 3 [1mSTEP[0m: Creating namespace "capz-e2e-2bw21h" for hosting the cluster May 10 20:28:26.213: INFO: starting to create namespace for hosting the "capz-e2e-2bw21h" test spec 2022/05/10 20:28:26 failed trying to get namespace (capz-e2e-2bw21h):namespaces "capz-e2e-2bw21h" not found INFO: Creating namespace capz-e2e-2bw21h INFO: Creating event watcher for namespace "capz-e2e-2bw21h" May 10 20:28:26.254: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-2bw21h-oot INFO: Creating the workload cluster with name "capz-e2e-2bw21h-oot" using the "external-cloud-provider" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines) INFO: Getting the cluster template yaml ... skipping 98 lines ... 
[1mSTEP[0m: Fetching activity logs took 593.543506ms [1mSTEP[0m: Dumping all the Cluster API resources in the "capz-e2e-2bw21h" namespace [1mSTEP[0m: Deleting all clusters in the capz-e2e-2bw21h namespace [1mSTEP[0m: Deleting cluster capz-e2e-2bw21h-oot INFO: Waiting for the Cluster capz-e2e-2bw21h/capz-e2e-2bw21h-oot to be deleted [1mSTEP[0m: Waiting for cluster capz-e2e-2bw21h-oot to be deleted [1mSTEP[0m: Got error while streaming logs for pod kube-system/coredns-78fcd69978-44kmm, container coredns: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/cloud-controller-manager, container cloud-controller-manager: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-2bw21h-oot-control-plane-q5qrq, container kube-scheduler: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-node-7ppz7, container calico-node: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-proxy-4df2r, container kube-proxy: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-2bw21h-oot-control-plane-q5qrq, container kube-controller-manager: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/coredns-78fcd69978-b64lq, container coredns: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/etcd-capz-e2e-2bw21h-oot-control-plane-q5qrq, container etcd: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-vdz7n, container calico-kube-controllers: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/cloud-node-manager-mrbcz, container cloud-node-manager: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-2bw21h-oot-control-plane-q5qrq, container kube-apiserver: http2: client connection lost [1mSTEP[0m: Deleting namespace used for hosting the "create-workload-cluster" test spec INFO: Deleting namespace capz-e2e-2bw21h [1mSTEP[0m: Checking if any resources are left over in Azure for spec "create-workload-cluster" [1mSTEP[0m: Redacting sensitive information from logs INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 20m37s on Ginkgo node 2 of 3 ... skipping 10 lines ... [1mwith a single control plane node and 1 node[0m [37m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454[0m INFO: "with a single control plane node and 1 node" started at Tue, 10 May 2022 20:31:57 UTC on Ginkgo node 3 of 3 [1mSTEP[0m: Creating namespace "capz-e2e-ij5hsx" for hosting the cluster May 10 20:31:57.234: INFO: starting to create namespace for hosting the "capz-e2e-ij5hsx" test spec 2022/05/10 20:31:57 failed trying to get namespace (capz-e2e-ij5hsx):namespaces "capz-e2e-ij5hsx" not found INFO: Creating namespace capz-e2e-ij5hsx INFO: Creating event watcher for namespace "capz-e2e-ij5hsx" May 10 20:31:57.293: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-ij5hsx-aks INFO: Creating the workload cluster with name "capz-e2e-ij5hsx-aks" using the "aks-multi-tenancy" template (Kubernetes v1.22.6, 1 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml ... skipping 83 lines ... 
[1mwith a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node[0m [37m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543[0m INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Tue, 10 May 2022 20:49:02 UTC on Ginkgo node 2 of 3 [1mSTEP[0m: Creating namespace "capz-e2e-4wjolj" for hosting the cluster May 10 20:49:02.985: INFO: starting to create namespace for hosting the "capz-e2e-4wjolj" test spec 2022/05/10 20:49:02 failed trying to get namespace (capz-e2e-4wjolj):namespaces "capz-e2e-4wjolj" not found INFO: Creating namespace capz-e2e-4wjolj INFO: Creating event watcher for namespace "capz-e2e-4wjolj" May 10 20:49:03.021: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-4wjolj-win-vmss INFO: Creating the workload cluster with name "capz-e2e-4wjolj-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml ... skipping 129 lines ... [1mSTEP[0m: Fetching activity logs took 1.047437251s [1mSTEP[0m: Dumping all the Cluster API resources in the "capz-e2e-4wjolj" namespace [1mSTEP[0m: Deleting all clusters in the capz-e2e-4wjolj namespace [1mSTEP[0m: Deleting cluster capz-e2e-4wjolj-win-vmss INFO: Waiting for the Cluster capz-e2e-4wjolj/capz-e2e-4wjolj-win-vmss to be deleted [1mSTEP[0m: Waiting for cluster capz-e2e-4wjolj-win-vmss to be deleted [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-proxy-windows-fzvds, container kube-proxy: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-wvmpm, container kube-flannel: http2: client connection lost [1mSTEP[0m: Deleting namespace used for hosting the "create-workload-cluster" test spec INFO: Deleting namespace capz-e2e-4wjolj [1mSTEP[0m: Checking if any resources are left over in Azure for spec "create-workload-cluster" [1mSTEP[0m: Redacting sensitive information from logs INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 29m20s on Ginkgo node 2 of 3 ... skipping 10 lines ... [1mWith 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node[0m [37m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496[0m INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Tue, 10 May 2022 20:46:07 UTC on Ginkgo node 1 of 3 [1mSTEP[0m: Creating namespace "capz-e2e-k1l7cb" for hosting the cluster May 10 20:46:07.758: INFO: starting to create namespace for hosting the "capz-e2e-k1l7cb" test spec 2022/05/10 20:46:07 failed trying to get namespace (capz-e2e-k1l7cb):namespaces "capz-e2e-k1l7cb" not found INFO: Creating namespace capz-e2e-k1l7cb INFO: Creating event watcher for namespace "capz-e2e-k1l7cb" May 10 20:46:07.804: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-k1l7cb-win-ha INFO: Creating the workload cluster with name "capz-e2e-k1l7cb-win-ha" using the "windows" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml ... skipping 12 lines ... 
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-k1l7cb-win-ha-flannel created configmap/cni-capz-e2e-k1l7cb-win-ha-flannel created INFO: Waiting for the cluster infrastructure to be provisioned [1mSTEP[0m: Waiting for cluster to enter the provisioned phase E0510 20:46:47.049269 24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host E0510 20:47:20.730583 24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host INFO: Waiting for control plane to be initialized INFO: Waiting for the first control plane machine managed by capz-e2e-k1l7cb/capz-e2e-k1l7cb-win-ha-control-plane to be provisioned [1mSTEP[0m: Waiting for one control plane node to exist E0510 20:48:08.511476 24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host E0510 20:49:06.067413 24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host INFO: Waiting for control plane to be ready INFO: Waiting for the remaining control plane machines managed by capz-e2e-k1l7cb/capz-e2e-k1l7cb-win-ha-control-plane to be provisioned [1mSTEP[0m: Waiting for all control plane nodes to exist E0510 20:50:00.623432 24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host E0510 20:50:37.210213 24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host E0510 20:51:24.444416 24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: 
failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host E0510 20:51:59.926659 24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host E0510 20:52:46.473896 24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host E0510 20:53:22.650318 24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host E0510 20:53:54.664422 24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host E0510 20:54:25.155379 24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host INFO: Waiting for control plane capz-e2e-k1l7cb/capz-e2e-k1l7cb-win-ha-control-plane to be ready (implies underlying nodes to be ready as well) [1mSTEP[0m: Waiting for the control plane to be ready INFO: Waiting for the machine deployments to be provisioned [1mSTEP[0m: Waiting for the workload nodes to exist [1mSTEP[0m: Waiting for the workload nodes to exist INFO: Waiting for the machine pools to be provisioned ... skipping 3 lines ... 
May 10 20:54:39.995: INFO: starting to wait for deployment to become available
May 10 20:55:00.099: INFO: Deployment default/webmu3598 is now available, took 20.104401233s
STEP: creating an internal Load Balancer service
May 10 20:55:00.099: INFO: starting to create an internal Load Balancer service
STEP: waiting for service default/webmu3598-ilb to be available
May 10 20:55:00.204: INFO: waiting for service default/webmu3598-ilb to be available
E0510 20:55:07.347016 24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
May 10 20:55:50.438: INFO: service default/webmu3598-ilb is available, took 50.233735522s
STEP: connecting to the internal LB service from a curl pod
May 10 20:55:50.468: INFO: starting to create a curl to ilb job
STEP: waiting for job default/curl-to-ilb-job5b2lu to be complete
May 10 20:55:50.530: INFO: waiting for job default/curl-to-ilb-job5b2lu to be complete
May 10 20:56:00.602: INFO: job default/curl-to-ilb-job5b2lu is complete, took 10.072306014s
STEP: deleting the ilb test resources
May 10 20:56:00.602: INFO: deleting the ilb service: webmu3598-ilb
May 10 20:56:00.689: INFO: deleting the ilb job: curl-to-ilb-job5b2lu
STEP: creating an external Load Balancer service
May 10 20:56:00.734: INFO: starting to create an external Load Balancer service
STEP: waiting for service default/webmu3598-elb to be available
May 10 20:56:00.821: INFO: waiting for service default/webmu3598-elb to be available
E0510 20:56:05.968996 24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
May 10 20:56:20.918: INFO: service default/webmu3598-elb is available, took 20.096891749s
STEP: connecting to the external LB service from a curl pod
May 10 20:56:20.950: INFO: starting to create curl-to-elb job
STEP: waiting for job default/curl-to-elb-job0958xv4f4j5 to be complete
May 10 20:56:20.998: INFO: waiting for job default/curl-to-elb-job0958xv4f4j5 to be complete
May 10 20:56:31.070: INFO: job default/curl-to-elb-job0958xv4f4j5 is complete, took 10.072525582s
... skipping 6 lines ...
May 10 20:56:31.466: INFO: starting to delete deployment webmu3598 May 10 20:56:31.504: INFO: starting to delete job curl-to-elb-job0958xv4f4j5 [1mSTEP[0m: creating a Kubernetes client to the workload cluster [1mSTEP[0m: creating an HTTP deployment [1mSTEP[0m: waiting for deployment default/web-windowsw5g00y to be available May 10 20:56:31.652: INFO: starting to wait for deployment to become available E0510 20:56:39.077329 24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host E0510 20:57:22.774167 24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host May 10 20:57:31.912: INFO: Deployment default/web-windowsw5g00y is now available, took 1m0.260147487s [1mSTEP[0m: creating an internal Load Balancer service May 10 20:57:31.912: INFO: starting to create an internal Load Balancer service [1mSTEP[0m: waiting for service default/web-windowsw5g00y-ilb to be available May 10 20:57:32.001: INFO: waiting for service default/web-windowsw5g00y-ilb to be available May 10 20:57:42.066: INFO: service default/web-windowsw5g00y-ilb is available, took 10.065318949s ... skipping 6 lines ... May 10 20:57:52.211: INFO: deleting the ilb service: web-windowsw5g00y-ilb May 10 20:57:52.307: INFO: deleting the ilb job: curl-to-ilb-jobxph5u [1mSTEP[0m: creating an external Load Balancer service May 10 20:57:52.350: INFO: starting to create an external Load Balancer service [1mSTEP[0m: waiting for service default/web-windowsw5g00y-elb to be available May 10 20:57:52.420: INFO: waiting for service default/web-windowsw5g00y-elb to be available E0510 20:58:13.806601 24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host May 10 20:58:42.617: INFO: service default/web-windowsw5g00y-elb is available, took 50.197641367s [1mSTEP[0m: connecting to the external LB service from a curl pod May 10 20:58:42.649: INFO: starting to create curl-to-elb job [1mSTEP[0m: waiting for job default/curl-to-elb-jobykg2wmth3cn to be complete May 10 20:58:42.696: INFO: waiting for job default/curl-to-elb-jobykg2wmth3cn to be complete May 10 20:58:52.768: INFO: job default/curl-to-elb-jobykg2wmth3cn is complete, took 10.072365984s ... skipping 10 lines ... 
May 10 20:58:53.046: INFO: INFO: Collecting logs for node capz-e2e-k1l7cb-win-ha-control-plane-lxkzg in cluster capz-e2e-k1l7cb-win-ha in namespace capz-e2e-k1l7cb
May 10 20:59:03.820: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-k1l7cb-win-ha-control-plane-lxkzg
May 10 20:59:04.588: INFO: INFO: Collecting logs for node capz-e2e-k1l7cb-win-ha-control-plane-zdrq9 in cluster capz-e2e-k1l7cb-win-ha in namespace capz-e2e-k1l7cb
E0510 20:59:11.818270 24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
May 10 20:59:15.029: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-k1l7cb-win-ha-control-plane-zdrq9
May 10 20:59:15.415: INFO: INFO: Collecting logs for node capz-e2e-k1l7cb-win-ha-control-plane-2kbzt in cluster capz-e2e-k1l7cb-win-ha in namespace capz-e2e-k1l7cb
May 10 20:59:22.875: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-k1l7cb-win-ha-control-plane-2kbzt
May 10 20:59:23.168: INFO: INFO: Collecting logs for node capz-e2e-k1l7cb-win-ha-md-0-sktwx in cluster capz-e2e-k1l7cb-win-ha in namespace capz-e2e-k1l7cb
May 10 20:59:34.089: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-k1l7cb-win-ha-md-0-sktwx
May 10 20:59:34.395: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster capz-e2e-k1l7cb-win-ha in namespace capz-e2e-k1l7cb
E0510 21:00:05.898246 24216 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-zv3jxp/events?resourceVersion=8697": dial tcp: lookup capz-e2e-zv3jxp-public-custom-vnet-48e00315.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host
May 10 21:00:11.970: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-k1l7cb-win-ha-md-win-vz958
STEP: Dumping workload cluster capz-e2e-k1l7cb/capz-e2e-k1l7cb-win-ha kube-system pod logs
STEP: Fetching kube-system pod logs took 305.801634ms
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-p2d5t, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-t4l7j, container kube-proxy
... skipping 23 lines ...
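The "Creating log watcher for controller kube-system/..., container ..." steps stream each container's logs into the test artifacts; the "Got error while streaming logs ... http2: client connection lost" messages further down are those streams being cut off when the connection to the workload cluster is lost. A minimal sketch of streaming one container's logs with client-go (pod, container, and kubeconfig names are assumptions, not the suite's real helper):

    package main

    import (
        "context"
        "io"
        "os"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path is illustrative.
        cfg, err := clientcmd.BuildConfigFromFlags("", "workload.kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Follow one container's log stream, as the e2e log watchers do per kube-system pod.
        req := cs.CoreV1().Pods("kube-system").GetLogs("example-pod", &corev1.PodLogOptions{
            Container: "example-container",
            Follow:    true,
        })
        stream, err := req.Stream(context.TODO())
        if err != nil {
            panic(err)
        }
        defer stream.Close()

        // Copy until the stream ends; if the connection to the cluster drops mid-stream,
        // this returns an error such as "http2: client connection lost".
        if _, err := io.Copy(os.Stdout, stream); err != nil {
            panic(err)
        }
    }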
STEP: Fetching activity logs took 1.02829977s
STEP: Dumping all the Cluster API resources in the "capz-e2e-k1l7cb" namespace
STEP: Deleting all clusters in the capz-e2e-k1l7cb namespace
STEP: Deleting cluster capz-e2e-k1l7cb-win-ha
INFO: Waiting for the Cluster capz-e2e-k1l7cb/capz-e2e-k1l7cb-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-k1l7cb-win-ha to be deleted
(while the deletion is in progress, the reflector.go:138 "Failed to watch *v1.Event ... no such host" error for the capz-e2e-zv3jxp events watch seen above repeats 17 times, from 21:00:48 through 21:12:31)
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-k1l7cb-win-ha-control-plane-lxkzg, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-t4l7j, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-k1l7cb-win-ha-control-plane-lxkzg, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-v7454, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-k64mn, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-p2d5t, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-hmmsx, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-pj5cg, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-8d9ng, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-k1l7cb-win-ha-control-plane-2kbzt, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-8q9c8, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-vb8p7, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-k1l7cb-win-ha-control-plane-2kbzt, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-7nqdq, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-k1l7cb-win-ha-control-plane-lxkzg, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-k1l7cb-win-ha-control-plane-lxkzg, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-k1l7cb-win-ha-control-plane-2kbzt, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-k1l7cb-win-ha-control-plane-2kbzt, container kube-scheduler: http2: client connection lost
(the same reflector error repeats another 15 times, from 21:13:06 through 21:23:11)
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-k1l7cb
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
(one final repeat of the reflector error at 21:23:52)
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 38m8s on Ginkgo node 1 of 3

• [SLOW TEST:2288.227 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 5 lines ...

STEP: Tearing down the management cluster

Summarizing 1 Failure:

[Fail] Workload cluster creation Creating an AKS cluster [It] with a single control plane node and 1 node
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.8-0.20220215165403-0234afe87ffe/framework/cluster_helpers.go:134

Ran 8 of 22 Specs in 5809.212 seconds
FAIL! -- 7 Passed | 1 Failed | 0 Pending | 14 Skipped

Ginkgo ran 1 suite in 1h38m20.509357971s
Test Suite Failed
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...