Result | FAILURE |
Tests | 1 failed / 8 succeeded |
Started | |
Elapsed | 2h7m |
Revision | release-0.5 |
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\sWindows\senabled\sVMSS\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\san\sLinux\sAzureMachinePool\swith\s1\snodes\sand\sWindows\sAzureMachinePool\swith\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543
Timed out after 900.001s.
Expected
    <int>: 0
to equal
    <int>: 1
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.8-0.20220215165403-0234afe87ffe/framework/machinepool_helpers.go:85
from junit.e2e_suite.2.xml
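The timeout is raised by the machine-pool wait in the cluster-api test framework (machinepool_helpers.go:85 above): a Gomega assertion polls the number of nodes backing the MachinePool until it equals the expected replica count, and here the Windows AzureMachinePool never registered its single node within the 900s window, so the count stayed at 0. Below is a minimal sketch of that polling pattern; the helper name countReadyMachinePoolNodes and the 10s poll interval are illustrative assumptions, not the framework's actual code.

    package e2e_test

    import (
        "testing"
        "time"

        . "github.com/onsi/gomega"
    )

    // countReadyMachinePoolNodes is a hypothetical stand-in for however the
    // framework counts the nodes referenced by a MachinePool's status.
    func countReadyMachinePoolNodes() int {
        return 0 // a VMSS instance that never bootstraps never shows up here
    }

    func TestMachinePoolNodesExist(t *testing.T) {
        g := NewWithT(t)
        expectedReplicas := 1

        // Poll until the node count matches the replica count, or fail after
        // 900s (the window seen in this run). On timeout Gomega prints the
        // "Expected <int>: 0 to equal <int>: 1" message recorded above.
        g.Eventually(func() int {
            return countReadyMachinePoolNodes()
        }, 900*time.Second, 10*time.Second).Should(Equal(expectedReplicas))
    }

When an assertion like this fires, the cause is usually that the scale-set instance never bootstrapped or never joined the cluster, so the instance's boot and cloud-init logs are the more useful place to look than the assertion itself.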
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Tue, 12 Apr 2022 20:47:38 UTC on Ginkgo node 2 of 3 �[1mSTEP�[0m: Creating namespace "capz-e2e-p1kjcf" for hosting the cluster Apr 12 20:47:38.554: INFO: starting to create namespace for hosting the "capz-e2e-p1kjcf" test spec 2022/04/12 20:47:38 failed trying to get namespace (capz-e2e-p1kjcf):namespaces "capz-e2e-p1kjcf" not found INFO: Creating namespace capz-e2e-p1kjcf INFO: Creating event watcher for namespace "capz-e2e-p1kjcf" Apr 12 20:47:38.606: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-p1kjcf-win-vmss INFO: Creating the workload cluster with name "capz-e2e-p1kjcf-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster capz-e2e-p1kjcf-win-vmss --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 1 --worker-machine-count 1 --flavor machine-pool-windows INFO: Applying the cluster template yaml to the cluster cluster.cluster.x-k8s.io/capz-e2e-p1kjcf-win-vmss created azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-p1kjcf-win-vmss created kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-p1kjcf-win-vmss-control-plane created azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-p1kjcf-win-vmss-control-plane created azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created machinepool.cluster.x-k8s.io/capz-e2e-p1kjcf-win-vmss-mp-0 created azuremachinepool.infrastructure.cluster.x-k8s.io/capz-e2e-p1kjcf-win-vmss-mp-0 created kubeadmconfig.bootstrap.cluster.x-k8s.io/capz-e2e-p1kjcf-win-vmss-mp-0 created machinepool.cluster.x-k8s.io/capz-e2e-p1kjcf-win-vmss-mp-win created azuremachinepool.infrastructure.cluster.x-k8s.io/capz-e2e-p1kjcf-win-vmss-mp-win created kubeadmconfig.bootstrap.cluster.x-k8s.io/capz-e2e-p1kjcf-win-vmss-mp-win created clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-p1kjcf-win-vmss-flannel created configmap/cni-capz-e2e-p1kjcf-win-vmss-flannel created INFO: Waiting for the cluster infrastructure to be provisioned �[1mSTEP�[0m: Waiting for cluster to enter the provisioned phase INFO: Waiting for control plane to be initialized INFO: Waiting for the first control plane machine managed by capz-e2e-p1kjcf/capz-e2e-p1kjcf-win-vmss-control-plane to be provisioned �[1mSTEP�[0m: Waiting for one control plane node to exist INFO: Waiting for control plane to be ready INFO: Waiting for control plane capz-e2e-p1kjcf/capz-e2e-p1kjcf-win-vmss-control-plane to be ready (implies underlying nodes to be ready as well) �[1mSTEP�[0m: Waiting for the control plane to be ready INFO: Waiting for the machine deployments to be provisioned INFO: Waiting for the machine pools to be provisioned �[1mSTEP�[0m: Waiting for the machine pool workload nodes to exist �[1mSTEP�[0m: Dumping logs from the "capz-e2e-p1kjcf-win-vmss" workload cluster �[1mSTEP�[0m: Dumping workload cluster capz-e2e-p1kjcf/capz-e2e-p1kjcf-win-vmss logs Apr 12 21:06:00.728: INFO: INFO: Collecting logs for node capz-e2e-p1kjcf-win-vmss-control-plane-zlxfb in cluster capz-e2e-p1kjcf-win-vmss in namespace capz-e2e-p1kjcf Apr 12 21:06:11.406: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-p1kjcf-win-vmss-control-plane-zlxfb �[1mSTEP�[0m: Dumping workload cluster capz-e2e-p1kjcf/capz-e2e-p1kjcf-win-vmss 
kube-system pod logs �[1mSTEP�[0m: Fetching kube-system pod logs took 591.567926ms �[1mSTEP�[0m: Dumping workload cluster capz-e2e-p1kjcf/capz-e2e-p1kjcf-win-vmss Azure activity log �[1mSTEP�[0m: Creating log watcher for controller kube-system/coredns-78fcd69978-87zwt, container coredns �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-proxy-n9bqb, container kube-proxy �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-proxy-xlx2v, container kube-proxy �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-proxy-windows-g4fmn, container kube-proxy �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-p1kjcf-win-vmss-control-plane-zlxfb, container kube-scheduler �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-w84fr, container kube-flannel �[1mSTEP�[0m: Creating log watcher for controller kube-system/etcd-capz-e2e-p1kjcf-win-vmss-control-plane-zlxfb, container etcd �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-p1kjcf-win-vmss-control-plane-zlxfb, container kube-apiserver �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-zwlxh, container kube-flannel �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-flannel-ds-windows-amd64-29bp9, container kube-flannel �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-p1kjcf-win-vmss-control-plane-zlxfb, container kube-controller-manager �[1mSTEP�[0m: Creating log watcher for controller kube-system/coredns-78fcd69978-ms9hs, container coredns �[1mSTEP�[0m: Fetching activity logs took 1.081303607s �[1mSTEP�[0m: Dumping all the Cluster API resources in the "capz-e2e-p1kjcf" namespace �[1mSTEP�[0m: Deleting all clusters in the capz-e2e-p1kjcf namespace �[1mSTEP�[0m: Deleting cluster capz-e2e-p1kjcf-win-vmss INFO: Waiting for the Cluster capz-e2e-p1kjcf/capz-e2e-p1kjcf-win-vmss to be deleted �[1mSTEP�[0m: Waiting for cluster capz-e2e-p1kjcf-win-vmss to be deleted �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-proxy-xlx2v, container kube-proxy: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-zwlxh, container kube-flannel: http2: client connection lost �[1mSTEP�[0m: Deleting namespace used for hosting the "create-workload-cluster" test spec INFO: Deleting namespace capz-e2e-p1kjcf �[1mSTEP�[0m: Checking if any resources are left over in Azure for spec "create-workload-cluster" �[1mSTEP�[0m: Redacting sensitive information from logs INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 49m46s on Ginkgo node 2 of 3
capz-e2e Workload cluster creation Creating a GPU-enabled cluster with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating a VMSS cluster with a single control plane node and an AzureMachinePool with 2 nodes
capz-e2e Workload cluster creation Creating a Windows Enabled cluster With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
capz-e2e Workload cluster creation Creating a cluster that uses the external cloud provider with a 1 control plane nodes and 2 worker nodes
capz-e2e Workload cluster creation Creating a ipv6 control-plane cluster With ipv6 worker node
capz-e2e Workload cluster creation Creating a private cluster Creates a public management cluster in the same vnet
capz-e2e Workload cluster creation Creating an AKS cluster with a single control plane node and 1 node
capz-e2e Workload cluster creation With 3 control-plane nodes and 2 worker nodes
capz-e2e Conformance Tests conformance-tests
capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec in a HA cluster Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd
capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec in a HA cluster using scale in rollout Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd
capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec in a single control plane cluster Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd
capz-e2e Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capz-e2e Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e Running the Cluster API E2E tests Should adopt up-to-date control plane Machines without modification Should adopt up-to-date control plane Machines without modification
capz-e2e Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
... skipping 433 lines ... [1mWith ipv6 worker node[0m [37m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:269[0m INFO: "With ipv6 worker node" started at Tue, 12 Apr 2022 19:39:13 UTC on Ginkgo node 1 of 3 [1mSTEP[0m: Creating namespace "capz-e2e-m5hnk5" for hosting the cluster Apr 12 19:39:13.060: INFO: starting to create namespace for hosting the "capz-e2e-m5hnk5" test spec 2022/04/12 19:39:13 failed trying to get namespace (capz-e2e-m5hnk5):namespaces "capz-e2e-m5hnk5" not found INFO: Creating namespace capz-e2e-m5hnk5 INFO: Creating event watcher for namespace "capz-e2e-m5hnk5" Apr 12 19:39:13.108: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-m5hnk5-ipv6 INFO: Creating the workload cluster with name "capz-e2e-m5hnk5-ipv6" using the "ipv6" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml ... skipping 93 lines ... [1mSTEP[0m: Fetching activity logs took 564.002273ms [1mSTEP[0m: Dumping all the Cluster API resources in the "capz-e2e-m5hnk5" namespace [1mSTEP[0m: Deleting all clusters in the capz-e2e-m5hnk5 namespace [1mSTEP[0m: Deleting cluster capz-e2e-m5hnk5-ipv6 INFO: Waiting for the Cluster capz-e2e-m5hnk5/capz-e2e-m5hnk5-ipv6 to be deleted [1mSTEP[0m: Waiting for cluster capz-e2e-m5hnk5-ipv6 to be deleted [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-proxy-z6hzh, container kube-proxy: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-m5hnk5-ipv6-control-plane-57h8r, container kube-controller-manager: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/etcd-capz-e2e-m5hnk5-ipv6-control-plane-jc9lh, container etcd: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/coredns-78fcd69978-pxgxc, container coredns: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-m5hnk5-ipv6-control-plane-jc9lh, container kube-apiserver: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-vsjbg, container calico-kube-controllers: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-m5hnk5-ipv6-control-plane-dzvpv, container kube-scheduler: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/coredns-78fcd69978-vnt8l, container coredns: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/etcd-capz-e2e-m5hnk5-ipv6-control-plane-57h8r, container etcd: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-node-7slrj, container calico-node: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-m5hnk5-ipv6-control-plane-jc9lh, container kube-controller-manager: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-m5hnk5-ipv6-control-plane-jc9lh, container kube-scheduler: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-m5hnk5-ipv6-control-plane-57h8r, container kube-apiserver: http2: client connection lost [1mSTEP[0m: Got error while streaming 
logs for pod kube-system/kube-apiserver-capz-e2e-m5hnk5-ipv6-control-plane-dzvpv, container kube-apiserver: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-proxy-7k6jw, container kube-proxy: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/etcd-capz-e2e-m5hnk5-ipv6-control-plane-dzvpv, container etcd: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-proxy-725wv, container kube-proxy: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-node-dpwgj, container calico-node: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-m5hnk5-ipv6-control-plane-57h8r, container kube-scheduler: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-proxy-7n4kj, container kube-proxy: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-node-xncbn, container calico-node: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-node-fcqcb, container calico-node: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-m5hnk5-ipv6-control-plane-dzvpv, container kube-controller-manager: http2: client connection lost [1mSTEP[0m: Deleting namespace used for hosting the "create-workload-cluster" test spec INFO: Deleting namespace capz-e2e-m5hnk5 [1mSTEP[0m: Checking if any resources are left over in Azure for spec "create-workload-cluster" [1mSTEP[0m: Redacting sensitive information from logs INFO: "With ipv6 worker node" ran for 17m39s on Ginkgo node 1 of 3 ... skipping 10 lines ... [1mwith a single control plane node and an AzureMachinePool with 2 nodes[0m [37m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:315[0m INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" started at Tue, 12 Apr 2022 19:56:52 UTC on Ginkgo node 1 of 3 [1mSTEP[0m: Creating namespace "capz-e2e-8qaxkx" for hosting the cluster Apr 12 19:56:52.158: INFO: starting to create namespace for hosting the "capz-e2e-8qaxkx" test spec 2022/04/12 19:56:52 failed trying to get namespace (capz-e2e-8qaxkx):namespaces "capz-e2e-8qaxkx" not found INFO: Creating namespace capz-e2e-8qaxkx INFO: Creating event watcher for namespace "capz-e2e-8qaxkx" Apr 12 19:56:52.201: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-8qaxkx-vmss INFO: Creating the workload cluster with name "capz-e2e-8qaxkx-vmss" using the "machine-pool" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines) INFO: Getting the cluster template yaml ... skipping 106 lines ... 
[1mSTEP[0m: Fetching activity logs took 601.475144ms [1mSTEP[0m: Dumping all the Cluster API resources in the "capz-e2e-8qaxkx" namespace [1mSTEP[0m: Deleting all clusters in the capz-e2e-8qaxkx namespace [1mSTEP[0m: Deleting cluster capz-e2e-8qaxkx-vmss INFO: Waiting for the Cluster capz-e2e-8qaxkx/capz-e2e-8qaxkx-vmss to be deleted [1mSTEP[0m: Waiting for cluster capz-e2e-8qaxkx-vmss to be deleted [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-8qaxkx-vmss-control-plane-x6dmj, container kube-controller-manager: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-proxy-5k85c, container kube-proxy: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-node-tndnk, container calico-node: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/coredns-78fcd69978-j2cdf, container coredns: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/etcd-capz-e2e-8qaxkx-vmss-control-plane-x6dmj, container etcd: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-8qaxkx-vmss-control-plane-x6dmj, container kube-apiserver: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-8qaxkx-vmss-control-plane-x6dmj, container kube-scheduler: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-t6ktv, container calico-kube-controllers: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/coredns-78fcd69978-m6bg4, container coredns: http2: client connection lost [1mSTEP[0m: Deleting namespace used for hosting the "create-workload-cluster" test spec INFO: Deleting namespace capz-e2e-8qaxkx [1mSTEP[0m: Checking if any resources are left over in Azure for spec "create-workload-cluster" [1mSTEP[0m: Redacting sensitive information from logs INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" ran for 18m43s on Ginkgo node 1 of 3 ... skipping 10 lines ... [1mWith 3 control-plane nodes and 2 worker nodes[0m [37m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:203[0m INFO: "With 3 control-plane nodes and 2 worker nodes" started at Tue, 12 Apr 2022 19:39:13 UTC on Ginkgo node 2 of 3 [1mSTEP[0m: Creating namespace "capz-e2e-kuu02b" for hosting the cluster Apr 12 19:39:13.059: INFO: starting to create namespace for hosting the "capz-e2e-kuu02b" test spec 2022/04/12 19:39:13 failed trying to get namespace (capz-e2e-kuu02b):namespaces "capz-e2e-kuu02b" not found INFO: Creating namespace capz-e2e-kuu02b INFO: Creating event watcher for namespace "capz-e2e-kuu02b" Apr 12 19:39:13.122: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-kuu02b-ha INFO: Creating the workload cluster with name "capz-e2e-kuu02b-ha" using the "(default)" template (Kubernetes v1.22.1, 3 control-plane machines, 2 worker machines) INFO: Getting the cluster template yaml ... skipping 67 lines ... 
Apr 12 19:49:44.528: INFO: starting to delete external LB service weby6kggt-elb Apr 12 19:49:44.625: INFO: starting to delete deployment weby6kggt Apr 12 19:49:44.705: INFO: starting to delete job curl-to-elb-jobj4noed6azpq [1mSTEP[0m: creating a Kubernetes client to the workload cluster [1mSTEP[0m: Creating development namespace Apr 12 19:49:44.836: INFO: starting to create dev deployment namespace 2022/04/12 19:49:44 failed trying to get namespace (development):namespaces "development" not found 2022/04/12 19:49:44 namespace development does not exist, creating... [1mSTEP[0m: Creating production namespace Apr 12 19:49:44.996: INFO: starting to create prod deployment namespace 2022/04/12 19:49:45 failed trying to get namespace (production):namespaces "production" not found 2022/04/12 19:49:45 namespace production does not exist, creating... [1mSTEP[0m: Creating frontendProd, backend and network-policy pod deployments Apr 12 19:49:45.128: INFO: starting to create frontend-prod deployments Apr 12 19:49:45.191: INFO: starting to create frontend-dev deployments Apr 12 19:49:45.271: INFO: starting to create backend deployments Apr 12 19:49:45.342: INFO: starting to create network-policy deployments ... skipping 11 lines ... [1mSTEP[0m: Ensuring we have outbound internet access from the network-policy pods [1mSTEP[0m: Ensuring we have connectivity from network-policy pods to frontend-prod pods [1mSTEP[0m: Ensuring we have connectivity from network-policy pods to backend pods [1mSTEP[0m: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace Apr 12 19:50:09.368: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace [1mSTEP[0m: Ensuring we no longer have ingress access from the network-policy pods to backend pods curl: (7) Failed to connect to 192.168.90.131 port 80: Connection timed out [1mSTEP[0m: Cleaning up after ourselves Apr 12 19:52:20.367: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves [1mSTEP[0m: Applying a network policy to deny egress access in development namespace Apr 12 19:52:20.613: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace [1mSTEP[0m: Ensuring we no longer have egress access from the network-policy pods to backend pods curl: (7) Failed to connect to 192.168.90.131 port 80: Connection timed out curl: (7) Failed to connect to 192.168.90.131 port 80: Connection timed out [1mSTEP[0m: Cleaning up after ourselves Apr 12 19:56:42.517: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves [1mSTEP[0m: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace Apr 12 19:56:42.754: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace [1mSTEP[0m: Ensuring we have egress access from pods with matching labels [1mSTEP[0m: Ensuring we don't have ingress access from pods without matching labels curl: (7) Failed to connect to 192.168.90.132 port 80: Connection timed out [1mSTEP[0m: Cleaning up after ourselves Apr 12 19:58:53.584: INFO: starting to cleaning up network policy 
development/backend-allow-egress-pod-label after ourselves [1mSTEP[0m: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace Apr 12 19:58:53.814: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace [1mSTEP[0m: Ensuring we have egress access from pods with matching labels [1mSTEP[0m: Ensuring we don't have ingress access from pods without matching labels curl: (7) Failed to connect to 192.168.90.129 port 80: Connection timed out curl: (7) Failed to connect to 192.168.90.132 port 80: Connection timed out [1mSTEP[0m: Cleaning up after ourselves Apr 12 20:03:15.723: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves [1mSTEP[0m: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels Apr 12 20:03:15.974: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels [1mSTEP[0m: Ensuring we have ingress access from pods with matching labels [1mSTEP[0m: Ensuring we don't have ingress access from pods without matching labels curl: (7) Failed to connect to 192.168.90.131 port 80: Connection timed out [1mSTEP[0m: Cleaning up after ourselves Apr 12 20:05:26.796: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves [1mSTEP[0m: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development Apr 12 20:05:27.045: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development [1mSTEP[0m: Ensuring we don't have ingress access from role:frontend pods in production namespace curl: (7) Failed to connect to 192.168.90.131 port 80: Connection timed out [1mSTEP[0m: Ensuring we have ingress access from role:frontend pods in development namespace [1mSTEP[0m: Dumping logs from the "capz-e2e-kuu02b-ha" workload cluster [1mSTEP[0m: Dumping workload cluster capz-e2e-kuu02b/capz-e2e-kuu02b-ha logs Apr 12 20:07:38.450: INFO: INFO: Collecting logs for node capz-e2e-kuu02b-ha-control-plane-tsvck in cluster capz-e2e-kuu02b-ha in namespace capz-e2e-kuu02b Apr 12 20:07:50.690: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-kuu02b-ha-control-plane-tsvck ... skipping 39 lines ... 
[1mSTEP[0m: Creating log watcher for controller kube-system/kube-proxy-mmvtj, container kube-proxy [1mSTEP[0m: Creating log watcher for controller kube-system/coredns-78fcd69978-gplg8, container coredns [1mSTEP[0m: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-kuu02b-ha-control-plane-tsvck, container kube-controller-manager [1mSTEP[0m: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-kuu02b-ha-control-plane-469jc, container kube-controller-manager [1mSTEP[0m: Creating log watcher for controller kube-system/kube-proxy-5rn5n, container kube-proxy [1mSTEP[0m: Creating log watcher for controller kube-system/calico-node-qnmj7, container calico-node [1mSTEP[0m: Got error while iterating over activity logs for resource group capz-e2e-kuu02b-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded [1mSTEP[0m: Fetching activity logs took 30.000642096s [1mSTEP[0m: Dumping all the Cluster API resources in the "capz-e2e-kuu02b" namespace [1mSTEP[0m: Deleting all clusters in the capz-e2e-kuu02b namespace [1mSTEP[0m: Deleting cluster capz-e2e-kuu02b-ha INFO: Waiting for the Cluster capz-e2e-kuu02b/capz-e2e-kuu02b-ha to be deleted [1mSTEP[0m: Waiting for cluster capz-e2e-kuu02b-ha to be deleted [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-proxy-vdtwg, container kube-proxy: http2: server sent GOAWAY and closed the connection; LastStreamID=113, ErrCode=NO_ERROR, debug="" [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-node-qnmj7, container calico-node: http2: server sent GOAWAY and closed the connection; LastStreamID=113, ErrCode=NO_ERROR, debug="" [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-proxy-5rn5n, container kube-proxy: http2: server sent GOAWAY and closed the connection; LastStreamID=113, ErrCode=NO_ERROR, debug="" [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-kuu02b-ha-control-plane-7j82f, container kube-apiserver: http2: server sent GOAWAY and closed the connection; LastStreamID=113, ErrCode=NO_ERROR, debug="" [1mSTEP[0m: Got error while streaming logs for pod kube-system/coredns-78fcd69978-gplg8, container coredns: http2: server sent GOAWAY and closed the connection; LastStreamID=113, ErrCode=NO_ERROR, debug="" [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-node-bxmjp, container calico-node: http2: server sent GOAWAY and closed the connection; LastStreamID=113, ErrCode=NO_ERROR, debug="" [1mSTEP[0m: Got error while streaming logs for pod kube-system/coredns-78fcd69978-hdxr8, container coredns: http2: server sent GOAWAY and closed the connection; LastStreamID=113, ErrCode=NO_ERROR, debug="" [1mSTEP[0m: Deleting namespace used for hosting the "create-workload-cluster" test spec INFO: Deleting namespace capz-e2e-kuu02b [1mSTEP[0m: Checking if any resources are left over in Azure for spec "create-workload-cluster" [1mSTEP[0m: Redacting sensitive information from logs INFO: "With 3 control-plane nodes and 2 worker nodes" ran for 46m4s on Ginkgo node 2 of 3 ... skipping 8 lines ... 
[1mCreates a public management cluster in the same vnet[0m [37m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:141[0m INFO: "Creates a public management cluster in the same vnet" started at Tue, 12 Apr 2022 19:39:13 UTC on Ginkgo node 3 of 3 [1mSTEP[0m: Creating namespace "capz-e2e-j95k49" for hosting the cluster Apr 12 19:39:13.059: INFO: starting to create namespace for hosting the "capz-e2e-j95k49" test spec 2022/04/12 19:39:13 failed trying to get namespace (capz-e2e-j95k49):namespaces "capz-e2e-j95k49" not found INFO: Creating namespace capz-e2e-j95k49 INFO: Creating event watcher for namespace "capz-e2e-j95k49" Apr 12 19:39:13.142: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-j95k49-public-custom-vnet [1mSTEP[0m: creating Azure clients with the workload cluster's subscription [1mSTEP[0m: creating a resource group ... skipping 100 lines ... [1mSTEP[0m: Creating log watcher for controller kube-system/coredns-78fcd69978-zs2x7, container coredns [1mSTEP[0m: Creating log watcher for controller kube-system/etcd-capz-e2e-j95k49-public-custom-vnet-control-plane-nsq88, container etcd [1mSTEP[0m: Creating log watcher for controller kube-system/calico-node-xb88s, container calico-node [1mSTEP[0m: Creating log watcher for controller kube-system/kube-proxy-bzpt9, container kube-proxy [1mSTEP[0m: Creating log watcher for controller kube-system/kube-proxy-rsw89, container kube-proxy [1mSTEP[0m: Dumping workload cluster capz-e2e-j95k49/capz-e2e-j95k49-public-custom-vnet Azure activity log [1mSTEP[0m: Got error while iterating over activity logs for resource group capz-e2e-j95k49-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded [1mSTEP[0m: Fetching activity logs took 30.000798719s [1mSTEP[0m: Dumping all the Cluster API resources in the "capz-e2e-j95k49" namespace [1mSTEP[0m: Deleting all clusters in the capz-e2e-j95k49 namespace [1mSTEP[0m: Deleting cluster capz-e2e-j95k49-public-custom-vnet INFO: Waiting for the Cluster capz-e2e-j95k49/capz-e2e-j95k49-public-custom-vnet to be deleted [1mSTEP[0m: Waiting for cluster capz-e2e-j95k49-public-custom-vnet to be deleted W0412 20:26:42.165520 24234 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding I0412 20:27:13.747789 24234 trace.go:205] Trace[758784297]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (12-Apr-2022 20:26:43.746) (total time: 30001ms): Trace[758784297]: [30.001252375s] [30.001252375s] END E0412 20:27:13.747912 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp 20.69.119.31:6443: i/o timeout I0412 20:27:45.793513 24234 trace.go:205] Trace[184497856]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (12-Apr-2022 20:27:15.792) (total time: 30001ms): Trace[184497856]: [30.001261134s] [30.001261134s] END E0412 20:27:45.793579 24234 reflector.go:138] 
pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp 20.69.119.31:6443: i/o timeout I0412 20:28:20.567605 24234 trace.go:205] Trace[911592206]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (12-Apr-2022 20:27:50.565) (total time: 30002ms): Trace[911592206]: [30.002295413s] [30.002295413s] END E0412 20:28:20.567683 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp 20.69.119.31:6443: i/o timeout I0412 20:28:59.301282 24234 trace.go:205] Trace[2105401631]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (12-Apr-2022 20:28:29.299) (total time: 30001ms): Trace[2105401631]: [30.001317973s] [30.001317973s] END E0412 20:28:59.301348 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp 20.69.119.31:6443: i/o timeout I0412 20:29:45.663076 24234 trace.go:205] Trace[462677896]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (12-Apr-2022 20:29:15.660) (total time: 30002ms): Trace[462677896]: [30.002843449s] [30.002843449s] END E0412 20:29:45.663144 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp 20.69.119.31:6443: i/o timeout I0412 20:30:45.395724 24234 trace.go:205] Trace[180741923]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (12-Apr-2022 20:30:15.394) (total time: 30001ms): Trace[180741923]: [30.001233663s] [30.001233663s] END E0412 20:30:45.395911 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp 20.69.119.31:6443: i/o timeout I0412 20:31:54.408443 24234 trace.go:205] Trace[1560766274]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (12-Apr-2022 20:31:24.406) (total time: 30001ms): Trace[1560766274]: [30.001478044s] [30.001478044s] END E0412 20:31:54.408523 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp 20.69.119.31:6443: i/o timeout [1mSTEP[0m: Deleting namespace used for hosting the "create-workload-cluster" test spec INFO: Deleting namespace capz-e2e-j95k49 [1mSTEP[0m: Running additional cleanup for the 
"create-workload-cluster" test spec Apr 12 20:32:12.984: INFO: deleting an existing virtual network "custom-vnet" Apr 12 20:32:23.533: INFO: deleting an existing route table "node-routetable" Apr 12 20:32:25.893: INFO: deleting an existing network security group "node-nsg" E0412 20:32:29.871710 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host Apr 12 20:32:36.284: INFO: deleting an existing network security group "control-plane-nsg" Apr 12 20:32:46.658: INFO: verifying the existing resource group "capz-e2e-j95k49-public-custom-vnet" is empty Apr 12 20:32:46.706: INFO: deleting the existing resource group "capz-e2e-j95k49-public-custom-vnet" [1mSTEP[0m: Checking if any resources are left over in Azure for spec "create-workload-cluster" [1mSTEP[0m: Redacting sensitive information from logs E0412 20:33:21.598054 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0412 20:34:12.460738 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host INFO: "Creates a public management cluster in the same vnet" ran for 55m54s on Ginkgo node 3 of 3 [32m• [SLOW TEST:3353.962 seconds][0m Workload cluster creation [90m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43[0m ... skipping 6 lines ... [1mwith a single control plane node and 1 node[0m [37m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:377[0m INFO: "with a single control plane node and 1 node" started at Tue, 12 Apr 2022 20:15:34 UTC on Ginkgo node 1 of 3 [1mSTEP[0m: Creating namespace "capz-e2e-etnhfp" for hosting the cluster Apr 12 20:15:34.974: INFO: starting to create namespace for hosting the "capz-e2e-etnhfp" test spec 2022/04/12 20:15:34 failed trying to get namespace (capz-e2e-etnhfp):namespaces "capz-e2e-etnhfp" not found INFO: Creating namespace capz-e2e-etnhfp INFO: Creating event watcher for namespace "capz-e2e-etnhfp" Apr 12 20:15:35.013: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-etnhfp-gpu INFO: Creating the workload cluster with name "capz-e2e-etnhfp-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml ... skipping 58 lines ... 
[1mSTEP[0m: Fetching activity logs took 515.708403ms [1mSTEP[0m: Dumping all the Cluster API resources in the "capz-e2e-etnhfp" namespace [1mSTEP[0m: Deleting all clusters in the capz-e2e-etnhfp namespace [1mSTEP[0m: Deleting cluster capz-e2e-etnhfp-gpu INFO: Waiting for the Cluster capz-e2e-etnhfp/capz-e2e-etnhfp-gpu to be deleted [1mSTEP[0m: Waiting for cluster capz-e2e-etnhfp-gpu to be deleted [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-etnhfp-gpu-control-plane-wx6l8, container kube-apiserver: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/coredns-78fcd69978-52fph, container coredns: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-proxy-lb9lf, container kube-proxy: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/coredns-78fcd69978-gjt5h, container coredns: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-etnhfp-gpu-control-plane-wx6l8, container kube-controller-manager: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-node-bcmpm, container calico-node: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/etcd-capz-e2e-etnhfp-gpu-control-plane-wx6l8, container etcd: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-etnhfp-gpu-control-plane-wx6l8, container kube-scheduler: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-fxwlt, container calico-kube-controllers: http2: client connection lost [1mSTEP[0m: Deleting namespace used for hosting the "create-workload-cluster" test spec INFO: Deleting namespace capz-e2e-etnhfp [1mSTEP[0m: Checking if any resources are left over in Azure for spec "create-workload-cluster" [1mSTEP[0m: Redacting sensitive information from logs INFO: "with a single control plane node and 1 node" ran for 21m23s on Ginkgo node 1 of 3 ... skipping 10 lines ... [1mwith a 1 control plane nodes and 2 worker nodes[0m [37m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:419[0m INFO: "with a 1 control plane nodes and 2 worker nodes" started at Tue, 12 Apr 2022 20:25:16 UTC on Ginkgo node 2 of 3 [1mSTEP[0m: Creating namespace "capz-e2e-lkcps8" for hosting the cluster Apr 12 20:25:16.920: INFO: starting to create namespace for hosting the "capz-e2e-lkcps8" test spec 2022/04/12 20:25:16 failed trying to get namespace (capz-e2e-lkcps8):namespaces "capz-e2e-lkcps8" not found INFO: Creating namespace capz-e2e-lkcps8 INFO: Creating event watcher for namespace "capz-e2e-lkcps8" Apr 12 20:25:16.965: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-lkcps8-oot INFO: Creating the workload cluster with name "capz-e2e-lkcps8-oot" using the "external-cloud-provider" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines) INFO: Getting the cluster template yaml ... skipping 120 lines ... 
[1mwith a single control plane node and 1 node[0m [37m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454[0m INFO: "with a single control plane node and 1 node" started at Tue, 12 Apr 2022 20:35:07 UTC on Ginkgo node 3 of 3 [1mSTEP[0m: Creating namespace "capz-e2e-s6nmj2" for hosting the cluster Apr 12 20:35:07.025: INFO: starting to create namespace for hosting the "capz-e2e-s6nmj2" test spec 2022/04/12 20:35:07 failed trying to get namespace (capz-e2e-s6nmj2):namespaces "capz-e2e-s6nmj2" not found INFO: Creating namespace capz-e2e-s6nmj2 INFO: Creating event watcher for namespace "capz-e2e-s6nmj2" Apr 12 20:35:07.078: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-s6nmj2-aks E0412 20:35:07.417888 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host INFO: Creating the workload cluster with name "capz-e2e-s6nmj2-aks" using the "aks-multi-tenancy" template (Kubernetes v1.22.6, 1 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster capz-e2e-s6nmj2-aks --infrastructure (default) --kubernetes-version v1.22.6 --control-plane-machine-count 1 --worker-machine-count 1 --flavor aks-multi-tenancy INFO: Applying the cluster template yaml to the cluster cluster.cluster.x-k8s.io/capz-e2e-s6nmj2-aks created azuremanagedcontrolplane.infrastructure.cluster.x-k8s.io/capz-e2e-s6nmj2-aks created ... skipping 3 lines ... 
machinepool.cluster.x-k8s.io/agentpool1 created azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created INFO: Waiting for the cluster infrastructure to be provisioned [1mSTEP[0m: Waiting for cluster to enter the provisioned phase E0412 20:35:59.646227 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0412 20:36:49.077771 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0412 20:37:34.892552 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0412 20:38:10.294466 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0412 20:38:41.215671 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0412 20:39:12.970447 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host INFO: Waiting for control plane to be initialized Apr 12 20:39:39.481: INFO: Waiting for the first control plane machine managed by capz-e2e-s6nmj2/capz-e2e-s6nmj2-aks to be provisioned [1mSTEP[0m: Waiting for atleast one control plane node to exist E0412 20:39:46.168354 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup 
capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0412 20:40:41.610254 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0412 20:41:27.238950 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0412 20:42:24.710127 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0412 20:43:04.427298 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0412 20:43:35.415432 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host INFO: Waiting for control plane to be ready Apr 12 20:43:39.811: INFO: Waiting for the first control plane machine managed by capz-e2e-s6nmj2/capz-e2e-s6nmj2-aks to be provisioned [1mSTEP[0m: Waiting for all control plane nodes to exist INFO: Waiting for the machine deployments to be provisioned INFO: Waiting for the machine pools to be provisioned [1mSTEP[0m: Waiting for the machine pool workload nodes to exist ... skipping 10 lines ... 
[1mSTEP[0m: time sync OK for host aks-agentpool1-14821083-vmss000000 [1mSTEP[0m: time sync OK for host aks-agentpool1-14821083-vmss000000 [1mSTEP[0m: Dumping logs from the "capz-e2e-s6nmj2-aks" workload cluster [1mSTEP[0m: Dumping workload cluster capz-e2e-s6nmj2/capz-e2e-s6nmj2-aks logs Apr 12 20:43:47.097: INFO: INFO: Collecting logs for node aks-agentpool1-14821083-vmss000000 in cluster capz-e2e-s6nmj2-aks in namespace capz-e2e-s6nmj2 E0412 20:44:22.454424 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0412 20:45:05.197812 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0412 20:45:35.753555 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host Apr 12 20:45:56.460: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0 Failed to get logs for machine pool agentpool0, cluster capz-e2e-s6nmj2/capz-e2e-s6nmj2-aks: [dialing public load balancer at capz-e2e-s6nmj2-aks-e9fe8e51.hcp.westus2.azmk8s.io: dial tcp 52.156.149.48:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. 
Parent resource '0' not found."] Apr 12 20:45:57.158: INFO: INFO: Collecting logs for node aks-agentpool1-14821083-vmss000000 in cluster capz-e2e-s6nmj2-aks in namespace capz-e2e-s6nmj2 E0412 20:46:10.429473 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0412 20:46:51.571082 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0412 20:47:38.380062 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host Apr 12 20:48:07.532: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0 Failed to get logs for machine pool agentpool1, cluster capz-e2e-s6nmj2/capz-e2e-s6nmj2-aks: [dialing public load balancer at capz-e2e-s6nmj2-aks-e9fe8e51.hcp.westus2.azmk8s.io: dial tcp 52.156.149.48:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."] [1mSTEP[0m: Dumping workload cluster capz-e2e-s6nmj2/capz-e2e-s6nmj2-aks kube-system pod logs [1mSTEP[0m: Fetching kube-system pod logs took 636.629571ms [1mSTEP[0m: Dumping workload cluster capz-e2e-s6nmj2/capz-e2e-s6nmj2-aks Azure activity log [1mSTEP[0m: Creating log watcher for controller kube-system/csi-azuredisk-node-xkhmv, container liveness-probe [1mSTEP[0m: Creating log watcher for controller kube-system/csi-azuredisk-node-vzhdb, container liveness-probe [1mSTEP[0m: Creating log watcher for controller kube-system/csi-azuredisk-node-xkhmv, container node-driver-registrar ... skipping 20 lines ... 
STEP: Fetching activity logs took 576.135741ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-s6nmj2" namespace
STEP: Deleting all clusters in the capz-e2e-s6nmj2 namespace
STEP: Deleting cluster capz-e2e-s6nmj2-aks
INFO: Waiting for the Cluster capz-e2e-s6nmj2/capz-e2e-s6nmj2-aks to be deleted
STEP: Waiting for cluster capz-e2e-s6nmj2-aks to be deleted
E0412 20:48:30.664426 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0412 20:49:26.751279 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0412 20:50:04.130063 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0412 20:50:45.473746 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0412 20:51:43.478677 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0412 20:52:19.798139 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0412 20:52:59.854145 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0412 20:53:58.665772 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0412 20:54:54.247932 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0412 20:55:32.432819 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0412 20:56:11.149761 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-s6nmj2
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E0412 20:56:43.856409 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0412 20:57:40.081694 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0412 20:58:21.650297 24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "with a single control plane node and 1 node" ran for 23m16s on Ginkgo node 3 of 3
• [SLOW TEST:1395.643 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Tue, 12 Apr 2022 20:36:57 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-giqinb" for hosting the cluster
Apr 12 20:36:57.915: INFO: starting to create namespace for hosting the "capz-e2e-giqinb" test spec
2022/04/12 20:36:57 failed trying to get namespace (capz-e2e-giqinb):namespaces "capz-e2e-giqinb" not found
INFO: Creating namespace capz-e2e-giqinb
INFO: Creating event watcher for namespace "capz-e2e-giqinb"
Apr 12 20:36:57.951: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-giqinb-win-ha
INFO: Creating the workload cluster with name "capz-e2e-giqinb-win-ha" using the "windows" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 151 lines ...
STEP: Fetching activity logs took 928.164974ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-giqinb" namespace
STEP: Deleting all clusters in the capz-e2e-giqinb namespace
STEP: Deleting cluster capz-e2e-giqinb-win-ha
INFO: Waiting for the Cluster capz-e2e-giqinb/capz-e2e-giqinb-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-giqinb-win-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-giqinb-win-ha-control-plane-pnkc6, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-jqgh8, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-giqinb-win-ha-control-plane-l5kdv, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-giqinb-win-ha-control-plane-pnkc6, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-m4989, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-giqinb-win-ha-control-plane-pnkc6, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-giqinb-win-ha-control-plane-l5kdv, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-giqinb-win-ha-control-plane-l5kdv, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-giqinb-win-ha-control-plane-pnkc6, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-x6kh8, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-59mn2, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-giqinb-win-ha-control-plane-l5kdv, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-4cx22, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-rmv9b, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-mbbkp, container kube-flannel: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-giqinb
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 24m0s on Ginkgo node 1 of 3
... skipping 10 lines ...
with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Tue, 12 Apr 2022 20:47:38 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-p1kjcf" for hosting the cluster
Apr 12 20:47:38.554: INFO: starting to create namespace for hosting the "capz-e2e-p1kjcf" test spec
2022/04/12 20:47:38 failed trying to get namespace (capz-e2e-p1kjcf):namespaces "capz-e2e-p1kjcf" not found
INFO: Creating namespace capz-e2e-p1kjcf
INFO: Creating event watcher for namespace "capz-e2e-p1kjcf"
Apr 12 20:47:38.606: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-p1kjcf-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-p1kjcf-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 48 lines ...
STEP: Fetching activity logs took 1.081303607s
STEP: Dumping all the Cluster API resources in the "capz-e2e-p1kjcf" namespace
STEP: Deleting all clusters in the capz-e2e-p1kjcf namespace
STEP: Deleting cluster capz-e2e-p1kjcf-win-vmss
INFO: Waiting for the Cluster capz-e2e-p1kjcf/capz-e2e-p1kjcf-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-p1kjcf-win-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-xlx2v, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-zwlxh, container kube-flannel: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-p1kjcf
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 49m46s on Ginkgo node 2 of 3
... skipping 55 lines ...
STEP: Tearing down the management cluster

Summarizing 1 Failure:

[Fail] Workload cluster creation Creating a Windows enabled VMSS cluster [It] with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.8-0.20220215165403-0234afe87ffe/framework/machinepool_helpers.go:85

Ran 9 of 22 Specs in 7209.647 seconds
FAIL! -- 8 Passed | 1 Failed | 0 Pending | 13 Skipped

Ginkgo ran 1 suite in 2h1m38.930926406s
Test Suite Failed
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...
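Note on the failed assertion: the failure points at machinepool_helpers.go:85 in the upstream cluster-api test framework, where the suite polls the Windows AzureMachinePool for the expected number of ready worker nodes and times out after 900s with 0 nodes ready. The framework's actual helper is not shown in this log; the Go sketch below is only an illustration of that polling pattern, and the function name, object key, and intervals are assumptions, not the framework's code.

    package e2e_sketch

    import (
        "context"
        "time"

        . "github.com/onsi/gomega"
        expv1 "sigs.k8s.io/cluster-api/exp/api/v1alpha4"
        "sigs.k8s.io/controller-runtime/pkg/client"
    )

    // waitForMachinePoolReadyNodes polls a MachinePool until its ready-replica
    // count matches the expected node count. This is a hypothetical stand-in
    // for the kind of check that timed out at machinepool_helpers.go:85.
    func waitForMachinePoolReadyNodes(ctx context.Context, c client.Client, key client.ObjectKey, want int32, timeout, poll time.Duration) {
        Eventually(func() int32 {
            mp := &expv1.MachinePool{}
            if err := c.Get(ctx, key, mp); err != nil {
                // Treat transient Get errors as "not ready yet" and keep polling.
                return 0
            }
            return mp.Status.ReadyReplicas
        }, timeout, poll).Should(Equal(want), "MachinePool %s never reached %d ready replicas", key.Name, want)
    }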