Result   | FAILURE
Tests    | 1 failed / 7 succeeded
Started  |
Elapsed  | 1h26m
Revision | release-0.5
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\sprivate\scluster\sCreates\sa\spublic\smanagement\scluster\sin\sthe\ssame\svnet$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:141
Expected success, but got an error:
    <*errors.withStack | 0xc0007a45e8>: {
        error: <*exec.ExitError | 0xc0007fc000>{
            ProcessState: {
                pid: 28053,
                status: 256,
                rusage: {
                    Utime: {Sec: 0, Usec: 444719},
                    Stime: {Sec: 0, Usec: 177002},
                    Maxrss: 104752, Ixrss: 0, Idrss: 0, Isrss: 0,
                    Minflt: 13163, Majflt: 0, Nswap: 0,
                    Inblock: 0, Oublock: 25392,
                    Msgsnd: 0, Msgrcv: 0, Nsignals: 0,
                    Nvcsw: 4249, Nivcsw: 158,
                },
            },
            Stderr: nil,
        },
        stack: [0x1819e9e, 0x181a565, 0x19839b7, 0x1b3c528, 0x1c9d968, 0x1cbebcc, 0x813b23, 0x82154a, 0x1cbf2db, 0x7fc603, 0x7fc21c, 0x7fb547, 0x8024ef, 0x801b92, 0x811491, 0x810fa7, 0x810797, 0x812ea6, 0x820bd8, 0x820916, 0x1cae6ba, 0x529ce5, 0x474781],
    }
    exit status 1
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.8-0.20220215165403-0234afe87ffe/framework/clusterctl/clusterctl_helpers.go:272
from junit.e2e_suite.1.xml
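For context on the failure shape: the wrapped *exec.ExitError (wait status 256, i.e. exit code 1) comes from the clusterctl helper shelling out while applying the rendered template. The following is a minimal, hypothetical sketch of that assertion pattern, assuming Ginkgo/Gomega as used by the cluster-api test framework; the command, file, and test names are illustrative, not the real helper at clusterctl_helpers.go:272.

```go
// Hypothetical reduction of the failing assertion pattern (not CAPZ source).
package e2e_test

import (
	"os/exec"
	"testing"

	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
)

func TestSketch(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "apply-template sketch")
}

var _ = It("applies the generated cluster manifests", func() {
	// The real helper streams the rendered template into kubectl. When the
	// capz webhooks reject the objects, kubectl exits 1 (wait status 256) and
	// Gomega reports it as "Expected success, but got an error: <*exec.ExitError ...>".
	cmd := exec.Command("kubectl", "apply", "-f", "cluster-template.yaml") // illustrative input
	out, err := cmd.CombinedOutput()
	Expect(err).NotTo(HaveOccurred(), string(out))
})
```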
INFO: "Creates a public management cluster in the same vnet" started at Tue, 17 May 2022 19:51:40 UTC on Ginkgo node 1 of 3 �[1mSTEP�[0m: Creating namespace "capz-e2e-o71z8t" for hosting the cluster May 17 19:51:40.492: INFO: starting to create namespace for hosting the "capz-e2e-o71z8t" test spec 2022/05/17 19:51:40 failed trying to get namespace (capz-e2e-o71z8t):namespaces "capz-e2e-o71z8t" not found INFO: Creating namespace capz-e2e-o71z8t INFO: Creating event watcher for namespace "capz-e2e-o71z8t" May 17 19:51:40.533: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-o71z8t-public-custom-vnet �[1mSTEP�[0m: creating Azure clients with the workload cluster's subscription �[1mSTEP�[0m: creating a resource group �[1mSTEP�[0m: creating a network security group �[1mSTEP�[0m: creating a node security group �[1mSTEP�[0m: creating a node routetable �[1mSTEP�[0m: creating a virtual network INFO: Creating the workload cluster with name "capz-e2e-o71z8t-public-custom-vnet" using the "custom-vnet" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster capz-e2e-o71z8t-public-custom-vnet --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 1 --worker-machine-count 1 --flavor custom-vnet INFO: Applying the cluster template yaml to the cluster cluster.cluster.x-k8s.io/capz-e2e-o71z8t-public-custom-vnet created azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-o71z8t-public-custom-vnet created kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-o71z8t-public-custom-vnet-control-plane created azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-o71z8t-public-custom-vnet-control-plane created machinedeployment.cluster.x-k8s.io/capz-e2e-o71z8t-public-custom-vnet-md-0 created azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-o71z8t-public-custom-vnet-md-0 created kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capz-e2e-o71z8t-public-custom-vnet-md-0 created azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created machinehealthcheck.cluster.x-k8s.io/capz-e2e-o71z8t-public-custom-vnet-mhc-0 created clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-o71z8t-public-custom-vnet-calico created configmap/cni-capz-e2e-o71z8t-public-custom-vnet-calico created INFO: Waiting for the cluster infrastructure to be provisioned �[1mSTEP�[0m: Waiting for cluster to enter the provisioned phase INFO: Waiting for control plane to be initialized INFO: Waiting for the first control plane machine managed by capz-e2e-o71z8t/capz-e2e-o71z8t-public-custom-vnet-control-plane to be provisioned �[1mSTEP�[0m: Waiting for one control plane node to exist INFO: Waiting for control plane to be ready INFO: Waiting for control plane capz-e2e-o71z8t/capz-e2e-o71z8t-public-custom-vnet-control-plane to be ready (implies underlying nodes to be ready as well) �[1mSTEP�[0m: Waiting for the control plane to be ready INFO: Waiting for the machine deployments to be provisioned �[1mSTEP�[0m: Waiting for the workload nodes to exist INFO: Waiting for the machine pools to be provisioned �[1mSTEP�[0m: checking that time synchronization is healthy on capz-e2e-o71z8t-public-custom-vnet-control-plane-l2r5b �[1mSTEP�[0m: checking that time synchronization is healthy on capz-e2e-o71z8t-public-custom-vnet-md-0-2fgm2 �[1mSTEP�[0m: creating a Kubernetes client to the workload cluster �[1mSTEP�[0m: Creating a namespace for 
hosting the azure-private-cluster test spec May 17 19:57:49.196: INFO: starting to create namespace for hosting the azure-private-cluster test spec INFO: Creating namespace capz-e2e-o71z8t INFO: Creating event watcher for namespace "capz-e2e-o71z8t" �[1mSTEP�[0m: Initializing the workload cluster INFO: clusterctl init --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure azure INFO: Waiting for provider controllers to be running �[1mSTEP�[0m: Waiting for deployment capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager to be available INFO: Creating log watcher for controller capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-75467796c5-4nf28, container manager �[1mSTEP�[0m: Waiting for deployment capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager to be available INFO: Creating log watcher for controller capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager, pod capi-kubeadm-control-plane-controller-manager-688b75d88d-qjfd4, container manager �[1mSTEP�[0m: Waiting for deployment capi-system/capi-controller-manager to be available INFO: Creating log watcher for controller capi-system/capi-controller-manager, pod capi-controller-manager-58757dd9b4-k2dp4, container manager �[1mSTEP�[0m: Waiting for deployment capz-system/capz-controller-manager to be available INFO: Creating log watcher for controller capz-system/capz-controller-manager, pod capz-controller-manager-85d9b6b564-pzghk, container manager �[1mSTEP�[0m: Ensure public API server is stable before creating private cluster �[1mSTEP�[0m: Creating a private workload cluster INFO: Creating the workload cluster with name "capz-e2e-pulmqz-private" using the "private" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster capz-e2e-pulmqz-private --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 3 --worker-machine-count 1 --flavor private INFO: Applying the cluster template yaml to the cluster Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "default.azurecluster.infrastructure.cluster.x-k8s.io": failed to call webhook: the server could not find the requested resource Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "default.azuremachinetemplate.infrastructure.cluster.x-k8s.io": failed to call webhook: the server could not find the requested resource Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "default.azuremachinetemplate.infrastructure.cluster.x-k8s.io": failed to call webhook: the server could not find the requested resource �[1mSTEP�[0m: Dumping logs from the "capz-e2e-o71z8t-public-custom-vnet" workload cluster �[1mSTEP�[0m: Dumping workload cluster capz-e2e-o71z8t/capz-e2e-o71z8t-public-custom-vnet logs May 17 19:59:44.383: INFO: INFO: Collecting logs for node capz-e2e-o71z8t-public-custom-vnet-control-plane-l2r5b in cluster capz-e2e-o71z8t-public-custom-vnet in namespace capz-e2e-o71z8t May 17 19:59:48.725: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-o71z8t-public-custom-vnet-control-plane-l2r5b May 17 19:59:49.612: INFO: INFO: Collecting logs for node capz-e2e-o71z8t-public-custom-vnet-md-0-2fgm2 in cluster capz-e2e-o71z8t-public-custom-vnet 
in namespace capz-e2e-o71z8t May 17 19:59:56.489: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-o71z8t-public-custom-vnet-md-0-2fgm2 �[1mSTEP�[0m: Dumping workload cluster capz-e2e-o71z8t/capz-e2e-o71z8t-public-custom-vnet kube-system pod logs �[1mSTEP�[0m: Fetching kube-system pod logs took 215.318348ms �[1mSTEP�[0m: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-wfr4r, container calico-kube-controllers �[1mSTEP�[0m: Creating log watcher for controller kube-system/coredns-78fcd69978-vpsrh, container coredns �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-o71z8t-public-custom-vnet-control-plane-l2r5b, container kube-apiserver �[1mSTEP�[0m: Creating log watcher for controller kube-system/calico-node-jchh4, container calico-node �[1mSTEP�[0m: Dumping workload cluster capz-e2e-o71z8t/capz-e2e-o71z8t-public-custom-vnet Azure activity log �[1mSTEP�[0m: Creating log watcher for controller kube-system/etcd-capz-e2e-o71z8t-public-custom-vnet-control-plane-l2r5b, container etcd �[1mSTEP�[0m: Creating log watcher for controller kube-system/calico-node-qrv4l, container calico-node �[1mSTEP�[0m: Creating log watcher for controller kube-system/coredns-78fcd69978-rxgj4, container coredns �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-o71z8t-public-custom-vnet-control-plane-l2r5b, container kube-scheduler �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-proxy-bw6zn, container kube-proxy �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-o71z8t-public-custom-vnet-control-plane-l2r5b, container kube-controller-manager �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-proxy-dnh4r, container kube-proxy �[1mSTEP�[0m: Fetching activity logs took 598.048882ms �[1mSTEP�[0m: Dumping all the Cluster API resources in the "capz-e2e-o71z8t" namespace �[1mSTEP�[0m: Deleting all clusters in the capz-e2e-o71z8t namespace �[1mSTEP�[0m: Deleting cluster capz-e2e-o71z8t-public-custom-vnet INFO: Waiting for the Cluster capz-e2e-o71z8t/capz-e2e-o71z8t-public-custom-vnet to be deleted �[1mSTEP�[0m: Waiting for cluster capz-e2e-o71z8t-public-custom-vnet to be deleted W0517 20:04:58.058319 24092 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding I0517 20:05:29.334554 24092 trace.go:205] Trace[1233557425]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (17-May-2022 20:04:59.333) (total time: 30000ms): Trace[1233557425]: [30.000676677s] [30.000676677s] END E0517 20:05:29.334622 24092 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o71z8t/events?resourceVersion=2480": dial tcp 52.177.90.88:6443: i/o timeout I0517 20:06:01.382313 24092 trace.go:205] Trace[650936861]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (17-May-2022 20:05:31.381) (total time: 30001ms): Trace[650936861]: [30.001120452s] [30.001120452s] END E0517 20:06:01.382381 24092 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch 
*v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o71z8t/events?resourceVersion=2480": dial tcp 52.177.90.88:6443: i/o timeout I0517 20:06:35.024316 24092 trace.go:205] Trace[2088150757]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (17-May-2022 20:06:05.023) (total time: 30001ms): Trace[2088150757]: [30.001024536s] [30.001024536s] END E0517 20:06:35.024396 24092 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o71z8t/events?resourceVersion=2480": dial tcp 52.177.90.88:6443: i/o timeout I0517 20:07:11.682766 24092 trace.go:205] Trace[1951331134]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (17-May-2022 20:06:41.681) (total time: 30001ms): Trace[1951331134]: [30.001303813s] [30.001303813s] END E0517 20:07:11.682832 24092 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o71z8t/events?resourceVersion=2480": dial tcp 52.177.90.88:6443: i/o timeout I0517 20:08:05.015667 24092 trace.go:205] Trace[1476412921]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (17-May-2022 20:07:35.014) (total time: 30001ms): Trace[1476412921]: [30.00156993s] [30.00156993s] END E0517 20:08:05.015777 24092 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o71z8t/events?resourceVersion=2480": dial tcp 52.177.90.88:6443: i/o timeout I0517 20:09:15.787553 24092 trace.go:205] Trace[847083495]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (17-May-2022 20:08:45.786) (total time: 30000ms): Trace[847083495]: [30.000874176s] [30.000874176s] END E0517 20:09:15.787624 24092 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o71z8t/events?resourceVersion=2480": dial tcp 52.177.90.88:6443: i/o timeout E0517 20:10:00.262078 24092 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o71z8t/events?resourceVersion=2480": dial tcp: lookup capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host �[1mSTEP�[0m: Deleting namespace used for hosting the "create-workload-cluster" test spec INFO: Deleting namespace capz-e2e-o71z8t �[1mSTEP�[0m: Running additional cleanup for the "create-workload-cluster" test spec May 17 20:10:18.346: INFO: deleting an existing virtual network "custom-vnet" May 17 20:10:31.801: INFO: deleting an existing route table "node-routetable" May 17 20:10:34.343: INFO: deleting an existing network security 
group "node-nsg" E0517 20:10:36.039688 24092 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o71z8t/events?resourceVersion=2480": dial tcp: lookup capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host May 17 20:10:44.890: INFO: deleting an existing network security group "control-plane-nsg" May 17 20:10:55.215: INFO: verifying the existing resource group "capz-e2e-o71z8t-public-custom-vnet" is empty May 17 20:10:55.304: INFO: deleting the existing resource group "capz-e2e-o71z8t-public-custom-vnet" E0517 20:11:14.936200 24092 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o71z8t/events?resourceVersion=2480": dial tcp: lookup capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0517 20:11:45.219613 24092 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o71z8t/events?resourceVersion=2480": dial tcp: lookup capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host �[1mSTEP�[0m: Checking if any resources are left over in Azure for spec "create-workload-cluster" �[1mSTEP�[0m: Redacting sensitive information from logs INFO: "Creates a public management cluster in the same vnet" ran for 20m45s on Ginkgo node 1 of 3
capz-e2e Workload cluster creation Creating a VMSS cluster with a single control plane node and an AzureMachinePool with 2 nodes
capz-e2e Workload cluster creation Creating a Windows Enabled cluster With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
capz-e2e Workload cluster creation Creating a Windows enabled VMSS cluster with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
capz-e2e Workload cluster creation Creating a cluster that uses the external cloud provider with a 1 control plane nodes and 2 worker nodes
capz-e2e Workload cluster creation Creating a ipv6 control-plane cluster With ipv6 worker node
capz-e2e Workload cluster creation Creating an AKS cluster with a single control plane node and 1 node
capz-e2e Workload cluster creation With 3 control-plane nodes and 2 worker nodes
capz-e2e Conformance Tests conformance-tests
capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec in a HA cluster Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd
capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec in a HA cluster using scale in rollout Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd
capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec in a single control plane cluster Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd
capz-e2e Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capz-e2e Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e Running the Cluster API E2E tests Should adopt up-to-date control plane Machines without modification Should adopt up-to-date control plane Machines without modification
capz-e2e Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e Workload cluster creation Creating a GPU-enabled cluster with a single control plane node and 1 node
... skipping 431 lines ...

With ipv6 worker node
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:269

INFO: "With ipv6 worker node" started at Tue, 17 May 2022 19:51:40 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-bn6b6d" for hosting the cluster
May 17 19:51:40.537: INFO: starting to create namespace for hosting the "capz-e2e-bn6b6d" test spec
2022/05/17 19:51:40 failed trying to get namespace (capz-e2e-bn6b6d):namespaces "capz-e2e-bn6b6d" not found
INFO: Creating namespace capz-e2e-bn6b6d
INFO: Creating event watcher for namespace "capz-e2e-bn6b6d"
May 17 19:51:40.590: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-bn6b6d-ipv6
INFO: Creating the workload cluster with name "capz-e2e-bn6b6d-ipv6" using the "ipv6" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
STEP: Fetching activity logs took 546.359858ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-bn6b6d" namespace
STEP: Deleting all clusters in the capz-e2e-bn6b6d namespace
STEP: Deleting cluster capz-e2e-bn6b6d-ipv6
INFO: Waiting for the Cluster capz-e2e-bn6b6d/capz-e2e-bn6b6d-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-bn6b6d-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-trstl, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-lrfgc, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-bn6b6d-ipv6-control-plane-626n6, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-bn6b6d-ipv6-control-plane-626n6, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-bn6b6d-ipv6-control-plane-xxdpg, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-bn6b6d-ipv6-control-plane-kdgfw, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-vrh5h, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-bn6b6d-ipv6-control-plane-kdgfw, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-7fnbx, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-bn6b6d-ipv6-control-plane-xxdpg, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-bn6b6d-ipv6-control-plane-xxdpg, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-bx24b, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-m4sw9, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-bn6b6d-ipv6-control-plane-626n6, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-ffzs9, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-kxnmb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-bn6b6d-ipv6-control-plane-xxdpg, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-bn6b6d-ipv6-control-plane-kdgfw, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-6thk6, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-kg2g2, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-rpnfw, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-bn6b6d-ipv6-control-plane-626n6, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-bn6b6d-ipv6-control-plane-kdgfw, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-bn6b6d
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 17m39s on Ginkgo node 3 of 3
... skipping 10 lines ...

• Failure [1245.204 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating a private cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:140
    Creates a public management cluster in the same vnet [It]
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:141

    Expected success, but got an error:
        <*errors.withStack | 0xc0007a45e8>: {
            error: <*exec.ExitError | 0xc0007fc000>{
                ProcessState: {
                    pid: 28053,
                    status: 256,
                    rusage: {
                        Utime: {Sec: 0, Usec: 444719},
                        Stime: {Sec: 0, Usec: 177002},
... skipping 69 lines ...

with a single control plane node and an AzureMachinePool with 2 nodes
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:315

INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" started at Tue, 17 May 2022 20:09:19 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-ql3bs2" for hosting the cluster
May 17 20:09:19.102: INFO: starting to create namespace for hosting the "capz-e2e-ql3bs2" test spec
2022/05/17 20:09:19 failed trying to get namespace (capz-e2e-ql3bs2):namespaces "capz-e2e-ql3bs2" not found
INFO: Creating namespace capz-e2e-ql3bs2
INFO: Creating event watcher for namespace "capz-e2e-ql3bs2"
May 17 20:09:19.141: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-ql3bs2-vmss
INFO: Creating the workload cluster with name "capz-e2e-ql3bs2-vmss" using the "machine-pool" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 106 lines ...
STEP: Fetching activity logs took 559.346509ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-ql3bs2" namespace
STEP: Deleting all clusters in the capz-e2e-ql3bs2 namespace
STEP: Deleting cluster capz-e2e-ql3bs2-vmss
INFO: Waiting for the Cluster capz-e2e-ql3bs2/capz-e2e-ql3bs2-vmss to be deleted
STEP: Waiting for cluster capz-e2e-ql3bs2-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-wv5dv, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-kkm2n, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-ql3bs2
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" ran for 18m28s on Ginkgo node 3 of 3
... skipping 10 lines ...
With 3 control-plane nodes and 2 worker nodes
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:203

INFO: "With 3 control-plane nodes and 2 worker nodes" started at Tue, 17 May 2022 19:51:40 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-vleueu" for hosting the cluster
May 17 19:51:40.533: INFO: starting to create namespace for hosting the "capz-e2e-vleueu" test spec
2022/05/17 19:51:40 failed trying to get namespace (capz-e2e-vleueu):namespaces "capz-e2e-vleueu" not found
INFO: Creating namespace capz-e2e-vleueu
INFO: Creating event watcher for namespace "capz-e2e-vleueu"
May 17 19:51:40.600: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-vleueu-ha
INFO: Creating the workload cluster with name "capz-e2e-vleueu-ha" using the "(default)" template (Kubernetes v1.22.1, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 67 lines ...
May 17 20:02:15.214: INFO: starting to delete external LB service webszt92n-elb
May 17 20:02:15.343: INFO: starting to delete deployment webszt92n
May 17 20:02:15.382: INFO: starting to delete job curl-to-elb-jobrw676yyrlv4
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
May 17 20:02:15.465: INFO: starting to create dev deployment namespace
2022/05/17 20:02:15 failed trying to get namespace (development):namespaces "development" not found
2022/05/17 20:02:15 namespace development does not exist, creating...
STEP: Creating production namespace
May 17 20:02:15.552: INFO: starting to create prod deployment namespace
2022/05/17 20:02:15 failed trying to get namespace (production):namespaces "production" not found
2022/05/17 20:02:15 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
May 17 20:02:15.656: INFO: starting to create frontend-prod deployments
May 17 20:02:15.714: INFO: starting to create frontend-dev deployments
May 17 20:02:15.759: INFO: starting to create backend deployments
May 17 20:02:15.803: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
May 17 20:02:38.617: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.209.2 port 80: Connection timed out
STEP: Cleaning up after ourselves
May 17 20:04:50.221: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
STEP: Applying a network policy to deny egress access in development namespace
May 17 20:04:50.409: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.209.2 port 80: Connection timed out
curl: (7) Failed to connect to 192.168.209.2 port 80: Connection timed out
STEP: Cleaning up after ourselves
May 17 20:09:12.421: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
May 17 20:09:12.620: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.209.3 port 80: Connection timed out
STEP: Cleaning up after ourselves
May 17 20:11:23.437: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
May 17 20:11:23.614: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.209.1 port 80: Connection timed out
curl: (7) Failed to connect to 192.168.209.3 port 80: Connection timed out
STEP: Cleaning up after ourselves
May 17 20:15:45.581: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
May 17 20:15:45.738: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.209.2 port 80: Connection timed out
STEP: Cleaning up after ourselves
May 17 20:17:56.653: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
May 17 20:17:56.834: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.209.2 port 80: Connection timed out
STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: Dumping logs from the "capz-e2e-vleueu-ha" workload cluster
STEP: Dumping workload cluster capz-e2e-vleueu/capz-e2e-vleueu-ha logs
May 17 20:20:08.160: INFO: INFO: Collecting logs for node capz-e2e-vleueu-ha-control-plane-k5gd9 in cluster capz-e2e-vleueu-ha in namespace capz-e2e-vleueu
May 17 20:20:17.651: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-vleueu-ha-control-plane-k5gd9
... skipping 39 lines ...
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-vleueu-ha-control-plane-qfbsn, container kube-scheduler
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-xjbm2, container coredns
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-vleueu-ha-control-plane-qfbsn, container etcd
STEP: Creating log watcher for controller kube-system/calico-node-sp5fc, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-n9555, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-dbcsz, container coredns
STEP: Got error while iterating over activity logs for resource group capz-e2e-vleueu-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000995416s
STEP: Dumping all the Cluster API resources in the "capz-e2e-vleueu" namespace
STEP: Deleting all clusters in the capz-e2e-vleueu namespace
STEP: Deleting cluster capz-e2e-vleueu-ha
INFO: Waiting for the Cluster capz-e2e-vleueu/capz-e2e-vleueu-ha to be deleted
STEP: Waiting for cluster capz-e2e-vleueu-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-xjbm2, container coredns: http2: server sent GOAWAY and closed the connection; LastStreamID=113, ErrCode=NO_ERROR, debug=""
STEP: Got error while streaming logs for pod kube-system/calico-node-n9555, container calico-node: http2: server sent GOAWAY and closed the connection; LastStreamID=113, ErrCode=NO_ERROR, debug=""
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-vleueu-ha-control-plane-qfbsn, container kube-apiserver: http2: server sent GOAWAY and closed the connection; LastStreamID=113, ErrCode=NO_ERROR, debug=""
STEP: Got error while streaming logs for pod kube-system/kube-proxy-mjb2c, container kube-proxy: http2: server sent GOAWAY and closed the connection; LastStreamID=113, ErrCode=NO_ERROR, debug=""
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-dbcsz, container coredns: http2: server sent GOAWAY and closed the connection; LastStreamID=113, ErrCode=NO_ERROR, debug=""
STEP: Got error while streaming logs for pod kube-system/kube-proxy-tz425, container kube-proxy: http2: server sent GOAWAY and closed the connection; LastStreamID=113, ErrCode=NO_ERROR, debug=""
STEP: Got error while streaming logs for pod kube-system/calico-node-sp5fc, container calico-node: http2: server sent GOAWAY and closed the connection; LastStreamID=113, ErrCode=NO_ERROR, debug=""
STEP: Got error while streaming logs for pod kube-system/calico-node-8b9pb, container calico-node: http2: server sent GOAWAY and closed the connection; LastStreamID=113, ErrCode=NO_ERROR, debug=""
STEP: Got error while streaming logs for pod kube-system/kube-proxy-f4v5h, container kube-proxy: http2: server sent GOAWAY and closed the connection; LastStreamID=113, ErrCode=NO_ERROR, debug=""
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-vleueu
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 worker nodes" ran for 42m16s on Ginkgo node 2 of 3
... skipping 8 lines ...
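A note on reading the network-policy phase above: the curl "(7) ... Connection timed out" lines are the expected outcome, confirming that each deny policy actually blocks traffic; only an unexpected success would fail the test. As an illustration, here is a hypothetical client-go sketch of a deny-ingress policy equivalent to development/backend-deny-ingress, with labels inferred from the log messages (the real test applies its own manifests):

```go
// Hypothetical sketch, assuming client-go: a deny-ingress NetworkPolicy like
// development/backend-deny-ingress above. Selecting pods and declaring the
// Ingress policy type with no ingress rules blocks all inbound traffic, which
// is why the subsequent curl probes time out.
package main

import (
	"context"

	netv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	policy := &netv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "backend-deny-ingress", Namespace: "development"},
		Spec: netv1.NetworkPolicySpec{
			// Match the backend pods from the log: app: webapp, role: backend.
			PodSelector: metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "webapp", "role": "backend"},
			},
			// Listing Ingress with no rules denies all ingress to selected pods.
			PolicyTypes: []netv1.PolicyType{netv1.PolicyTypeIngress},
		},
	}
	if _, err := client.NetworkingV1().NetworkPolicies("development").
		Create(context.TODO(), policy, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```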
[1mwith a 1 control plane nodes and 2 worker nodes[0m [37m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:419[0m INFO: "with a 1 control plane nodes and 2 worker nodes" started at Tue, 17 May 2022 20:12:25 UTC on Ginkgo node 1 of 3 [1mSTEP[0m: Creating namespace "capz-e2e-e3yfx0" for hosting the cluster May 17 20:12:25.702: INFO: starting to create namespace for hosting the "capz-e2e-e3yfx0" test spec 2022/05/17 20:12:25 failed trying to get namespace (capz-e2e-e3yfx0):namespaces "capz-e2e-e3yfx0" not found INFO: Creating namespace capz-e2e-e3yfx0 INFO: Creating event watcher for namespace "capz-e2e-e3yfx0" May 17 20:12:25.740: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-e3yfx0-oot INFO: Creating the workload cluster with name "capz-e2e-e3yfx0-oot" using the "external-cloud-provider" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster capz-e2e-e3yfx0-oot --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 1 --worker-machine-count 2 --flavor external-cloud-provider INFO: Applying the cluster template yaml to the cluster E0517 20:12:26.615378 24092 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o71z8t/events?resourceVersion=2480": dial tcp: lookup capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host cluster.cluster.x-k8s.io/capz-e2e-e3yfx0-oot created azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-e3yfx0-oot created kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-e3yfx0-oot-control-plane created azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-e3yfx0-oot-control-plane created machinedeployment.cluster.x-k8s.io/capz-e2e-e3yfx0-oot-md-0 created azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-e3yfx0-oot-md-0 created ... skipping 5 lines ... 
configmap/cloud-node-manager-addon created clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-e3yfx0-oot-calico created configmap/cni-capz-e2e-e3yfx0-oot-calico created INFO: Waiting for the cluster infrastructure to be provisioned [1mSTEP[0m: Waiting for cluster to enter the provisioned phase E0517 20:13:05.864484 24092 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o71z8t/events?resourceVersion=2480": dial tcp: lookup capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host INFO: Waiting for control plane to be initialized INFO: Waiting for the first control plane machine managed by capz-e2e-e3yfx0/capz-e2e-e3yfx0-oot-control-plane to be provisioned [1mSTEP[0m: Waiting for one control plane node to exist E0517 20:13:59.949911 24092 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o71z8t/events?resourceVersion=2480": dial tcp: lookup capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0517 20:14:35.856043 24092 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o71z8t/events?resourceVersion=2480": dial tcp: lookup capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0517 20:15:10.251498 24092 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o71z8t/events?resourceVersion=2480": dial tcp: lookup capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0517 20:16:01.425364 24092 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o71z8t/events?resourceVersion=2480": dial tcp: lookup capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host INFO: Waiting for control plane to be ready INFO: Waiting for control plane capz-e2e-e3yfx0/capz-e2e-e3yfx0-oot-control-plane to be ready (implies underlying nodes to be ready as well) [1mSTEP[0m: Waiting for the control plane to be ready INFO: Waiting for the machine deployments to be provisioned [1mSTEP[0m: Waiting for the workload nodes to exist E0517 20:16:38.944478 24092 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o71z8t/events?resourceVersion=2480": dial tcp: lookup capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0517 20:17:35.006817 24092 reflector.go:138] 
pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o71z8t/events?resourceVersion=2480": dial tcp: lookup capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host INFO: Waiting for the machine pools to be provisioned [1mSTEP[0m: creating a Kubernetes client to the workload cluster [1mSTEP[0m: creating an HTTP deployment [1mSTEP[0m: waiting for deployment default/webesqhjb to be available May 17 20:18:17.855: INFO: starting to wait for deployment to become available E0517 20:18:34.515365 24092 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o71z8t/events?resourceVersion=2480": dial tcp: lookup capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host May 17 20:18:37.983: INFO: Deployment default/webesqhjb is now available, took 20.127781649s [1mSTEP[0m: creating an internal Load Balancer service May 17 20:18:37.983: INFO: starting to create an internal Load Balancer service [1mSTEP[0m: waiting for service default/webesqhjb-ilb to be available May 17 20:18:38.046: INFO: waiting for service default/webesqhjb-ilb to be available E0517 20:19:11.645259 24092 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o71z8t/events?resourceVersion=2480": dial tcp: lookup capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host May 17 20:19:48.350: INFO: service default/webesqhjb-ilb is available, took 1m10.304457447s [1mSTEP[0m: connecting to the internal LB service from a curl pod May 17 20:19:48.387: INFO: starting to create a curl to ilb job [1mSTEP[0m: waiting for job default/curl-to-ilb-job1vofo to be complete May 17 20:19:48.440: INFO: waiting for job default/curl-to-ilb-job1vofo to be complete May 17 20:19:58.513: INFO: job default/curl-to-ilb-job1vofo is complete, took 10.073555325s [1mSTEP[0m: deleting the ilb test resources May 17 20:19:58.514: INFO: deleting the ilb service: webesqhjb-ilb May 17 20:19:58.591: INFO: deleting the ilb job: curl-to-ilb-job1vofo [1mSTEP[0m: creating an external Load Balancer service May 17 20:19:58.634: INFO: starting to create an external Load Balancer service [1mSTEP[0m: waiting for service default/webesqhjb-elb to be available May 17 20:19:58.736: INFO: waiting for service default/webesqhjb-elb to be available E0517 20:20:02.011221 24092 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o71z8t/events?resourceVersion=2480": dial tcp: lookup capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0517 20:20:45.143393 24092 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get 
"https://capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o71z8t/events?resourceVersion=2480": dial tcp: lookup capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0517 20:21:25.517496 24092 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o71z8t/events?resourceVersion=2480": dial tcp: lookup capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host May 17 20:21:49.182: INFO: service default/webesqhjb-elb is available, took 1m50.446261865s [1mSTEP[0m: connecting to the external LB service from a curl pod May 17 20:21:49.218: INFO: starting to create curl-to-elb job [1mSTEP[0m: waiting for job default/curl-to-elb-job5siwrn5w9la to be complete May 17 20:21:49.262: INFO: waiting for job default/curl-to-elb-job5siwrn5w9la to be complete May 17 20:21:59.340: INFO: job default/curl-to-elb-job5siwrn5w9la is complete, took 10.078201643s ... skipping 14 lines ... May 17 20:22:08.508: INFO: INFO: Collecting logs for node capz-e2e-e3yfx0-oot-md-0-lxs29 in cluster capz-e2e-e3yfx0-oot in namespace capz-e2e-e3yfx0 May 17 20:22:16.429: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-e3yfx0-oot-md-0-lxs29 May 17 20:22:16.722: INFO: INFO: Collecting logs for node capz-e2e-e3yfx0-oot-md-0-gpmv2 in cluster capz-e2e-e3yfx0-oot in namespace capz-e2e-e3yfx0 E0517 20:22:19.510496 24092 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o71z8t/events?resourceVersion=2480": dial tcp: lookup capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host May 17 20:22:26.807: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-e3yfx0-oot-md-0-gpmv2 [1mSTEP[0m: Dumping workload cluster capz-e2e-e3yfx0/capz-e2e-e3yfx0-oot kube-system pod logs [1mSTEP[0m: Fetching kube-system pod logs took 256.626529ms [1mSTEP[0m: Creating log watcher for controller kube-system/cloud-node-manager-xvxzk, container cloud-node-manager [1mSTEP[0m: Creating log watcher for controller kube-system/cloud-controller-manager, container cloud-controller-manager ... skipping 16 lines ... 
STEP: Fetching activity logs took 607.880174ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-e3yfx0" namespace
STEP: Deleting all clusters in the capz-e2e-e3yfx0 namespace
STEP: Deleting cluster capz-e2e-e3yfx0-oot
INFO: Waiting for the Cluster capz-e2e-e3yfx0/capz-e2e-e3yfx0-oot to be deleted
STEP: Waiting for cluster capz-e2e-e3yfx0-oot to be deleted
... skipping 7 identical reflector errors (20:23:05 through 20:27:00) ...
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-9ztcg, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-k697v, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-8vpsp, container calico-node: http2: client connection lost
... skipping 8 identical reflector errors (20:27:41 through 20:32:03) ...
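The "http2: client connection lost" lines are the expected tail of teardown: the suite is still following kube-system pod logs over long-lived connections when the VMs behind them are deleted, so the streams end with a transport error instead of a clean EOF. A minimal sketch of one such follower; the pod and container names are taken from the log, the rest is illustrative:

// Hedged sketch of a pod-log follower.
package main

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/workload.kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	req := cs.CoreV1().Pods("kube-system").GetLogs("cloud-node-manager-9ztcg",
		&corev1.PodLogOptions{Container: "cloud-node-manager", Follow: true})
	rc, err := req.Stream(context.TODO())
	if err != nil {
		panic(err)
	}
	defer rc.Close()

	// When the VM hosting the pod is deleted mid-stream, this copy returns
	// a transport error such as "http2: client connection lost" rather than
	// a clean EOF, which the suite surfaces as the STEP lines above.
	if _, err := io.Copy(os.Stdout, rc); err != nil {
		os.Stderr.WriteString("stream ended: " + err.Error() + "\n")
	}
}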
"https://capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o71z8t/events?resourceVersion=2480": dial tcp: lookup capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0517 20:32:43.427556 24092 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o71z8t/events?resourceVersion=2480": dial tcp: lookup capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0517 20:33:33.574323 24092 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o71z8t/events?resourceVersion=2480": dial tcp: lookup capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0517 20:34:24.609831 24092 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o71z8t/events?resourceVersion=2480": dial tcp: lookup capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0517 20:35:23.527039 24092 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o71z8t/events?resourceVersion=2480": dial tcp: lookup capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0517 20:36:12.978000 24092 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o71z8t/events?resourceVersion=2480": dial tcp: lookup capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0517 20:36:55.045409 24092 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o71z8t/events?resourceVersion=2480": dial tcp: lookup capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0517 20:37:29.634742 24092 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o71z8t/events?resourceVersion=2480": dial tcp: lookup capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host [1mSTEP[0m: Deleting namespace used for hosting the "create-workload-cluster" test spec INFO: Deleting namespace capz-e2e-e3yfx0 [1mSTEP[0m: Checking if any resources are left over in Azure for spec "create-workload-cluster" 
STEP: Redacting sensitive information from logs
... skipping 1 identical reflector error (20:38:16) ...
INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 25m56s on Ginkgo node 1 of 3
• [SLOW TEST:1555.695 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
with a single control plane node and 1 node
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454
INFO: "with a single control plane node and 1 node" started at Tue, 17 May 2022 20:27:47 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-m7shez" for hosting the cluster
May 17 20:27:47.220: INFO: starting to create namespace for hosting the "capz-e2e-m7shez" test spec
2022/05/17 20:27:47 failed trying to get namespace (capz-e2e-m7shez):namespaces "capz-e2e-m7shez" not found
INFO: Creating namespace capz-e2e-m7shez
INFO: Creating event watcher for namespace "capz-e2e-m7shez"
May 17 20:27:47.260: INFO: Creating cluster identity secret "cluster-identity-secret"
INFO: Cluster name is capz-e2e-m7shez-aks
INFO: Creating the workload cluster with name "capz-e2e-m7shez-aks" using the "aks-multi-tenancy" template (Kubernetes v1.22.6, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 34 lines ...
STEP: Dumping logs from the "capz-e2e-m7shez-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-m7shez/capz-e2e-m7shez-aks logs
May 17 20:37:35.632: INFO: INFO: Collecting logs for node aks-agentpool1-27585753-vmss000000 in cluster capz-e2e-m7shez-aks in namespace capz-e2e-m7shez
May 17 20:39:45.051: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0
Failed to get logs for machine pool agentpool0, cluster capz-e2e-m7shez/capz-e2e-m7shez-aks: [dialing public load balancer at capz-e2e-m7shez-aks-d9dda9ce.hcp.eastus2.azmk8s.io: dial tcp 20.65.29.4:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
May 17 20:39:45.672: INFO: INFO: Collecting logs for node aks-agentpool1-27585753-vmss000000 in cluster capz-e2e-m7shez-aks in namespace capz-e2e-m7shez
May 17 20:41:56.119: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0
Failed to get logs for machine pool agentpool1, cluster capz-e2e-m7shez/capz-e2e-m7shez-aks: [dialing public load balancer at capz-e2e-m7shez-aks-d9dda9ce.hcp.eastus2.azmk8s.io: dial tcp 20.65.29.4:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
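Both log-collection paths fail for the AKS machine pools: SSH through the AKS load balancer times out (port 22 is not reachable there), and the boot-diagnostics fallback 404s because the collector resolved the scale-set name to "0" (hence "scale set 0" and "Parent resource '0' not found" above). For reference, a hedged sketch of the call named in the error, against the legacy Azure SDK for Go; the API version, subscription, resource group, and scale-set name are assumptions:

// Hedged sketch of the boot-diagnostics call from the error message.
package main

import (
	"context"
	"fmt"

	"github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2021-07-01/compute"
	"github.com/Azure/go-autorest/autorest/azure/auth"
)

func main() {
	authorizer, err := auth.NewAuthorizerFromEnvironment()
	if err != nil {
		panic(err)
	}
	client := compute.NewVirtualMachineScaleSetVMsClient("<subscription-id>")
	client.Authorizer = authorizer

	// The parent resource is the scale set; "0" belongs in the instance-ID
	// slot. Passing "0" as the scale-set name, as the collector apparently
	// did, reproduces the ParentResourceNotFound 404.
	res, err := client.RetrieveBootDiagnosticsData(
		context.Background(),
		"<resource-group>",
		"aks-agentpool1-27585753-vmss", // scale-set name, not "0"
		"0",                            // VMSS instance ID
		nil,                            // default SAS expiration
	)
	if err != nil {
		panic(err) // a 404 here matches the failure in the log
	}
	fmt.Println(*res.SerialConsoleLogBlobURI)
}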
STEP: Dumping workload cluster capz-e2e-m7shez/capz-e2e-m7shez-aks kube-system pod logs
STEP: Fetching kube-system pod logs took 433.353837ms
STEP: Creating log watcher for controller kube-system/azure-ip-masq-agent-2kzns, container azure-ip-masq-agent
STEP: Creating log watcher for controller kube-system/cloud-node-manager-zr7vt, container cloud-node-manager
STEP: Creating log watcher for controller kube-system/csi-azuredisk-node-q6ss4, container azuredisk
STEP: Creating log watcher for controller kube-system/azure-ip-masq-agent-48744, container azure-ip-masq-agent
... skipping 44 lines ...
with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Tue, 17 May 2022 20:38:21 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-y7gsto" for hosting the cluster
May 17 20:38:21.401: INFO: starting to create namespace for hosting the "capz-e2e-y7gsto" test spec
2022/05/17 20:38:21 failed trying to get namespace (capz-e2e-y7gsto):namespaces "capz-e2e-y7gsto" not found
INFO: Creating namespace capz-e2e-y7gsto
INFO: Creating event watcher for namespace "capz-e2e-y7gsto"
May 17 20:38:21.445: INFO: Creating cluster identity secret "cluster-identity-secret"
INFO: Cluster name is capz-e2e-y7gsto-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-y7gsto-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 12 lines ...
kubeadmconfig.bootstrap.cluster.x-k8s.io/capz-e2e-y7gsto-win-vmss-mp-win created
clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-y7gsto-win-vmss-flannel created
configmap/cni-capz-e2e-y7gsto-win-vmss-flannel created
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
... skipping 1 identical reflector error (20:38:48) ...
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by capz-e2e-y7gsto/capz-e2e-y7gsto-win-vmss-control-plane to be provisioned
STEP: Waiting for one control plane node to exist
... skipping 4 identical reflector errors (20:39:26 through 20:41:31) ...
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane capz-e2e-y7gsto/capz-e2e-y7gsto-win-vmss-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
INFO: Waiting for the machine pools to be provisioned
STEP: Waiting for the machine pool workload nodes to exist
... skipping 1 identical reflector error (20:42:08) ...
... skipping 2 identical reflector errors (20:42:41, 20:43:12) ...
STEP: Waiting for the machine pool workload nodes to exist
... skipping 3 identical reflector errors (20:44:04, 20:44:40, 20:45:35) ...
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web9el5f4 to be available
May 17 20:45:53.223: INFO: starting to wait for deployment to become available
... skipping 1 identical reflector error (20:46:09) ...
May 17 20:46:13.333: INFO: Deployment default/web9el5f4 is now available, took 20.109981871s
STEP: creating an internal Load Balancer service
May 17 20:46:13.333: INFO: starting to create an internal Load Balancer service
STEP: waiting for service default/web9el5f4-ilb to be available
May 17 20:46:13.389: INFO: waiting for service default/web9el5f4-ilb to be available
... skipping 1 identical reflector error (20:46:58) ...
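The "deployment is now available" waits are plain status polls. A hedged sketch of the pattern using apimachinery's wait helpers, with the deployment name from the log and an assumed interval, timeout, and kubeconfig:

// Hedged sketch of the availability poll.
package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/workload.kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll until all desired replicas report available, as in
	// "waiting for deployment default/web9el5f4 to be available".
	err = wait.PollImmediate(5*time.Second, 5*time.Minute, func() (bool, error) {
		d, err := cs.AppsV1().Deployments("default").Get(
			context.TODO(), "web9el5f4", metav1.GetOptions{})
		if err != nil {
			return false, nil // keep retrying on transient errors
		}
		return d.Spec.Replicas != nil &&
			d.Status.AvailableReplicas == *d.Spec.Replicas, nil
	})
	if err != nil {
		panic(err)
	}
}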
May 17 20:47:03.596: INFO: service default/web9el5f4-ilb is available, took 50.2064989s
STEP: connecting to the internal LB service from a curl pod
May 17 20:47:03.628: INFO: starting to create a curl to ilb job
STEP: waiting for job default/curl-to-ilb-jobzka01 to be complete
May 17 20:47:03.668: INFO: waiting for job default/curl-to-ilb-jobzka01 to be complete
May 17 20:47:13.733: INFO: job default/curl-to-ilb-jobzka01 is complete, took 10.065776454s
STEP: deleting the ilb test resources
May 17 20:47:13.733: INFO: deleting the ilb service: web9el5f4-ilb
May 17 20:47:13.783: INFO: deleting the ilb job: curl-to-ilb-jobzka01
STEP: creating an external Load Balancer service
May 17 20:47:13.827: INFO: starting to create an external Load Balancer service
STEP: waiting for service default/web9el5f4-elb to be available
May 17 20:47:13.876: INFO: waiting for service default/web9el5f4-elb to be available
... skipping 2 identical reflector errors (20:47:29, 20:48:00) ...
May 17 20:48:24.143: INFO: service default/web9el5f4-elb is available, took 1m10.267050368s
STEP: connecting to the external LB service from a curl pod
May 17 20:48:24.175: INFO: starting to create curl-to-elb job
STEP: waiting for job default/curl-to-elb-jobhk2hfr9uz8k to be complete
May 17 20:48:24.210: INFO: waiting for job default/curl-to-elb-jobhk2hfr9uz8k to be complete
... skipping 1 identical reflector error (20:48:32) ...
May 17 20:48:34.274: INFO: job default/curl-to-elb-jobhk2hfr9uz8k is complete, took 10.064423122s
STEP: connecting directly to the external LB service
May 17 20:48:34.274: INFO: starting attempts to connect directly to the external LB service
2022/05/17 20:48:34 [DEBUG] GET http://20.85.8.74
May 17 20:48:37.373: INFO: successfully connected to the external LB service
STEP: deleting the test resources
May 17 20:48:37.373: INFO: starting to delete external LB service web9el5f4-elb
May 17 20:48:37.425: INFO: starting to delete deployment web9el5f4
May 17 20:48:37.458: INFO: starting to delete job curl-to-elb-jobhk2hfr9uz8k
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web-windowslzh2i5 to be available
May 17 20:48:37.580: INFO: starting to wait for deployment to become available
... skipping 4 identical reflector errors (20:49:10 through 20:51:03) ...
May 17 20:51:08.151: INFO: Deployment default/web-windowslzh2i5 is now available, took 2m30.571899426s
STEP: creating an internal Load Balancer service
May 17 20:51:08.151: INFO: starting to create an internal Load Balancer service
STEP: waiting for service default/web-windowslzh2i5-ilb to be available
May 17 20:51:08.219: INFO: waiting for service default/web-windowslzh2i5-ilb to be available
... skipping 1 identical reflector error (20:51:56) ...
May 17 20:51:58.422: INFO: service default/web-windowslzh2i5-ilb is available, took 50.203224465s
STEP: connecting to the internal LB service from a curl pod
May 17 20:51:58.454: INFO: starting to create a curl to ilb job
STEP: waiting for job default/curl-to-ilb-joby3upf to be complete
May 17 20:51:58.490: INFO: waiting for job default/curl-to-ilb-joby3upf to be complete
May 17 20:52:08.556: INFO: job default/curl-to-ilb-joby3upf is complete, took 10.066029022s
STEP: deleting the ilb test resources
May 17 20:52:08.556: INFO: deleting the ilb service: web-windowslzh2i5-ilb
May 17 20:52:08.613: INFO: deleting the ilb job: curl-to-ilb-joby3upf
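This second pass exercises the Windows machine pool: web-windowslzh2i5 lands on Windows nodes, and the 2m30s availability time (versus 20s for the Linux deployment) is typical of a first Windows image pull. Placement uses the standard kubernetes.io/os node selector. A hedged sketch; the deployment name, image, and labels below are placeholders:

// Hedged sketch of a Windows-targeted deployment.
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/workload.kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	labels := map[string]string{"app": "web-windows"}
	replicas := int32(1)
	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "web-windows", Namespace: "default"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// The well-known label that pins pods to Windows nodes;
					// the large Windows base image explains the slow pull.
					NodeSelector: map[string]string{"kubernetes.io/os": "windows"},
					Containers: []corev1.Container{{
						Name:  "web",
						Image: "mcr.microsoft.com/windows/servercore/iis", // placeholder
					}},
				},
			},
		},
	}
	if _, err := cs.AppsV1().Deployments("default").Create(
		context.TODO(), dep, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}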
STEP: creating an external Load Balancer service
May 17 20:52:08.646: INFO: starting to create an external Load Balancer service
STEP: waiting for service default/web-windowslzh2i5-elb to be available
May 17 20:52:08.697: INFO: waiting for service default/web-windowslzh2i5-elb to be available
... skipping 5 identical reflector errors (20:52:55 through 20:55:44) ...
May 17 20:55:59.501: INFO: service default/web-windowslzh2i5-elb is available, took 3m50.803257523s
STEP: connecting to the external LB service from a curl pod
May 17 20:55:59.532: INFO: starting to create curl-to-elb job
STEP: waiting for job default/curl-to-elb-jobce9ek3rmxhp to be complete
May 17 20:55:59.568: INFO: waiting for job default/curl-to-elb-jobce9ek3rmxhp to be complete
May 17 20:56:09.635: INFO: job default/curl-to-elb-jobce9ek3rmxhp is complete, took 10.066525356s
... skipping 10 lines ...
May 17 20:56:25.217: INFO: INFO: Collecting logs for node capz-e2e-y7gsto-win-vmss-control-plane-5x8rf in cluster capz-e2e-y7gsto-win-vmss in namespace capz-e2e-y7gsto
May 17 20:56:34.511: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-y7gsto-win-vmss-control-plane-5x8rf
May 17 20:56:35.417: INFO: INFO: Collecting logs for node capz-e2e-y7gsto-win-vmss-mp-0000000 in cluster capz-e2e-y7gsto-win-vmss in namespace capz-e2e-y7gsto
... skipping 1 identical reflector error (20:56:36) ...
May 17 20:56:45.474: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-e2e-y7gsto-win-vmss-mp-0
May 17 20:56:45.869: INFO: INFO: Collecting logs for node win-p-win000000 in cluster capz-e2e-y7gsto-win-vmss in namespace capz-e2e-y7gsto
... skipping 1 identical reflector error (20:57:14) ...
May 17 20:57:20.668: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set win-p-win
STEP: Dumping workload cluster capz-e2e-y7gsto/capz-e2e-y7gsto-win-vmss kube-system pod logs
STEP: Fetching kube-system pod logs took 213.326849ms
STEP: Dumping workload cluster capz-e2e-y7gsto/capz-e2e-y7gsto-win-vmss Azure activity log
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-y7gsto-win-vmss-control-plane-5x8rf, container etcd
... skipping 11 lines ...
STEP: Fetching activity logs took 908.649582ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-y7gsto" namespace
STEP: Deleting all clusters in the capz-e2e-y7gsto namespace
STEP: Deleting cluster capz-e2e-y7gsto-win-vmss
INFO: Waiting for the Cluster capz-e2e-y7gsto/capz-e2e-y7gsto-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-y7gsto-win-vmss to be deleted
... skipping 1 identical reflector error (20:57:47) ...
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-w6gp4, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-5dhn8, container kube-proxy: http2: client connection lost
... skipping 6 identical reflector errors (20:58:38 through 21:02:08) ...
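"Waiting for cluster ... to be deleted" polls the management cluster until the Cluster object is fully gone, meaning every finalizer has released and the Azure resources behind it are cleaned up. A hedged sketch using the dynamic client; the v1alpha4 GVR matches the cluster-api v0.4.x line this job pins, while the kubeconfig, interval, and timeout are assumptions:

// Hedged sketch of the wait-for-deletion poll.
package main

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/mgmt.kubeconfig")
	if err != nil {
		panic(err)
	}
	dyn := dynamic.NewForConfigOrDie(cfg)

	clusters := schema.GroupVersionResource{
		Group: "cluster.x-k8s.io", Version: "v1alpha4", Resource: "clusters"}

	// Done only once the Cluster object returns NotFound, i.e. deletion
	// (and hence Azure cleanup) has completed.
	err = wait.PollImmediate(10*time.Second, 30*time.Minute, func() (bool, error) {
		_, err := dyn.Resource(clusters).Namespace("capz-e2e-y7gsto").Get(
			context.TODO(), "capz-e2e-y7gsto-win-vmss", metav1.GetOptions{})
		return apierrors.IsNotFound(err), nil
	})
	if err != nil {
		panic(err)
	}
}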
"https://capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o71z8t/events?resourceVersion=2480": dial tcp: lookup capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0517 21:02:39.538417 24092 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o71z8t/events?resourceVersion=2480": dial tcp: lookup capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0517 21:03:33.786815 24092 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o71z8t/events?resourceVersion=2480": dial tcp: lookup capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0517 21:04:33.170063 24092 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o71z8t/events?resourceVersion=2480": dial tcp: lookup capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0517 21:05:11.794548 24092 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o71z8t/events?resourceVersion=2480": dial tcp: lookup capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host E0517 21:05:58.218397 24092 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o71z8t/events?resourceVersion=2480": dial tcp: lookup capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host [1mSTEP[0m: Deleting namespace used for hosting the "create-workload-cluster" test spec INFO: Deleting namespace capz-e2e-y7gsto [1mSTEP[0m: Checking if any resources are left over in Azure for spec "create-workload-cluster" [1mSTEP[0m: Redacting sensitive information from logs E0517 21:06:49.448748 24092 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-o71z8t/events?resourceVersion=2480": dial tcp: lookup capz-e2e-o71z8t-public-custom-vnet-adffb4b5.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 28m56s on Ginkgo node 1 of 3 [32m• [SLOW TEST:1736.136 seconds][0m Workload cluster creation [90m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43[0m ... skipping 6 lines ... 
With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Tue, 17 May 2022 20:33:56 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-cvn1qv" for hosting the cluster
May 17 20:33:56.974: INFO: starting to create namespace for hosting the "capz-e2e-cvn1qv" test spec
2022/05/17 20:33:56 failed trying to get namespace (capz-e2e-cvn1qv):namespaces "capz-e2e-cvn1qv" not found
INFO: Creating namespace capz-e2e-cvn1qv
INFO: Creating event watcher for namespace "capz-e2e-cvn1qv"
May 17 20:33:57.031: INFO: Creating cluster identity secret "cluster-identity-secret"
INFO: Cluster name is capz-e2e-cvn1qv-win-ha
INFO: Creating the workload cluster with name "capz-e2e-cvn1qv-win-ha" using the "windows" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 151 lines ...
STEP: Fetching activity logs took 1.045274195s
STEP: Dumping all the Cluster API resources in the "capz-e2e-cvn1qv" namespace
STEP: Deleting all clusters in the capz-e2e-cvn1qv namespace
STEP: Deleting cluster capz-e2e-cvn1qv-win-ha
INFO: Waiting for the Cluster capz-e2e-cvn1qv/capz-e2e-cvn1qv-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-cvn1qv-win-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-h29p6, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-9stxh, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-cvn1qv-win-ha-control-plane-zx62q, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-cvn1qv-win-ha-control-plane-zx62q, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-cvn1qv-win-ha-control-plane-hvl7s, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-cvn1qv-win-ha-control-plane-hvl7s, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-cvn1qv-win-ha-control-plane-hvl7s, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vfnq7, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-frrnw, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-m4mxb, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-t4slp, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-cvn1qv-win-ha-control-plane-zx62q, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-cvn1qv-win-ha-control-plane-zx62q, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-4lt66, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-cvn1qv-win-ha-control-plane-hvl7s, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-kdgz2, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-cvn1qv
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 35m35s on Ginkgo node 2 of 3
... skipping 3 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
Creating a Windows Enabled cluster
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:494
With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496
------------------------------
... skipping 4 identical reflector errors (21:07:29 through 21:09:24) ...
STEP: Tearing down the management cluster

Summarizing 1 Failure:

[Fail] Workload cluster creation Creating a private cluster [It] Creates a public management cluster in the same vnet
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.8-0.20220215165403-0234afe87ffe/framework/clusterctl/clusterctl_helpers.go:272

Ran 8 of 22 Specs in 4787.446 seconds
FAIL! -- 7 Passed | 1 Failed | 0 Pending | 14 Skipped
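For orientation, the failure path above maps onto the usual Ginkgo v1 nesting in azure_test.go. A hedged skeleton of that shape; only the Describe/Context/It strings come from this log, the body is elided:

// Hedged sketch of the failing spec's structure, not the actual test code.
package e2e

import . "github.com/onsi/ginkgo"

var _ = Describe("Workload cluster creation", func() {
	Context("Creating a private cluster", func() {
		It("Creates a public management cluster in the same vnet", func() {
			// The real body creates the custom-vnet management cluster and
			// then the private workload cluster; this run failed inside
			// clusterctl_helpers.go:272 while creating the workload cluster.
		})
	})
})

The summary confirms the reflector noise throughout was collateral: only this one private-cluster spec failed, and the other seven creation specs passed.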
Ginkgo ran 1 suite in 1h21m8.402313684s
Test Suite Failed
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...