Recent runs || View in Spyglass
Result | FAILURE |
Tests | 1 failed / 8 succeeded |
Started | |
Elapsed | 1h46m |
Revision | release-0.5 |
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\san\sAKS\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:115 Timed out after 1800.000s. Expected <bool>: false to be true /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.8-0.20220215165403-0234afe87ffe/framework/cluster_helpers.go:165from junit.e2e_suite.1.xml
INFO: "with a single control plane node and 1 node" started at Thu, 21 Apr 2022 20:38:11 UTC on Ginkgo node 1 of 3 �[1mSTEP�[0m: Creating namespace "capz-e2e-2ryiq3" for hosting the cluster Apr 21 20:38:11.613: INFO: starting to create namespace for hosting the "capz-e2e-2ryiq3" test spec 2022/04/21 20:38:11 failed trying to get namespace (capz-e2e-2ryiq3):namespaces "capz-e2e-2ryiq3" not found INFO: Creating namespace capz-e2e-2ryiq3 INFO: Creating event watcher for namespace "capz-e2e-2ryiq3" Apr 21 20:38:11.644: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-2ryiq3-aks INFO: Creating the workload cluster with name "capz-e2e-2ryiq3-aks" using the "aks-multi-tenancy" template (Kubernetes v1.22.6, 1 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster capz-e2e-2ryiq3-aks --infrastructure (default) --kubernetes-version v1.22.6 --control-plane-machine-count 1 --worker-machine-count 1 --flavor aks-multi-tenancy INFO: Applying the cluster template yaml to the cluster cluster.cluster.x-k8s.io/capz-e2e-2ryiq3-aks created azuremanagedcontrolplane.infrastructure.cluster.x-k8s.io/capz-e2e-2ryiq3-aks created azuremanagedcluster.infrastructure.cluster.x-k8s.io/capz-e2e-2ryiq3-aks created machinepool.cluster.x-k8s.io/agentpool0 created azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool0 created machinepool.cluster.x-k8s.io/agentpool1 created azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created INFO: Waiting for the cluster infrastructure to be provisioned �[1mSTEP�[0m: Waiting for cluster to enter the provisioned phase E0421 20:38:28.114164 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:39:09.504957 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:39:52.511927 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:40:33.521566 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:41:24.993470 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:41:56.972501 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:42:35.989643 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:43:08.473307 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:43:55.681559 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host INFO: Waiting for control plane to be initialized Apr 21 20:44:03.615: INFO: Waiting for the first control plane machine managed by capz-e2e-2ryiq3/capz-e2e-2ryiq3-aks to be provisioned �[1mSTEP�[0m: Waiting for atleast one control plane node to exist INFO: Waiting for control plane to be ready Apr 21 20:44:03.652: INFO: Waiting for the first control plane machine managed by capz-e2e-2ryiq3/capz-e2e-2ryiq3-aks to be provisioned �[1mSTEP�[0m: Waiting for all control plane nodes to exist INFO: Waiting for the machine deployments to be provisioned INFO: Waiting for the machine pools to be provisioned �[1mSTEP�[0m: Waiting for the machine pool workload nodes to exist �[1mSTEP�[0m: Waiting for the machine pool workload nodes to exist Apr 21 20:44:14.962: INFO: want 2 instances, found 0 ready and 0 available. generation: 1, observedGeneration: 1 Apr 21 20:44:20.068: INFO: want 2 instances, found 2 ready and 2 available. generation: 1, observedGeneration: 1 Apr 21 20:44:20.180: INFO: mapping nsenter pods to hostnames for host-by-host execution Apr 21 20:44:20.180: INFO: found host aks-agentpool0-58290704-vmss000000 with pod nsenter-7r8kg Apr 21 20:44:20.180: INFO: found host aks-agentpool1-58290704-vmss000000 with pod nsenter-24x2h �[1mSTEP�[0m: checking that time synchronization is healthy on aks-agentpool1-58290704-vmss000000 �[1mSTEP�[0m: checking that time synchronization is healthy on aks-agentpool1-58290704-vmss000000 �[1mSTEP�[0m: time sync OK for host aks-agentpool1-58290704-vmss000000 �[1mSTEP�[0m: time sync OK for host aks-agentpool1-58290704-vmss000000 �[1mSTEP�[0m: time sync OK for host aks-agentpool1-58290704-vmss000000 �[1mSTEP�[0m: time sync OK for host aks-agentpool1-58290704-vmss000000 �[1mSTEP�[0m: Dumping logs from the "capz-e2e-2ryiq3-aks" workload cluster �[1mSTEP�[0m: Dumping workload cluster capz-e2e-2ryiq3/capz-e2e-2ryiq3-aks logs Apr 21 20:44:21.971: INFO: INFO: Collecting logs for node aks-agentpool1-58290704-vmss000000 in cluster capz-e2e-2ryiq3-aks in namespace capz-e2e-2ryiq3 E0421 20:44:52.458657 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:45:23.938987 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:46:05.333474 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host Apr 21 20:46:32.625: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0 Failed to get logs for machine pool agentpool0, cluster capz-e2e-2ryiq3/capz-e2e-2ryiq3-aks: [dialing public load balancer at capz-e2e-2ryiq3-aks-c9bb4370.hcp.uksouth.azmk8s.io: dial tcp 51.132.168.222:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."] Apr 21 20:46:33.178: INFO: INFO: Collecting logs for node aks-agentpool1-58290704-vmss000000 in cluster capz-e2e-2ryiq3-aks in namespace capz-e2e-2ryiq3 E0421 20:46:59.049543 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:47:45.615460 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:48:23.952485 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host Apr 21 20:48:43.697: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0 Failed to get logs for machine pool agentpool1, cluster capz-e2e-2ryiq3/capz-e2e-2ryiq3-aks: [dialing public load balancer at capz-e2e-2ryiq3-aks-c9bb4370.hcp.uksouth.azmk8s.io: dial tcp 51.132.168.222:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."] �[1mSTEP�[0m: Dumping workload cluster capz-e2e-2ryiq3/capz-e2e-2ryiq3-aks kube-system pod logs �[1mSTEP�[0m: Fetching kube-system pod logs took 990.668393ms �[1mSTEP�[0m: Dumping workload cluster capz-e2e-2ryiq3/capz-e2e-2ryiq3-aks Azure activity log �[1mSTEP�[0m: Creating log watcher for controller kube-system/azure-ip-masq-agent-v78rh, container azure-ip-masq-agent �[1mSTEP�[0m: Creating log watcher for controller kube-system/cloud-node-manager-trmrk, container cloud-node-manager �[1mSTEP�[0m: Creating log watcher for controller kube-system/coredns-69c47794-289lm, container coredns �[1mSTEP�[0m: Creating log watcher for controller kube-system/coredns-69c47794-96tm8, container coredns �[1mSTEP�[0m: Creating log watcher for controller kube-system/csi-azurefile-node-mdpsv, container azurefile �[1mSTEP�[0m: Creating log watcher for controller kube-system/konnectivity-agent-6c9647749c-gjghf, container konnectivity-agent �[1mSTEP�[0m: Creating log watcher for controller kube-system/csi-azuredisk-node-nvqgt, container liveness-probe �[1mSTEP�[0m: Creating log watcher for controller kube-system/csi-azuredisk-node-nvqgt, container node-driver-registrar �[1mSTEP�[0m: Creating log watcher for controller kube-system/csi-azurefile-node-mdpsv, container liveness-probe �[1mSTEP�[0m: Creating log watcher for controller kube-system/csi-azuredisk-node-nh5dm, container liveness-probe �[1mSTEP�[0m: Creating log watcher for controller kube-system/csi-azuredisk-node-nh5dm, container node-driver-registrar �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-proxy-9rmsr, container kube-proxy �[1mSTEP�[0m: Creating log watcher for controller kube-system/azure-ip-masq-agent-q66j5, container azure-ip-masq-agent �[1mSTEP�[0m: Creating log watcher for controller kube-system/coredns-autoscaler-7d56cd888-rzd7d, container autoscaler �[1mSTEP�[0m: Creating log watcher for controller kube-system/csi-azurefile-node-mdpsv, container node-driver-registrar �[1mSTEP�[0m: Creating log watcher for controller kube-system/csi-azuredisk-node-nh5dm, container azuredisk �[1mSTEP�[0m: Creating log watcher for controller kube-system/metrics-server-6576d9ccf8-w7rjv, container metrics-server �[1mSTEP�[0m: Creating log watcher for controller kube-system/cloud-node-manager-w779k, container cloud-node-manager �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-proxy-cjbw4, container kube-proxy �[1mSTEP�[0m: Creating log watcher for controller kube-system/csi-azuredisk-node-nvqgt, container azuredisk �[1mSTEP�[0m: Creating log watcher for controller kube-system/csi-azurefile-node-ztwnk, container liveness-probe �[1mSTEP�[0m: Creating log watcher for controller kube-system/csi-azurefile-node-ztwnk, container node-driver-registrar �[1mSTEP�[0m: Creating log watcher for controller kube-system/konnectivity-agent-6c9647749c-7b2sn, container konnectivity-agent �[1mSTEP�[0m: Creating log watcher for controller kube-system/csi-azurefile-node-ztwnk, container azurefile �[1mSTEP�[0m: Fetching activity logs took 492.914326ms �[1mSTEP�[0m: Dumping all the Cluster API resources in the "capz-e2e-2ryiq3" namespace �[1mSTEP�[0m: Deleting all clusters in the capz-e2e-2ryiq3 namespace �[1mSTEP�[0m: Deleting cluster capz-e2e-2ryiq3-aks INFO: Waiting for the Cluster capz-e2e-2ryiq3/capz-e2e-2ryiq3-aks to be deleted �[1mSTEP�[0m: Waiting for cluster capz-e2e-2ryiq3-aks to be deleted E0421 20:49:04.984340 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:50:01.786047 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:50:36.791869 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:51:34.787557 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:52:17.569000 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:52:50.538634 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:53:21.486666 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:53:54.056062 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:54:38.738434 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:55:21.762226 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:56:03.601553 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:56:42.636917 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:57:27.070514 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:58:17.620219 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:59:16.114341 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:00:08.298608 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:00:42.795556 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:01:18.789451 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:02:11.232807 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:02:46.857986 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:03:23.560821 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:04:05.144834 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:04:44.895317 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:05:36.966736 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:06:11.294158 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:06:55.691719 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:07:50.209541 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:08:32.137746 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:09:23.996352 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:10:03.732618 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:11:02.241303 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:11:48.630141 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:12:27.346389 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:13:18.275670 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:14:12.249412 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:14:54.274421 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:15:49.059957 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:16:24.219232 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:17:17.217372 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:17:57.172592 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host �[1mSTEP�[0m: Redacting sensitive information from logs E0421 21:18:47.012504 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:19:31.858881 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
Filter through log files | View test history on testgrid
capz-e2e Workload cluster creation Creating a GPU-enabled cluster with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating a VMSS cluster with a single control plane node and an AzureMachinePool with 2 nodes
capz-e2e Workload cluster creation Creating a Windows Enabled cluster With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
capz-e2e Workload cluster creation Creating a Windows enabled VMSS cluster with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
capz-e2e Workload cluster creation Creating a cluster that uses the external cloud provider with a 1 control plane nodes and 2 worker nodes
capz-e2e Workload cluster creation Creating a ipv6 control-plane cluster With ipv6 worker node
capz-e2e Workload cluster creation Creating a private cluster Creates a public management cluster in the same vnet
capz-e2e Workload cluster creation With 3 control-plane nodes and 2 worker nodes
capz-e2e Conformance Tests conformance-tests
capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec in a HA cluster Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd
capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec in a HA cluster using scale in rollout Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd
capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec in a single control plane cluster Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd
capz-e2e Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capz-e2e Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e Running the Cluster API E2E tests Should adopt up-to-date control plane Machines without modification Should adopt up-to-date control plane Machines without modification
capz-e2e Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
... skipping 433 lines ... [1mWith ipv6 worker node[0m [37m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:269[0m INFO: "With ipv6 worker node" started at Thu, 21 Apr 2022 19:42:28 UTC on Ginkgo node 3 of 3 [1mSTEP[0m: Creating namespace "capz-e2e-44wvay" for hosting the cluster Apr 21 19:42:28.900: INFO: starting to create namespace for hosting the "capz-e2e-44wvay" test spec 2022/04/21 19:42:28 failed trying to get namespace (capz-e2e-44wvay):namespaces "capz-e2e-44wvay" not found INFO: Creating namespace capz-e2e-44wvay INFO: Creating event watcher for namespace "capz-e2e-44wvay" Apr 21 19:42:28.972: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-44wvay-ipv6 INFO: Creating the workload cluster with name "capz-e2e-44wvay-ipv6" using the "ipv6" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml ... skipping 93 lines ... [1mSTEP[0m: Fetching activity logs took 619.345565ms [1mSTEP[0m: Dumping all the Cluster API resources in the "capz-e2e-44wvay" namespace [1mSTEP[0m: Deleting all clusters in the capz-e2e-44wvay namespace [1mSTEP[0m: Deleting cluster capz-e2e-44wvay-ipv6 INFO: Waiting for the Cluster capz-e2e-44wvay/capz-e2e-44wvay-ipv6 to be deleted [1mSTEP[0m: Waiting for cluster capz-e2e-44wvay-ipv6 to be deleted [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-proxy-p5wr4, container kube-proxy: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-44wvay-ipv6-control-plane-gh49b, container kube-apiserver: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-node-qhj7d, container calico-node: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/etcd-capz-e2e-44wvay-ipv6-control-plane-kf9dh, container etcd: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-m2h2k, container calico-kube-controllers: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-44wvay-ipv6-control-plane-r95dd, container kube-controller-manager: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/etcd-capz-e2e-44wvay-ipv6-control-plane-r95dd, container etcd: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-proxy-7mwth, container kube-proxy: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-44wvay-ipv6-control-plane-r95dd, container kube-scheduler: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-44wvay-ipv6-control-plane-r95dd, container kube-apiserver: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/coredns-78fcd69978-bnbkv, container coredns: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-44wvay-ipv6-control-plane-kf9dh, container kube-controller-manager: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-node-tn285, container calico-node: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-44wvay-ipv6-control-plane-kf9dh, container kube-apiserver: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-44wvay-ipv6-control-plane-gh49b, container kube-scheduler: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-node-qr4s2, container calico-node: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-44wvay-ipv6-control-plane-kf9dh, container kube-scheduler: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-44wvay-ipv6-control-plane-gh49b, container kube-controller-manager: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-proxy-ht548, container kube-proxy: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-proxy-hp48v, container kube-proxy: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-node-flng5, container calico-node: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/coredns-78fcd69978-xcn2x, container coredns: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/etcd-capz-e2e-44wvay-ipv6-control-plane-gh49b, container etcd: http2: client connection lost [1mSTEP[0m: Deleting namespace used for hosting the "create-workload-cluster" test spec INFO: Deleting namespace capz-e2e-44wvay [1mSTEP[0m: Checking if any resources are left over in Azure for spec "create-workload-cluster" [1mSTEP[0m: Redacting sensitive information from logs INFO: "With ipv6 worker node" ran for 19m3s on Ginkgo node 3 of 3 ... skipping 10 lines ... [1mWith 3 control-plane nodes and 2 worker nodes[0m [37m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:203[0m INFO: "With 3 control-plane nodes and 2 worker nodes" started at Thu, 21 Apr 2022 19:42:28 UTC on Ginkgo node 2 of 3 [1mSTEP[0m: Creating namespace "capz-e2e-wbv495" for hosting the cluster Apr 21 19:42:28.899: INFO: starting to create namespace for hosting the "capz-e2e-wbv495" test spec 2022/04/21 19:42:28 failed trying to get namespace (capz-e2e-wbv495):namespaces "capz-e2e-wbv495" not found INFO: Creating namespace capz-e2e-wbv495 INFO: Creating event watcher for namespace "capz-e2e-wbv495" Apr 21 19:42:28.968: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-wbv495-ha INFO: Creating the workload cluster with name "capz-e2e-wbv495-ha" using the "(default)" template (Kubernetes v1.22.1, 3 control-plane machines, 2 worker machines) INFO: Getting the cluster template yaml ... skipping 67 lines ... Apr 21 19:52:21.628: INFO: starting to delete external LB service web4jutg7-elb Apr 21 19:52:21.780: INFO: starting to delete deployment web4jutg7 Apr 21 19:52:21.889: INFO: starting to delete job curl-to-elb-jobufnodgzb27q [1mSTEP[0m: creating a Kubernetes client to the workload cluster [1mSTEP[0m: Creating development namespace Apr 21 19:52:22.052: INFO: starting to create dev deployment namespace 2022/04/21 19:52:22 failed trying to get namespace (development):namespaces "development" not found 2022/04/21 19:52:22 namespace development does not exist, creating... [1mSTEP[0m: Creating production namespace Apr 21 19:52:22.272: INFO: starting to create prod deployment namespace 2022/04/21 19:52:22 failed trying to get namespace (production):namespaces "production" not found 2022/04/21 19:52:22 namespace production does not exist, creating... [1mSTEP[0m: Creating frontendProd, backend and network-policy pod deployments Apr 21 19:52:22.487: INFO: starting to create frontend-prod deployments Apr 21 19:52:22.599: INFO: starting to create frontend-dev deployments Apr 21 19:52:22.709: INFO: starting to create backend deployments Apr 21 19:52:22.818: INFO: starting to create network-policy deployments ... skipping 11 lines ... [1mSTEP[0m: Ensuring we have outbound internet access from the network-policy pods [1mSTEP[0m: Ensuring we have connectivity from network-policy pods to frontend-prod pods [1mSTEP[0m: Ensuring we have connectivity from network-policy pods to backend pods [1mSTEP[0m: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace Apr 21 19:52:49.489: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace [1mSTEP[0m: Ensuring we no longer have ingress access from the network-policy pods to backend pods curl: (7) Failed to connect to 192.168.89.67 port 80: Connection timed out [1mSTEP[0m: Cleaning up after ourselves Apr 21 19:54:59.716: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves [1mSTEP[0m: Applying a network policy to deny egress access in development namespace Apr 21 19:55:00.147: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace [1mSTEP[0m: Ensuring we no longer have egress access from the network-policy pods to backend pods curl: (7) Failed to connect to 192.168.89.67 port 80: Connection timed out curl: (7) Failed to connect to 192.168.89.67 port 80: Connection timed out [1mSTEP[0m: Cleaning up after ourselves Apr 21 19:59:21.859: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves [1mSTEP[0m: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace Apr 21 19:59:22.294: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace [1mSTEP[0m: Ensuring we have egress access from pods with matching labels [1mSTEP[0m: Ensuring we don't have ingress access from pods without matching labels curl: (7) Failed to connect to 192.168.90.4 port 80: Connection timed out [1mSTEP[0m: Cleaning up after ourselves Apr 21 20:01:35.661: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves [1mSTEP[0m: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace Apr 21 20:01:36.095: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace [1mSTEP[0m: Ensuring we have egress access from pods with matching labels [1mSTEP[0m: Ensuring we don't have ingress access from pods without matching labels curl: (7) Failed to connect to 192.168.89.66 port 80: Connection timed out curl: (7) Failed to connect to 192.168.90.4 port 80: Connection timed out [1mSTEP[0m: Cleaning up after ourselves Apr 21 20:05:59.853: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves [1mSTEP[0m: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels Apr 21 20:06:00.239: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels [1mSTEP[0m: Ensuring we have ingress access from pods with matching labels [1mSTEP[0m: Ensuring we don't have ingress access from pods without matching labels curl: (7) Failed to connect to 192.168.89.67 port 80: Connection timed out [1mSTEP[0m: Cleaning up after ourselves Apr 21 20:08:12.292: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves [1mSTEP[0m: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development Apr 21 20:08:12.713: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development [1mSTEP[0m: Ensuring we don't have ingress access from role:frontend pods in production namespace curl: (7) Failed to connect to 192.168.89.67 port 80: Connection timed out [1mSTEP[0m: Ensuring we have ingress access from role:frontend pods in development namespace [1mSTEP[0m: Dumping logs from the "capz-e2e-wbv495-ha" workload cluster [1mSTEP[0m: Dumping workload cluster capz-e2e-wbv495/capz-e2e-wbv495-ha logs Apr 21 20:10:24.883: INFO: INFO: Collecting logs for node capz-e2e-wbv495-ha-control-plane-z9986 in cluster capz-e2e-wbv495-ha in namespace capz-e2e-wbv495 Apr 21 20:10:44.571: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-wbv495-ha-control-plane-z9986 ... skipping 39 lines ... [1mSTEP[0m: Creating log watcher for controller kube-system/coredns-78fcd69978-md6bb, container coredns [1mSTEP[0m: Creating log watcher for controller kube-system/etcd-capz-e2e-wbv495-ha-control-plane-4l55p, container etcd [1mSTEP[0m: Creating log watcher for controller kube-system/etcd-capz-e2e-wbv495-ha-control-plane-b4lr6, container etcd [1mSTEP[0m: Creating log watcher for controller kube-system/etcd-capz-e2e-wbv495-ha-control-plane-z9986, container etcd [1mSTEP[0m: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-wbv495-ha-control-plane-4l55p, container kube-apiserver [1mSTEP[0m: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-wbv495-ha-control-plane-b4lr6, container kube-apiserver [1mSTEP[0m: Got error while iterating over activity logs for resource group capz-e2e-wbv495-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded [1mSTEP[0m: Fetching activity logs took 30.000656434s [1mSTEP[0m: Dumping all the Cluster API resources in the "capz-e2e-wbv495" namespace [1mSTEP[0m: Deleting all clusters in the capz-e2e-wbv495 namespace [1mSTEP[0m: Deleting cluster capz-e2e-wbv495-ha INFO: Waiting for the Cluster capz-e2e-wbv495/capz-e2e-wbv495-ha to be deleted [1mSTEP[0m: Waiting for cluster capz-e2e-wbv495-ha to be deleted [1mSTEP[0m: Got error while streaming logs for pod kube-system/etcd-capz-e2e-wbv495-ha-control-plane-z9986, container etcd: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-wbv495-ha-control-plane-z9986, container kube-apiserver: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-wbv495-ha-control-plane-b4lr6, container kube-controller-manager: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/coredns-78fcd69978-4bpj4, container coredns: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/etcd-capz-e2e-wbv495-ha-control-plane-b4lr6, container etcd: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-wbv495-ha-control-plane-4l55p, container kube-scheduler: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-node-ljf6q, container calico-node: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-wbv495-ha-control-plane-b4lr6, container kube-apiserver: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-44nlj, container calico-kube-controllers: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-wbv495-ha-control-plane-4l55p, container kube-apiserver: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-node-5r5rn, container calico-node: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-wbv495-ha-control-plane-b4lr6, container kube-scheduler: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-proxy-bzkbd, container kube-proxy: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/coredns-78fcd69978-md6bb, container coredns: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-node-j6vj8, container calico-node: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-node-tm8b4, container calico-node: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-wbv495-ha-control-plane-z9986, container kube-controller-manager: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/etcd-capz-e2e-wbv495-ha-control-plane-4l55p, container etcd: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-proxy-z7sm6, container kube-proxy: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-proxy-82l2t, container kube-proxy: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-wbv495-ha-control-plane-z9986, container kube-scheduler: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-wbv495-ha-control-plane-4l55p, container kube-controller-manager: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-proxy-4c7bb, container kube-proxy: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-proxy-5mp98, container kube-proxy: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-node-hbbzw, container calico-node: http2: client connection lost [1mSTEP[0m: Deleting namespace used for hosting the "create-workload-cluster" test spec INFO: Deleting namespace capz-e2e-wbv495 [1mSTEP[0m: Checking if any resources are left over in Azure for spec "create-workload-cluster" [1mSTEP[0m: Redacting sensitive information from logs INFO: "With 3 control-plane nodes and 2 worker nodes" ran for 38m1s on Ginkgo node 2 of 3 ... skipping 8 lines ... [1mwith a single control plane node and an AzureMachinePool with 2 nodes[0m [37m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:315[0m INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" started at Thu, 21 Apr 2022 20:01:31 UTC on Ginkgo node 3 of 3 [1mSTEP[0m: Creating namespace "capz-e2e-zt64mx" for hosting the cluster Apr 21 20:01:31.654: INFO: starting to create namespace for hosting the "capz-e2e-zt64mx" test spec 2022/04/21 20:01:31 failed trying to get namespace (capz-e2e-zt64mx):namespaces "capz-e2e-zt64mx" not found INFO: Creating namespace capz-e2e-zt64mx INFO: Creating event watcher for namespace "capz-e2e-zt64mx" Apr 21 20:01:31.687: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-zt64mx-vmss INFO: Creating the workload cluster with name "capz-e2e-zt64mx-vmss" using the "machine-pool" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines) INFO: Getting the cluster template yaml ... skipping 84 lines ... Apr 21 20:11:18.868: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-e2e-zt64mx-vmss-mp-0 Apr 21 20:11:19.448: INFO: INFO: Collecting logs for node capz-e2e-zt64mx-vmss-mp-0000001 in cluster capz-e2e-zt64mx-vmss in namespace capz-e2e-zt64mx Apr 21 20:11:32.821: INFO: INFO: Collecting boot logs for VMSS instance 1 of scale set capz-e2e-zt64mx-vmss-mp-0 Failed to get logs for machine pool capz-e2e-zt64mx-vmss-mp-0, cluster capz-e2e-zt64mx/capz-e2e-zt64mx-vmss: opening SSH session: ssh: unexpected packet in response to channel open: <nil> [1mSTEP[0m: Dumping workload cluster capz-e2e-zt64mx/capz-e2e-zt64mx-vmss kube-system pod logs [1mSTEP[0m: Fetching kube-system pod logs took 673.591692ms [1mSTEP[0m: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-bfttj, container calico-kube-controllers [1mSTEP[0m: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-zt64mx-vmss-control-plane-xtf52, container kube-controller-manager [1mSTEP[0m: Creating log watcher for controller kube-system/calico-node-6x8nq, container calico-node [1mSTEP[0m: Dumping workload cluster capz-e2e-zt64mx/capz-e2e-zt64mx-vmss Azure activity log ... skipping 10 lines ... [1mSTEP[0m: Fetching activity logs took 588.172014ms [1mSTEP[0m: Dumping all the Cluster API resources in the "capz-e2e-zt64mx" namespace [1mSTEP[0m: Deleting all clusters in the capz-e2e-zt64mx namespace [1mSTEP[0m: Deleting cluster capz-e2e-zt64mx-vmss INFO: Waiting for the Cluster capz-e2e-zt64mx/capz-e2e-zt64mx-vmss to be deleted [1mSTEP[0m: Waiting for cluster capz-e2e-zt64mx-vmss to be deleted [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-proxy-t7bfz, container kube-proxy: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-node-6qbrp, container calico-node: http2: client connection lost [1mSTEP[0m: Error starting logs stream for pod kube-system/calico-node-6x8nq, container calico-node: Get "https://capz-e2e-zt64mx-vmss-8c06bf1d.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/kube-system/pods/calico-node-6x8nq/log?container=calico-node&follow=true": http2: client connection lost [1mSTEP[0m: Error starting logs stream for pod kube-system/kube-proxy-lmqx7, container kube-proxy: Get "https://capz-e2e-zt64mx-vmss-8c06bf1d.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/kube-system/pods/kube-proxy-lmqx7/log?container=kube-proxy&follow=true": http2: client connection lost [1mSTEP[0m: Deleting namespace used for hosting the "create-workload-cluster" test spec INFO: Deleting namespace capz-e2e-zt64mx [1mSTEP[0m: Checking if any resources are left over in Azure for spec "create-workload-cluster" [1mSTEP[0m: Redacting sensitive information from logs INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" ran for 23m17s on Ginkgo node 3 of 3 ... skipping 10 lines ... [1mCreates a public management cluster in the same vnet[0m [37m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:141[0m INFO: "Creates a public management cluster in the same vnet" started at Thu, 21 Apr 2022 19:42:28 UTC on Ginkgo node 1 of 3 [1mSTEP[0m: Creating namespace "capz-e2e-su0eo7" for hosting the cluster Apr 21 19:42:28.857: INFO: starting to create namespace for hosting the "capz-e2e-su0eo7" test spec 2022/04/21 19:42:28 failed trying to get namespace (capz-e2e-su0eo7):namespaces "capz-e2e-su0eo7" not found INFO: Creating namespace capz-e2e-su0eo7 INFO: Creating event watcher for namespace "capz-e2e-su0eo7" Apr 21 19:42:28.889: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-su0eo7-public-custom-vnet [1mSTEP[0m: creating Azure clients with the workload cluster's subscription [1mSTEP[0m: creating a resource group ... skipping 100 lines ... [1mSTEP[0m: Creating log watcher for controller kube-system/calico-node-ct9dw, container calico-node [1mSTEP[0m: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-su0eo7-public-custom-vnet-control-plane-p6255, container kube-apiserver [1mSTEP[0m: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-su0eo7-public-custom-vnet-control-plane-p6255, container kube-controller-manager [1mSTEP[0m: Creating log watcher for controller kube-system/coredns-78fcd69978-h28mk, container coredns [1mSTEP[0m: Creating log watcher for controller kube-system/coredns-78fcd69978-h9kw9, container coredns [1mSTEP[0m: Creating log watcher for controller kube-system/calico-node-wcm5z, container calico-node [1mSTEP[0m: Got error while iterating over activity logs for resource group capz-e2e-su0eo7-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded [1mSTEP[0m: Fetching activity logs took 30.003078412s [1mSTEP[0m: Dumping all the Cluster API resources in the "capz-e2e-su0eo7" namespace [1mSTEP[0m: Deleting all clusters in the capz-e2e-su0eo7 namespace [1mSTEP[0m: Deleting cluster capz-e2e-su0eo7-public-custom-vnet INFO: Waiting for the Cluster capz-e2e-su0eo7/capz-e2e-su0eo7-public-custom-vnet to be deleted [1mSTEP[0m: Waiting for cluster capz-e2e-su0eo7-public-custom-vnet to be deleted W0421 20:29:36.461299 24227 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding I0421 20:30:07.426389 24227 trace.go:205] Trace[1669841178]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (21-Apr-2022 20:29:37.425) (total time: 30000ms): Trace[1669841178]: [30.000713231s] [30.000713231s] END E0421 20:30:07.426456 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp 20.117.218.30:6443: i/o timeout I0421 20:30:39.797024 24227 trace.go:205] Trace[954037558]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (21-Apr-2022 20:30:09.795) (total time: 30001ms): Trace[954037558]: [30.001279944s] [30.001279944s] END E0421 20:30:39.797151 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp 20.117.218.30:6443: i/o timeout I0421 20:31:14.186382 24227 trace.go:205] Trace[687020630]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (21-Apr-2022 20:30:44.185) (total time: 30000ms): Trace[687020630]: [30.000662843s] [30.000662843s] END E0421 20:31:14.186442 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp 20.117.218.30:6443: i/o timeout I0421 20:31:51.335372 24227 trace.go:205] Trace[1393835487]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (21-Apr-2022 20:31:21.334) (total time: 30000ms): Trace[1393835487]: [30.000961322s] [30.000961322s] END E0421 20:31:51.335443 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp 20.117.218.30:6443: i/o timeout I0421 20:32:41.237324 24227 trace.go:205] Trace[158909343]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (21-Apr-2022 20:32:11.236) (total time: 30000ms): Trace[158909343]: [30.000983745s] [30.000983745s] END E0421 20:32:41.237393 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp 20.117.218.30:6443: i/o timeout I0421 20:33:53.046965 24227 trace.go:205] Trace[1502171275]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (21-Apr-2022 20:33:23.046) (total time: 30000ms): Trace[1502171275]: [30.000681812s] [30.000681812s] END E0421 20:33:53.047043 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp 20.117.218.30:6443: i/o timeout [1mSTEP[0m: Deleting namespace used for hosting the "create-workload-cluster" test spec INFO: Deleting namespace capz-e2e-su0eo7 [1mSTEP[0m: Running additional cleanup for the "create-workload-cluster" test spec Apr 21 20:34:56.352: INFO: deleting an existing virtual network "custom-vnet" I0421 20:35:03.557491 24227 trace.go:205] Trace[626668323]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (21-Apr-2022 20:34:33.556) (total time: 30000ms): Trace[626668323]: [30.00087187s] [30.00087187s] END E0421 20:35:03.557585 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp 20.117.218.30:6443: i/o timeout Apr 21 20:35:07.876: INFO: deleting an existing route table "node-routetable" Apr 21 20:35:10.493: INFO: deleting an existing network security group "node-nsg" Apr 21 20:35:21.097: INFO: deleting an existing network security group "control-plane-nsg" Apr 21 20:35:31.671: INFO: verifying the existing resource group "capz-e2e-su0eo7-public-custom-vnet" is empty Apr 21 20:35:31.860: INFO: deleting the existing resource group "capz-e2e-su0eo7-public-custom-vnet" E0421 20:35:59.322565 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:36:45.297741 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host [1mSTEP[0m: Checking if any resources are left over in Azure for spec "create-workload-cluster" [1mSTEP[0m: Redacting sensitive information from logs E0421 20:37:34.944910 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host INFO: "Creates a public management cluster in the same vnet" ran for 55m43s on Ginkgo node 1 of 3 [32m• [SLOW TEST:3342.753 seconds][0m Workload cluster creation [90m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43[0m ... skipping 6 lines ... [1mwith a single control plane node and 1 node[0m [37m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:377[0m INFO: "with a single control plane node and 1 node" started at Thu, 21 Apr 2022 20:20:29 UTC on Ginkgo node 2 of 3 [1mSTEP[0m: Creating namespace "capz-e2e-b1vofu" for hosting the cluster Apr 21 20:20:29.475: INFO: starting to create namespace for hosting the "capz-e2e-b1vofu" test spec 2022/04/21 20:20:29 failed trying to get namespace (capz-e2e-b1vofu):namespaces "capz-e2e-b1vofu" not found INFO: Creating namespace capz-e2e-b1vofu INFO: Creating event watcher for namespace "capz-e2e-b1vofu" Apr 21 20:20:29.531: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-b1vofu-gpu INFO: Creating the workload cluster with name "capz-e2e-b1vofu-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml ... skipping 80 lines ... [1mwith a 1 control plane nodes and 2 worker nodes[0m [37m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:419[0m INFO: "with a 1 control plane nodes and 2 worker nodes" started at Thu, 21 Apr 2022 20:24:48 UTC on Ginkgo node 3 of 3 [1mSTEP[0m: Creating namespace "capz-e2e-x4jxi4" for hosting the cluster Apr 21 20:24:48.851: INFO: starting to create namespace for hosting the "capz-e2e-x4jxi4" test spec 2022/04/21 20:24:48 failed trying to get namespace (capz-e2e-x4jxi4):namespaces "capz-e2e-x4jxi4" not found INFO: Creating namespace capz-e2e-x4jxi4 INFO: Creating event watcher for namespace "capz-e2e-x4jxi4" Apr 21 20:24:48.895: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-x4jxi4-oot INFO: Creating the workload cluster with name "capz-e2e-x4jxi4-oot" using the "external-cloud-provider" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines) INFO: Getting the cluster template yaml ... skipping 98 lines ... [1mSTEP[0m: Fetching activity logs took 579.009437ms [1mSTEP[0m: Dumping all the Cluster API resources in the "capz-e2e-x4jxi4" namespace [1mSTEP[0m: Deleting all clusters in the capz-e2e-x4jxi4 namespace [1mSTEP[0m: Deleting cluster capz-e2e-x4jxi4-oot INFO: Waiting for the Cluster capz-e2e-x4jxi4/capz-e2e-x4jxi4-oot to be deleted [1mSTEP[0m: Waiting for cluster capz-e2e-x4jxi4-oot to be deleted [1mSTEP[0m: Got error while streaming logs for pod kube-system/cloud-node-manager-45lxt, container cloud-node-manager: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/cloud-node-manager-4sstb, container cloud-node-manager: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-proxy-tvqhd, container kube-proxy: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-node-5kbhv, container calico-node: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-proxy-8vjb6, container kube-proxy: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/calico-node-mtskr, container calico-node: http2: client connection lost [1mSTEP[0m: Deleting namespace used for hosting the "create-workload-cluster" test spec INFO: Deleting namespace capz-e2e-x4jxi4 [1mSTEP[0m: Checking if any resources are left over in Azure for spec "create-workload-cluster" [1mSTEP[0m: Redacting sensitive information from logs INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 22m55s on Ginkgo node 3 of 3 ... skipping 10 lines ... [1mWith 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node[0m [37m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496[0m INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Thu, 21 Apr 2022 20:44:17 UTC on Ginkgo node 2 of 3 [1mSTEP[0m: Creating namespace "capz-e2e-2f49m0" for hosting the cluster Apr 21 20:44:17.599: INFO: starting to create namespace for hosting the "capz-e2e-2f49m0" test spec 2022/04/21 20:44:17 failed trying to get namespace (capz-e2e-2f49m0):namespaces "capz-e2e-2f49m0" not found INFO: Creating namespace capz-e2e-2f49m0 INFO: Creating event watcher for namespace "capz-e2e-2f49m0" Apr 21 20:44:17.647: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-2f49m0-win-ha INFO: Creating the workload cluster with name "capz-e2e-2f49m0-win-ha" using the "windows" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml ... skipping 175 lines ... [1mwith a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node[0m [37m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543[0m INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Thu, 21 Apr 2022 20:47:44 UTC on Ginkgo node 3 of 3 [1mSTEP[0m: Creating namespace "capz-e2e-h5k22z" for hosting the cluster Apr 21 20:47:44.174: INFO: starting to create namespace for hosting the "capz-e2e-h5k22z" test spec 2022/04/21 20:47:44 failed trying to get namespace (capz-e2e-h5k22z):namespaces "capz-e2e-h5k22z" not found INFO: Creating namespace capz-e2e-h5k22z INFO: Creating event watcher for namespace "capz-e2e-h5k22z" Apr 21 20:47:44.213: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-h5k22z-win-vmss INFO: Creating the workload cluster with name "capz-e2e-h5k22z-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml ... skipping 123 lines ... [1mSTEP[0m: Creating log watcher for controller kube-system/kube-proxy-windows-sdtsj, container kube-proxy [1mSTEP[0m: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-h5k22z-win-vmss-control-plane-5558m, container kube-scheduler [1mSTEP[0m: Creating log watcher for controller kube-system/coredns-78fcd69978-d594v, container coredns [1mSTEP[0m: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-h5k22z-win-vmss-control-plane-5558m, container kube-controller-manager [1mSTEP[0m: Creating log watcher for controller kube-system/kube-flannel-ds-windows-amd64-jqlwg, container kube-flannel [1mSTEP[0m: Creating log watcher for controller kube-system/kube-proxy-xgctf, container kube-proxy [1mSTEP[0m: Got error while iterating over activity logs for resource group capz-e2e-h5k22z-win-vmss: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded [1mSTEP[0m: Fetching activity logs took 30.001355359s [1mSTEP[0m: Dumping all the Cluster API resources in the "capz-e2e-h5k22z" namespace [1mSTEP[0m: Deleting all clusters in the capz-e2e-h5k22z namespace [1mSTEP[0m: Deleting cluster capz-e2e-h5k22z-win-vmss INFO: Waiting for the Cluster capz-e2e-h5k22z/capz-e2e-h5k22z-win-vmss to be deleted [1mSTEP[0m: Waiting for cluster capz-e2e-h5k22z-win-vmss to be deleted [1mSTEP[0m: Got error while streaming logs for pod kube-system/coredns-78fcd69978-d594v, container coredns: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/etcd-capz-e2e-h5k22z-win-vmss-control-plane-5558m, container etcd: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-proxy-windows-sdtsj, container kube-proxy: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-h5k22z-win-vmss-control-plane-5558m, container kube-apiserver: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-proxy-xgctf, container kube-proxy: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-h5k22z-win-vmss-control-plane-5558m, container kube-scheduler: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-h5k22z-win-vmss-control-plane-5558m, container kube-controller-manager: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/coredns-78fcd69978-w5fg4, container coredns: http2: client connection lost [1mSTEP[0m: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-xtnfl, container kube-flannel: http2: client connection lost [1mSTEP[0m: Deleting namespace used for hosting the "create-workload-cluster" test spec INFO: Deleting namespace capz-e2e-h5k22z [1mSTEP[0m: Checking if any resources are left over in Azure for spec "create-workload-cluster" [1mSTEP[0m: Redacting sensitive information from logs INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 27m23s on Ginkgo node 3 of 3 ... skipping 10 lines ... [1mwith a single control plane node and 1 node[0m [37m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454[0m INFO: "with a single control plane node and 1 node" started at Thu, 21 Apr 2022 20:38:11 UTC on Ginkgo node 1 of 3 [1mSTEP[0m: Creating namespace "capz-e2e-2ryiq3" for hosting the cluster Apr 21 20:38:11.613: INFO: starting to create namespace for hosting the "capz-e2e-2ryiq3" test spec 2022/04/21 20:38:11 failed trying to get namespace (capz-e2e-2ryiq3):namespaces "capz-e2e-2ryiq3" not found INFO: Creating namespace capz-e2e-2ryiq3 INFO: Creating event watcher for namespace "capz-e2e-2ryiq3" Apr 21 20:38:11.644: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-2ryiq3-aks INFO: Creating the workload cluster with name "capz-e2e-2ryiq3-aks" using the "aks-multi-tenancy" template (Kubernetes v1.22.6, 1 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml ... skipping 7 lines ... machinepool.cluster.x-k8s.io/agentpool1 created azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created INFO: Waiting for the cluster infrastructure to be provisioned [1mSTEP[0m: Waiting for cluster to enter the provisioned phase E0421 20:38:28.114164 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:39:09.504957 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:39:52.511927 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:40:33.521566 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:41:24.993470 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:41:56.972501 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:42:35.989643 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:43:08.473307 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:43:55.681559 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host INFO: Waiting for control plane to be initialized Apr 21 20:44:03.615: INFO: Waiting for the first control plane machine managed by capz-e2e-2ryiq3/capz-e2e-2ryiq3-aks to be provisioned [1mSTEP[0m: Waiting for atleast one control plane node to exist INFO: Waiting for control plane to be ready Apr 21 20:44:03.652: INFO: Waiting for the first control plane machine managed by capz-e2e-2ryiq3/capz-e2e-2ryiq3-aks to be provisioned [1mSTEP[0m: Waiting for all control plane nodes to exist ... skipping 13 lines ... [1mSTEP[0m: time sync OK for host aks-agentpool1-58290704-vmss000000 [1mSTEP[0m: time sync OK for host aks-agentpool1-58290704-vmss000000 [1mSTEP[0m: Dumping logs from the "capz-e2e-2ryiq3-aks" workload cluster [1mSTEP[0m: Dumping workload cluster capz-e2e-2ryiq3/capz-e2e-2ryiq3-aks logs Apr 21 20:44:21.971: INFO: INFO: Collecting logs for node aks-agentpool1-58290704-vmss000000 in cluster capz-e2e-2ryiq3-aks in namespace capz-e2e-2ryiq3 E0421 20:44:52.458657 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:45:23.938987 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:46:05.333474 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host Apr 21 20:46:32.625: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0 Failed to get logs for machine pool agentpool0, cluster capz-e2e-2ryiq3/capz-e2e-2ryiq3-aks: [dialing public load balancer at capz-e2e-2ryiq3-aks-c9bb4370.hcp.uksouth.azmk8s.io: dial tcp 51.132.168.222:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."] Apr 21 20:46:33.178: INFO: INFO: Collecting logs for node aks-agentpool1-58290704-vmss000000 in cluster capz-e2e-2ryiq3-aks in namespace capz-e2e-2ryiq3 E0421 20:46:59.049543 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:47:45.615460 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:48:23.952485 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host Apr 21 20:48:43.697: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0 Failed to get logs for machine pool agentpool1, cluster capz-e2e-2ryiq3/capz-e2e-2ryiq3-aks: [dialing public load balancer at capz-e2e-2ryiq3-aks-c9bb4370.hcp.uksouth.azmk8s.io: dial tcp 51.132.168.222:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."] [1mSTEP[0m: Dumping workload cluster capz-e2e-2ryiq3/capz-e2e-2ryiq3-aks kube-system pod logs [1mSTEP[0m: Fetching kube-system pod logs took 990.668393ms [1mSTEP[0m: Dumping workload cluster capz-e2e-2ryiq3/capz-e2e-2ryiq3-aks Azure activity log [1mSTEP[0m: Creating log watcher for controller kube-system/azure-ip-masq-agent-v78rh, container azure-ip-masq-agent [1mSTEP[0m: Creating log watcher for controller kube-system/cloud-node-manager-trmrk, container cloud-node-manager [1mSTEP[0m: Creating log watcher for controller kube-system/coredns-69c47794-289lm, container coredns ... skipping 21 lines ... [1mSTEP[0m: Fetching activity logs took 492.914326ms [1mSTEP[0m: Dumping all the Cluster API resources in the "capz-e2e-2ryiq3" namespace [1mSTEP[0m: Deleting all clusters in the capz-e2e-2ryiq3 namespace [1mSTEP[0m: Deleting cluster capz-e2e-2ryiq3-aks INFO: Waiting for the Cluster capz-e2e-2ryiq3/capz-e2e-2ryiq3-aks to be deleted [1mSTEP[0m: Waiting for cluster capz-e2e-2ryiq3-aks to be deleted E0421 20:49:04.984340 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:50:01.786047 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:50:36.791869 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:51:34.787557 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:52:17.569000 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:52:50.538634 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:53:21.486666 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:53:54.056062 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:54:38.738434 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:55:21.762226 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:56:03.601553 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:56:42.636917 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:57:27.070514 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:58:17.620219 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 20:59:16.114341 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:00:08.298608 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:00:42.795556 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:01:18.789451 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:02:11.232807 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:02:46.857986 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:03:23.560821 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:04:05.144834 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:04:44.895317 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:05:36.966736 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:06:11.294158 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:06:55.691719 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:07:50.209541 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:08:32.137746 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:09:23.996352 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:10:03.732618 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:11:02.241303 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:11:48.630141 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:12:27.346389 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:13:18.275670 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:14:12.249412 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:14:54.274421 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:15:49.059957 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:16:24.219232 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:17:17.217372 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:17:57.172592 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host [1mSTEP[0m: Redacting sensitive information from logs E0421 21:18:47.012504 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host E0421 21:19:31.858881 24227 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-su0eo7/events?resourceVersion=8788": dial tcp: lookup capz-e2e-su0eo7-public-custom-vnet-54b624ad.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host [91m[1m• Failure in Spec Teardown (AfterEach) [2520.432 seconds][0m Workload cluster creation [90m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43[0m [91m[1mCreating an AKS cluster [AfterEach][0m ... skipping 51 lines ... [1mSTEP[0m: Tearing down the management cluster [91m[1mSummarizing 1 Failure:[0m [91m[1m[Fail] [0m[90mWorkload cluster creation [0m[91m[1m[AfterEach] Creating an AKS cluster [0m[90mwith a single control plane node and 1 node [0m [37m/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.8-0.20220215165403-0234afe87ffe/framework/cluster_helpers.go:165[0m [1m[91mRan 9 of 22 Specs in 5983.070 seconds[0m [1m[91mFAIL![0m -- [32m[1m8 Passed[0m | [91m[1m1 Failed[0m | [33m[1m0 Pending[0m | [36m[1m13 Skipped[0m Ginkgo ran 1 suite in 1h41m10.912953483s Test Suite Failed make[1]: *** [Makefile:173: test-e2e-run] Error 1 make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure' make: *** [Makefile:181: test-e2e] Error 2 ================ REDACTING LOGS ================ All sensitive variables are redacted + EXIT_VALUE=2 + set +o xtrace Cleaning up after docker in docker. ================================================================================ ... skipping 5 lines ...