Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2022-09-19 18:03
Elapsed: 29m32s
Revision: main

No Test Failures!


Error lines from build-log.txt

... skipping 778 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
# Create customized cloud provider configs
./hack/create-custom-cloud-provider-config.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
... skipping 146 lines ...
# Get kubeconfig and store it locally.
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets capz-u9b8gv-kubeconfig -o json | jq -r .data.value | base64 --decode > ./kubeconfig
timeout --foreground 600 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig get nodes | grep control-plane; do sleep 1; done"
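For reference, the kubeconfig retrieval above extracts the secret's `.data.value` field with `jq` and base64-decodes it to a local file. A minimal local sketch of that same decoding pipeline, using a mocked secret JSON in place of the real `kubectl get secrets ... -o json` output (the kubeconfig contents below are made up):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Mock of what `kubectl get secrets <name>-kubeconfig -o json` returns:
# Kubernetes stores secret values base64-encoded under .data.<key>.
# The kubeconfig contents here are hypothetical.
encoded=$(printf 'apiVersion: v1\nkind: Config\n' | base64 -w0)
secret_json='{"data":{"value":"'"$encoded"'"}}'

# Same pipeline as in the log: extract .data.value, base64-decode,
# and write the result to a local kubeconfig file.
echo "$secret_json" | jq -r .data.value | base64 --decode > ./kubeconfig

head -n1 ./kubeconfig
```

Note that `base64 -w0` (disable line wrapping) is GNU coreutils syntax; BSD/macOS `base64` omits that flag.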
Unable to connect to the server: dial tcp 20.14.68.10:6443: i/o timeout
Unable to connect to the server: dial tcp 20.14.68.10:6443: i/o timeout
Unable to connect to the server: dial tcp 20.14.68.10:6443: i/o timeout
make[1]: *** [Makefile:312: create-workload-cluster] Error 124
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:339: create-cluster] Error 2
Unable to connect to the server: dial tcp 20.14.68.10:6443: i/o timeout
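The `Error 124` from `make` above is the exit status propagated from the `timeout --foreground 600 ...` wait loop: GNU coreutils `timeout` exits with status 124 when the wrapped command is killed for exceeding its time limit, i.e. the control-plane node never became reachable within 600 seconds. A quick illustration of that convention:

```shell
#!/usr/bin/env bash
# GNU timeout returns 124 when the wrapped command runs past the limit.
timeout 1 sleep 5 || status=$?
echo "exit status: ${status}"   # prints: exit status: 124
```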
Collecting logs for cluster capz-u9b8gv in namespace default and dumping logs to /logs/artifacts
INFO: Creating log watcher for controller capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-695d89688-gwzfd, container manager
INFO: Creating log watcher for controller capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager, pod capi-kubeadm-control-plane-controller-manager-7d44dcdd44-q4dbm, container manager
INFO: Creating log watcher for controller capi-system/capi-controller-manager, pod capi-controller-manager-846fb4999f-qpgcb, container manager
INFO: Creating log watcher for controller capz-system/capz-controller-manager, pod capz-controller-manager-676fd974f6-qgw8t, container manager
STEP: Dumping workload cluster default/capz-u9b8gv logs
Sep 19 18:23:36.194: INFO: Collecting logs for Linux node capz-u9b8gv-control-plane-br9fw in cluster capz-u9b8gv in namespace default

Sep 19 18:30:10.571: INFO: Collecting boot logs for AzureMachine capz-u9b8gv-control-plane-br9fw

Failed to get logs for machine capz-u9b8gv-control-plane-l65qh, cluster default/capz-u9b8gv: [dialing public load balancer at capz-u9b8gv-e1b20475.westus3.cloudapp.azure.com: dial tcp 20.14.68.10:22: connect: connection timed out, Unable to collect VM Boot Diagnostic logs: AzureMachine provider ID is nil]
STEP: Dumping workload cluster default/capz-u9b8gv kube-system pod logs
panic: Failed to get controller-runtime client
Unexpected error:
    <*url.Error | 0xc000a85560>: {
        Op: "Get",
        URL: "https://capz-u9b8gv-e1b20475.westus3.cloudapp.azure.com:6443/api?timeout=32s",
        Err: <*net.OpError | 0xc0013c5d10>{
            Op: "dial",
            Net: "tcp",
            Source: nil,
... skipping 2 lines ...
        },
    }
    Get "https://capz-u9b8gv-e1b20475.westus3.cloudapp.azure.com:6443/api?timeout=32s": dial tcp 20.14.68.10:6443: i/o timeout
occurred

goroutine 1 [running]:
main.Fail({0xc000bbe500?, 0xc000d3b650?}, {0xc000a85560?, 0xc000bbe280?, 0x1cca79a?})
	/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/logger.go:36 +0x2d
github.com/onsi/gomega/internal.(*Assertion).match(0xc000b6f4c0, {0x311bf38, 0x442c8b0}, 0x0, {0xc0004684f0, 0x1, 0x1})
	/home/prow/go/pkg/mod/github.com/onsi/gomega@v1.18.1/internal/assertion.go:100 +0x1f0
github.com/onsi/gomega/internal.(*Assertion).ToNot(0xc000b6f4c0, {0x311bf38, 0x442c8b0}, {0xc0004684f0, 0x1, 0x1})
	/home/prow/go/pkg/mod/github.com/onsi/gomega@v1.18.1/internal/assertion.go:63 +0x91
sigs.k8s.io/cluster-api/test/framework.(*clusterProxy).GetClient(0xc000878800)
... skipping 19 lines ...