Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2022-09-28 09:31
Elapsed: 25m20s
Revision: main

No test failures recorded: the job failed during cluster creation, before any tests ran.


Error lines from build-log.txt

... skipping 704 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
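Note: the NotFound error above is expected on a fresh run. A minimal sketch of the delete/create/label sequence that produces this output (the exact contents of create-identity-secret.sh, including the secret key, env var, and label, are assumptions):

    # Drop any stale secret; kubectl prints the NotFound error when none exists yet
    kubectl delete secret cluster-identity-secret
    # Recreate it from the service principal password (env var name is an assumption)
    kubectl create secret generic cluster-identity-secret --from-literal=clientSecret="${AZURE_CLIENT_SECRET}"
    # Label it (label key assumed) so clusterctl move treats it as part of the cluster
    kubectl label secret cluster-identity-secret clusterctl.cluster.x-k8s.io/move=true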
# Create customized cloud provider configs
./hack/create-custom-cloud-provider-config.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
... skipping 144 lines ...
# Get kubeconfig and store it locally.
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets capz-drr9mf-kubeconfig -o json | jq -r .data.value | base64 --decode > ./kubeconfig
timeout --foreground 600 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig get nodes | grep control-plane; do sleep 1; done"
Unable to connect to the server: dial tcp 20.126.183.231:6443: i/o timeout
Unable to connect to the server: dial tcp 20.126.183.231:6443: i/o timeout
Unable to connect to the server: dial tcp 20.126.183.231:6443: i/o timeout
make[1]: *** [Makefile:312: create-workload-cluster] Error 124
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:339: create-cluster] Error 2
Unable to connect to the server: dial tcp 20.126.183.231:6443: i/o timeout
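Note: the wait loop above is wrapped in GNU timeout, which exits with status 124 when its 600-second deadline expires; make surfaces that as Error 124 for create-workload-cluster, and the parent make reports its generic failure status as Error 2 for create-cluster. The kubeconfig retrieval itself succeeded: Cluster API stores a workload cluster's kubeconfig in a secret named <cluster>-kubeconfig, base64-encoded under .data.value, which is what the jq/base64 pipeline decodes. An equivalent retrieval, assuming clusterctl is available:

    # Fetch the workload cluster kubeconfig via clusterctl instead of kubectl+jq+base64
    clusterctl get kubeconfig capz-drr9mf --namespace default > ./kubeconfig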
Collecting logs for cluster capz-drr9mf in namespace default and dumping logs to /logs/artifacts
INFO: Creating log watcher for controller capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-695d89688-rxdgq, container manager
INFO: Creating log watcher for controller capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager, pod capi-kubeadm-control-plane-controller-manager-7d44dcdd44-bxg7p, container manager
INFO: Creating log watcher for controller capi-system/capi-controller-manager, pod capi-controller-manager-846fb4999f-lssm4, container manager
INFO: Creating log watcher for controller capz-system/capz-controller-manager, pod capz-controller-manager-7ccbdbc8fd-s66sl, container manager
STEP: Dumping workload cluster default/capz-drr9mf logs
Sep 28 09:51:31.553: INFO: Collecting logs for Linux node capz-drr9mf-control-plane-ttvfv in cluster capz-drr9mf in namespace default

Sep 28 09:52:31.555: INFO: Collecting boot logs for AzureMachine capz-drr9mf-control-plane-ttvfv

Failed to get logs for machine capz-drr9mf-control-plane-xh626, cluster default/capz-drr9mf: open /etc/azure-ssh/azure-ssh: no such file or directory
Sep 28 09:52:32.966: INFO: Collecting logs for Linux node capz-drr9mf-md-0-d6q4g in cluster capz-drr9mf in namespace default

Sep 28 09:53:32.968: INFO: Collecting boot logs for AzureMachine capz-drr9mf-md-0-d6q4g

Failed to get logs for machine capz-drr9mf-md-0-66b6d9566d-5xtd8, cluster default/capz-drr9mf: [open /etc/azure-ssh/azure-ssh: no such file or directory, Unable to collect VM Boot Diagnostic logs: AzureMachine provider ID is nil]
Sep 28 09:53:33.000: INFO: Collecting logs for Linux node capz-drr9mf-md-0-p9bbq in cluster capz-drr9mf in namespace default

Sep 28 09:54:33.003: INFO: Collecting boot logs for AzureMachine capz-drr9mf-md-0-p9bbq

Failed to get logs for machine capz-drr9mf-md-0-66b6d9566d-qb7nf, cluster default/capz-drr9mf: [open /etc/azure-ssh/azure-ssh: no such file or directory, Unable to collect VM Boot Diagnostic logs: AzureMachine provider ID is nil]
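Note: both error messages above reflect the log collector's two fallback paths (inferred from the output; the collector internals are an assumption): SSH first, using a key it expects at /etc/azure-ssh/azure-ssh, which is not mounted in this job, then Azure boot diagnostics, which requires the VM's provider ID, still nil here presumably because these worker machines never had a VM bound. A quick way to check the latter against the management cluster:

    # Print each AzureMachine's provider ID; an empty column means no VM was ever bound
    kubectl get azuremachines -n default -o custom-columns=NAME:.metadata.name,PROVIDERID:.spec.providerID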
STEP: Dumping workload cluster default/capz-drr9mf kube-system pod logs
panic: Failed to get controller-runtime client
Unexpected error:
    <*url.Error | 0xc00232e000>: {
        Op: "Get",
        URL: "https://capz-drr9mf-b8380d79.westeurope.cloudapp.azure.com:6443/api?timeout=32s",
        Err: <*net.OpError | 0xc0010de000>{
            Op: "dial",
            Net: "tcp",
            Source: nil,
... skipping 6 lines ...
        },
    }
    Get "https://capz-drr9mf-b8380d79.westeurope.cloudapp.azure.com:6443/api?timeout=32s": dial tcp 20.126.183.231:6443: i/o timeout
occurred
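Note: this panic has the same root cause as the earlier kubectl failures: the log dumper builds a controller-runtime client against the workload cluster's API endpoint (capz-drr9mf-b8380d79.westeurope.cloudapp.azure.com, resolving to 20.126.183.231:6443), the dial times out just as before, and the Fail handler (test/logger.go:36 in the stack below) turns the failed gomega assertion into a panic, aborting pod-log collection.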

goroutine 1 [running]:
main.Fail({0xc000d57600?, 0xc00102c120?}, {0xc00232e000?, 0xc000d57340?, 0x1cca79a?})
	/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/logger.go:36 +0x2d
github.com/onsi/gomega/internal.(*Assertion).match(0xc000d76000, {0x311bf38, 0x442c8b0}, 0x0, {0xc000d4a0a0, 0x1, 0x1})
	/home/prow/go/pkg/mod/github.com/onsi/gomega@v1.18.1/internal/assertion.go:100 +0x1f0
github.com/onsi/gomega/internal.(*Assertion).ToNot(0xc000d76000, {0x311bf38, 0x442c8b0}, {0xc000d4a0a0, 0x1, 0x1})
	/home/prow/go/pkg/mod/github.com/onsi/gomega@v1.18.1/internal/assertion.go:63 +0x91
sigs.k8s.io/cluster-api/test/framework.(*clusterProxy).GetClient(0xc0010c7800)
... skipping 19 lines ...