PR | edreed: [V2] build: disable provenance attestation to work around docker issue
Result | FAILURE
Tests | 0 failed / 0 succeeded
Started |
Elapsed | 20m53s
Revision | 5ea95d0ebda60f445b259c9f22f01aee679e594a
Refs | 1712
... skipping 802 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
# Create customized cloud provider configs
./hack/create-custom-cloud-provider-config.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
... skipping 127 lines ...
# Wait for the kubeconfig to become available.
timeout --foreground 300 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets | grep capz-bdlj6u-kubeconfig; do sleep 1; done"
capz-bdlj6u-kubeconfig   cluster.x-k8s.io/secret   1   0s
# Get kubeconfig and store it locally.
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets capz-bdlj6u-kubeconfig -o json | jq -r .data.value | base64 --decode > ./kubeconfig
timeout --foreground 600 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig get nodes | grep control-plane; do sleep 1; done"
error: the server doesn't have a resource type "nodes"
No resources found
No resources found
capz-bdlj6u-control-plane-glf89   NotReady   <none>   1s   v1.24.6
run "/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig ..." to work with the new target cluster
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
NAME   STATUS   ROLES   AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE   KERNEL-VERSION   CONTAINER-RUNTIME
... skipping 40 lines ...
Feb  1 23:49:37.234: INFO: Creating log watcher for controller kube-system/kube-proxy-4f8kk, container kube-proxy
Feb  1 23:49:37.234: INFO: Describing Pod kube-system/kube-proxy-4f8kk
Feb  1 23:49:37.636: INFO: Describing Pod kube-system/kube-proxy-6hz2k
Feb  1 23:49:37.636: INFO: Creating log watcher for controller kube-system/kube-proxy-6hz2k, container kube-proxy
Feb  1 23:49:38.037: INFO: Creating log watcher for controller kube-system/kube-proxy-gxtcb, container kube-proxy
Feb  1 23:49:38.037: INFO: Describing Pod kube-system/kube-proxy-gxtcb
Feb  1 23:49:38.087: INFO: Error starting logs stream for pod kube-system/kube-proxy-gxtcb, container kube-proxy: container "kube-proxy" in pod "kube-proxy-gxtcb" is waiting to start: ContainerCreating
Feb  1 23:49:38.436: INFO: Creating log watcher for controller kube-system/kube-proxy-h2r45, container kube-proxy
Feb  1 23:49:38.436: INFO: Describing Pod kube-system/kube-proxy-h2r45
Feb  1 23:49:38.834: INFO: Fetching kube-system pod logs took 2.78007761s
Feb  1 23:49:38.834: INFO: Dumping workload cluster default/capz-bdlj6u Azure activity log
Feb  1 23:49:38.834: INFO: Creating log watcher for controller kube-system/kube-scheduler-capz-bdlj6u-control-plane-glf89, container kube-scheduler
Feb  1 23:49:38.835: INFO: Describing Pod kube-system/kube-scheduler-capz-bdlj6u-control-plane-glf89
... skipping 18 lines ...
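The log above retrieves the workload-cluster kubeconfig in two steps: poll with `timeout`/`while` until the `capz-bdlj6u-kubeconfig` Secret appears, then extract `.data.value` with `jq` and base64-decode it to a local file. The sketch below demonstrates the same pipeline against a mock Secret JSON rather than a live cluster (no `kubectl` is invoked); the kubeconfig contents and the file-based wait condition are illustrative assumptions, not taken from this run.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Build a mock Secret JSON whose .data.value holds a base64-encoded kubeconfig,
# standing in for `kubectl get secrets capz-bdlj6u-kubeconfig -o json`.
# (The kubeconfig body here is a made-up two-line stub.)
b64=$(printf 'apiVersion: v1\nkind: Config\n' | base64 -w0)
mock_secret_json="{\"data\":{\"value\":\"$b64\"}}"

# Same extraction pipeline as the log: pull .data.value, base64-decode,
# and store the result locally.
echo "$mock_secret_json" | jq -r .data.value | base64 --decode > ./kubeconfig

# Same wait-loop shape as the log's `timeout --foreground ... while ! ...`
# polling, but checking for the local file instead of polling kubectl.
timeout --foreground 5 bash -c 'while ! ls | grep -q "^kubeconfig$"; do sleep 1; done'

# Show the first line of the decoded kubeconfig.
head -n1 ./kubeconfig
```

`timeout --foreground` matters when the command is run from an interactive script: it lets the polled child process share the terminal's foreground process group so signals (for example Ctrl-C) reach it directly.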