Recent runs | View in Spyglass
PR       | andyzhangx: fix: target is busy unmount failure
Result   | FAILURE
Tests    | 0 failed / 0 succeeded
Started  |
Elapsed  | 12m1s
Revision | 89c3563f6bc442bb7fcd7a7f30aa16f90d2cda06
Refs     | 1158
... skipping 765 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
# Create customized cloud provider configs
./hack/create-custom-cloud-provider-config.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
... skipping 49 lines ...
clusterrolebinding.rbac.authorization.k8s.io/capi-kubeadm-control-plane-manager-rolebinding created
service/capi-kubeadm-control-plane-webhook-service created
deployment.apps/capi-kubeadm-control-plane-controller-manager created
issuer.cert-manager.io/capi-kubeadm-control-plane-selfsigned-issuer created
mutatingwebhookconfiguration.admissionregistration.k8s.io/capi-kubeadm-control-plane-mutating-webhook-configuration created
validatingwebhookconfiguration.admissionregistration.k8s.io/capi-kubeadm-control-plane-validating-webhook-configuration created
Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "webhook.cert-manager.io": failed to call webhook: Post "https://cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s": context deadline exceeded
Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "webhook.cert-manager.io": failed to call webhook: Post "https://cert-manager-webhook.cert-manager.svc:443/validate?timeout=10s": context deadline exceeded
make[1]: *** [Makefile:278: create-management-cluster] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:340: create-cluster] Error 2
Collecting logs for cluster capz-smx4rd in namespace default and dumping logs to /logs/artifacts
INFO: Creating log watcher for controller capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-78c686c9b6-2d8wr, container manager
INFO: Creating log watcher for controller capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager, pod capi-kubeadm-control-plane-controller-manager-85c6b99674-2rcp9, container manager
INFO: Error starting logs stream for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-85c6b99674-2rcp9, container manager: container "manager" in pod "capi-kubeadm-control-plane-controller-manager-85c6b99674-2rcp9" is waiting to start: ContainerCreating
INFO: Creating log watcher for controller capi-system/capi-controller-manager, pod capi-controller-manager-fdbd64c45-g22tl, container manager
INFO: Error starting logs stream for pod capi-system/capi-controller-manager-fdbd64c45-g22tl, container manager: container "manager" in pod "capi-controller-manager-fdbd64c45-g22tl" is waiting to start: ContainerCreating
panic: Your test failed.
Ginkgo panics to prevent subsequent assertions from running.
Normally Ginkgo rescues this panic so you shouldn't see it.
But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
To circumvent this, you should call
	defer GinkgoRecover()
at the top of the goroutine that caused this panic.
goroutine 1 [running]:
github.com/onsi/ginkgo.Fail({0xc00108a080, 0x79}, {0x0?, 0x2?, 0x2?})
	/home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.16.5/ginkgo_dsl.go:291 +0xdd
sigs.k8s.io/cluster-api/test/framework.GetCAPIResources({0x3139a10?, 0xc000120008}, {{0x7f60f8793ea0?, 0xc000580b60?}, {0x7ffde2041919?, 0x1?}})
	/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.8/framework/alltypes_helpers.go:67 +0x61a
sigs.k8s.io/cluster-api/test/framework.DumpAllResources({0x3139a10?, 0xc000120008}, {{0x7f60f8793ea0, 0xc000580b60}, {0x7ffde2041919, 0x7}, {0xc0002b7590, 0x2c}})
	/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.8/framework/alltypes_helpers.go:120 +0x1ff
main.collectManagementClusterLogs(0xc000414160, {0xc0002b75f0, 0x2e}, 0xc000404c90, {0xc0002b7590, 0x2c})
... skipping 15 lines ...
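The Ginkgo panic above advises calling `defer GinkgoRecover()` at the top of any goroutine that makes assertions, because a Ginkgo assertion failure is implemented as a panic, and a panic in a bare goroutine crashes the whole test process. The mechanism is an ordinary deferred `recover()`. Below is a minimal stdlib-only sketch of that pattern (the `assertEqual` helper is hypothetical, standing in for a Ginkgo assertion; it does not use Ginkgo itself):

```go
package main

import "fmt"

// assertEqual is a hypothetical stand-in for a Ginkgo assertion:
// like Ginkgo's Fail, it reports failure by panicking.
func assertEqual(got, want int) {
	if got != want {
		panic(fmt.Sprintf("assertion failed: got %d, want %d", got, want))
	}
}

func main() {
	done := make(chan struct{})
	go func() {
		// Without this deferred recover, the failing assertion below
		// would crash the entire process, as seen in the log above.
		// `defer GinkgoRecover()` plays the analogous role in Ginkgo.
		defer func() {
			if r := recover(); r != nil {
				fmt.Println("recovered in goroutine:", r)
			}
			close(done)
		}()
		assertEqual(1, 2) // deliberately failing assertion
	}()
	<-done
	fmt.Println("process survived the failed assertion")
}
```

In a real Ginkgo test, `GinkgoRecover()` additionally re-reports the recovered failure to the Ginkgo runner so the spec is still marked failed, rather than silently swallowing it as a plain `recover()` would.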