Result: FAILURE
Tests: 1 failed / 0 succeeded
Started: 2022-09-24 03:26
Elapsed: 1h21m
Revision: main

Test Failures


capo-conformance conformance tests conformance (34m2s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capo\-conformance\sconformance\stests\sconformance$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-openstack/test/e2e/suites/conformance/conformance_test.go:56
Timed out after 1800.000s.
No Control Plane machines came into existence. 
Expected
    <bool>: false
to be true
/root/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.0/framework/controlplane_helpers.go:153
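Note: the 1800s timeout above is the cluster-api test framework's control-plane wait (controlplane_helpers.go:153), which polls until at least one Machine owned by the KubeadmControlPlane exists and fails the spec when none appears within 30 minutes. A minimal first-pass triage against the bootstrap cluster, assuming its kubeconfig is active (the namespace and resource kinds are taken from later in this log and from the CAPO CRDs, not from this page itself):

    # Why did no control-plane Machine appear? Check CAPI and CAPO objects.
    kubectl get kubeadmcontrolplane,machines,openstackmachines -n conformance-p7abyk
    # Events on the OpenStackMachine usually show networking or image errors.
    kubectl describe openstackmachines -n conformance-p7abyk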


Error lines from build-log.txt

... skipping 104 lines ...
|o.=.B+ +S        |
| + Bo+=          |
|  =ooo.          |
|.oo E  .o        |
|oo.. .o+         |
+----[SHA256]-----+
ERROR: (gcloud.compute.config-ssh) Could not fetch resource:
 - Required 'compute.projects.get' permission for 'projects/k8s-prow-builds'

+ true
+ [[ -n '' ]]
+ init_infrastructure
+ [[ capo-e2e-mynetwork != \d\e\f\a\u\l\t ]]
+ gcloud compute networks describe capo-e2e-mynetwork --project k8s-jkns-gce-upgrade
ERROR: (gcloud.compute.networks.describe) Could not fetch resource:
 - The resource 'projects/k8s-jkns-gce-upgrade/global/networks/capo-e2e-mynetwork' was not found

+ gcloud compute networks create --project k8s-jkns-gce-upgrade capo-e2e-mynetwork --subnet-mode custom
Created [https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-upgrade/global/networks/capo-e2e-mynetwork].
NAME                SUBNET_MODE  BGP_ROUTING_MODE  IPV4_RANGE  GATEWAY_IPV4
capo-e2e-mynetwork  CUSTOM       REGIONAL
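Note: the ERROR from 'gcloud compute networks describe' just above is expected on a clean project; the script uses a describe-then-create pattern to make infrastructure setup idempotent. Condensed to its essentials (variable names here are illustrative, not from the script):

    # Create the network only if it does not already exist
    if ! gcloud compute networks describe "$NETWORK" --project "$PROJECT" >/dev/null 2>&1; then
      gcloud compute networks create "$NETWORK" --project "$PROJECT" --subnet-mode custom
    fi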
... skipping 71 lines ...
selfLinkWithId: https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-upgrade/global/networks/7225504045941900828
subnetworks:
- https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-upgrade/regions/us-east4/subnetworks/capo-e2e-mynetwork
x_gcloud_bgp_routing_mode: REGIONAL
x_gcloud_subnet_mode: CUSTOM
+ gcloud compute routers describe capo-e2e-myrouter --project=k8s-jkns-gce-upgrade --region=us-east4
ERROR: (gcloud.compute.routers.describe) Could not fetch resource:
 - The resource 'projects/k8s-jkns-gce-upgrade/regions/us-east4/routers/capo-e2e-myrouter' was not found

+ gcloud compute routers create capo-e2e-myrouter --project=k8s-jkns-gce-upgrade --region=us-east4 --network=capo-e2e-mynetwork
NAME               REGION    NETWORK
capo-e2e-myrouter  us-east4  capo-e2e-mynetwork
Creating router [capo-e2e-myrouter]...
.....done.
+ gcloud compute routers nats describe --router=capo-e2e-myrouter capo-e2e-mynat --project=k8s-jkns-gce-upgrade --region=us-east4
ERROR: (gcloud.compute.routers.nats.describe) NAT `capo-e2e-mynat` not found
+ gcloud compute routers nats create capo-e2e-mynat --project=k8s-jkns-gce-upgrade --router-region=us-east4 --router=capo-e2e-myrouter --nat-all-subnet-ip-ranges --auto-allocate-nat-external-ips
Creating NAT [capo-e2e-mynat] in router [capo-e2e-myrouter]...
..................................................done.
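Note: the router plus Cloud NAT created above is what gives VMs on the custom subnet outbound internet access without external IPs. If NAT allocation ever needs checking, the router's live state can be queried (a hedged example using this run's names):

    # Show live NAT status and allocated external IPs for the router
    gcloud compute routers get-status capo-e2e-myrouter --project=k8s-jkns-gce-upgrade --region=us-east4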
+ create_devstack controller 10.0.3.15 public
+ local name=controller
+ shift
... skipping 98 lines ...
+ local servername=capo-e2e-controller
+ local diskname=capo-e2e-disk
+ local imagename=capo-e2e-controller-image
+ for GCP_ZONE in "${GCP_REGION}-a" "${GCP_REGION}-b" "${GCP_REGION}-c"
+ gcloud compute images describe capo-e2e-controller-image --project k8s-jkns-gce-upgrade
+ gcloud compute instances describe capo-e2e-controller --project k8s-jkns-gce-upgrade --zone us-east4-a
ERROR: (gcloud.compute.instances.describe) Could not fetch resource:
 - The resource 'projects/k8s-jkns-gce-upgrade/zones/us-east4-a/instances/capo-e2e-controller' was not found

+ gcloud compute instances create capo-e2e-controller --project k8s-jkns-gce-upgrade --zone us-east4-a --image capo-e2e-controller-image --boot-disk-size 200G --boot-disk-type pd-ssd --can-ip-forward --tags http-server,https-server,novnc,openstack-apis --min-cpu-platform 'Intel Cascade Lake' --machine-type n2-standard-16 --network-interface=private-network-ip=10.0.3.15,network=capo-e2e-mynetwork,subnet=capo-e2e-mynetwork --metadata-from-file user-data=/logs/artifacts/devstack/cloud-init-controller.yaml
Created [https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-upgrade/zones/us-east4-a/instances/capo-e2e-controller].
WARNING: Some requests generated warnings:
 - Disk size: '200 GB' is larger than image size: '10 GB'. You might need to resize the root repartition manually if the operating system does not support automatic resizing. See https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd for details.
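Note: the disk-size warning is normally benign; on most cloud images, cloud-init grows the root partition and filesystem to fill the 200 GB disk on first boot, so no manual resize is needed.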
... skipping 51 lines ...
Processing triggers for libc-bin (2.28-10+deb10u1) ...
+ pip3 install --ignore-installed PyYAML
Collecting PyYAML
  Downloading PyYAML-6.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (596 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 596.3/596.3 kB 10.1 MB/s eta 0:00:00
Installing collected packages: PyYAML
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
awscli 1.25.79 requires PyYAML<5.5,>=3.10, but you have pyyaml 6.0 which is incompatible.
Successfully installed PyYAML-6.0
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
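Note: the pip resolver conflict above is real but inert for this job: awscli 1.25.x pins PyYAML<5.5 while the script force-installs 6.0 with --ignore-installed. If the conflict ever started breaking awscli, one conventional fix (a sketch, not what this CI script does) would be an isolated virtual environment:

    # Keep CI tooling out of the root/system site-packages
    python3 -m venv /tmp/capo-venv
    . /tmp/capo-venv/bin/activate
    pip install 'PyYAML>=3.10,<5.5' awscli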
+ pip3 install python-cinderclient python-glanceclient python-keystoneclient python-neutronclient python-novaclient python-openstackclient python-octaviaclient
Collecting python-cinderclient
  Downloading python_cinderclient-8.3.0-py3-none-any.whl (254 kB)
... skipping 210 lines ...
+ [[ 0 -ge 10 ]]
+ attempt=1
+ set +e
+ eval 'ssh -i /root/.ssh/google_compute_engine -l cloud  -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o PasswordAuthentication=no  34.145.219.84 -- true'
++ ssh -i /root/.ssh/google_compute_engine -l cloud -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o PasswordAuthentication=no 34.145.219.84 -- true
ssh: connect to host 34.145.219.84 port 22: Connection refused
+ echo 'failed 1 times: ssh -i /root/.ssh/google_compute_engine -l cloud  -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o PasswordAuthentication=no  34.145.219.84 -- true'
failed 1 times: ssh -i /root/.ssh/google_compute_engine -l cloud  -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o PasswordAuthentication=no  34.145.219.84 -- true
+ set -e
+ sleep 30
+ [[ 1 -ge 10 ]]
+ attempt=2
+ set +e
+ eval 'ssh -i /root/.ssh/google_compute_engine -l cloud  -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o PasswordAuthentication=no  34.145.219.84 -- true'
... skipping 56 lines ...
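Note: the trace above implements a bounded retry: up to 10 SSH attempts 30 seconds apart, tolerating 'Connection refused' while the freshly created instance boots. Stripped of the eval/xtrace noise, the logic is roughly the following ($ssh_cmd and $ip stand in for the command and address seen in the trace):

    attempt=0
    until $ssh_cmd "$ip" -- true; do
      attempt=$((attempt + 1))
      [[ $attempt -ge 10 ]] && { echo "instance never became reachable"; exit 1; }
      sleep 30
    done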
++ echo 'ssh -i /root/.ssh/google_compute_engine -l cloud ' '-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o PasswordAuthentication=no '
+ ssh_cmd='ssh -i /root/.ssh/google_compute_engine -l cloud  -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o PasswordAuthentication=no '
+ ssh -i /root/.ssh/google_compute_engine -l cloud -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o PasswordAuthentication=no 10.0.3.15 -- '
    echo Waiting for cloud-final to complete
    start=$(date -u +%s)
    while true; do
       systemctl --quiet is-failed cloud-final && exit 1
       systemctl --quiet is-active cloud-final && exit 0
       echo Waited $((($(date -u +%s)-$start)/60)) minutes
       sleep 30
    done'
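Note: this remote loop blocks until cloud-init's final stage settles: 'systemctl --quiet is-active cloud-final' exits 0 once the oneshot unit has completed, while 'is-failed' exits 0 only if it failed, so the script exits with success or failure accordingly and otherwise keeps polling every 30 seconds, printing a minutes-elapsed progress line.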
Warning: Permanently added '10.0.3.15' (ECDSA) to the list of known hosts.
Waiting for cloud-final to complete
... skipping 216 lines ...
+ local servername=capo-e2e-worker
+ local diskname=capo-e2e-disk
+ local imagename=capo-e2e-worker-image
+ for GCP_ZONE in "${GCP_REGION}-a" "${GCP_REGION}-b" "${GCP_REGION}-c"
+ gcloud compute images describe capo-e2e-worker-image --project k8s-jkns-gce-upgrade
+ gcloud compute instances describe capo-e2e-worker --project k8s-jkns-gce-upgrade --zone us-east4-a
ERROR: (gcloud.compute.instances.describe) Could not fetch resource:
 - The resource 'projects/k8s-jkns-gce-upgrade/zones/us-east4-a/instances/capo-e2e-worker' was not found

+ gcloud compute instances create capo-e2e-worker --project k8s-jkns-gce-upgrade --zone us-east4-a --image capo-e2e-worker-image --boot-disk-size 200G --boot-disk-type pd-ssd --can-ip-forward --tags http-server,https-server,novnc,openstack-apis --min-cpu-platform 'Intel Cascade Lake' --machine-type n2-standard-8 --network-interface=private-network-ip=10.0.3.16,network=capo-e2e-mynetwork,subnet=capo-e2e-mynetwork --metadata-from-file user-data=/logs/artifacts/devstack/cloud-init-worker.yaml
Created [https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-upgrade/zones/us-east4-a/instances/capo-e2e-worker].
WARNING: Some requests generated warnings:
 - Disk size: '200 GB' is larger than image size: '10 GB'. You might need to resize the root repartition manually if the operating system does not support automatic resizing. See https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd for details.
... skipping 619 lines ...
INFO: Loading image: "gcr.io/k8s-staging-cluster-api/kubeadm-control-plane-controller:v1.2.0"
INFO: Image gcr.io/k8s-staging-cluster-api/kubeadm-control-plane-controller:v1.2.0 is present in local container image cache
INFO: Loading image: "gcr.io/k8s-staging-capi-openstack/capi-openstack-controller:e2e"
INFO: Image gcr.io/k8s-staging-capi-openstack/capi-openstack-controller:e2e is present in local container image cache
INFO: Loading image: "quay.io/jetstack/cert-manager-cainjector:v1.8.3"
INFO: Image quay.io/jetstack/cert-manager-cainjector:v1.8.3 not present in local container image cache, will pull
INFO: [WARNING] Unable to load image "quay.io/jetstack/cert-manager-cainjector:v1.8.3" into the kind cluster "capo-e2e": error pulling image "quay.io/jetstack/cert-manager-cainjector:v1.8.3": failure pulling container image: Error response from daemon: manifest for quay.io/jetstack/cert-manager-cainjector:v1.8.3 not found: manifest unknown: manifest unknown
INFO: Loading image: "quay.io/jetstack/cert-manager-webhook:v1.8.3"
INFO: Image quay.io/jetstack/cert-manager-webhook:v1.8.3 not present in local container image cache, will pull
INFO: [WARNING] Unable to load image "quay.io/jetstack/cert-manager-webhook:v1.8.3" into the kind cluster "capo-e2e": error pulling image "quay.io/jetstack/cert-manager-webhook:v1.8.3": failure pulling container image: Error response from daemon: manifest for quay.io/jetstack/cert-manager-webhook:v1.8.3 not found: manifest unknown: manifest unknown
INFO: Loading image: "quay.io/jetstack/cert-manager-controller:v1.8.3"
INFO: Image quay.io/jetstack/cert-manager-controller:v1.8.3 not present in local container image cache, will pull
INFO: [WARNING] Unable to load image "quay.io/jetstack/cert-manager-controller:v1.8.3" into the kind cluster "capo-e2e": error pulling image "quay.io/jetstack/cert-manager-controller:v1.8.3": failure pulling container image: Error response from daemon: manifest for quay.io/jetstack/cert-manager-controller:v1.8.3 not found: manifest unknown: manifest unknown
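Note: all three cert-manager v1.8.3 pre-pulls fail with 'manifest unknown', i.e. that tag is not published on quay.io. These surface only as warnings because pre-loading images into the kind cluster is an optimization; the cluster attempts its own pulls at deploy time. Whether a tag exists can be checked without downloading layers, for example:

    # Query the registry for the manifest only
    docker manifest inspect quay.io/jetstack/cert-manager-cainjector:v1.8.3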
STEP: [2022-09-24T04:12:37Z] Initializing the bootstrap cluster
INFO: clusterctl init --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure openstack
INFO: Waiting for provider controllers to be running
STEP: Waiting for deployment capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager to be available
INFO: Creating log watcher for controller capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-748fb6d88b-pjsd4, container manager
STEP: Waiting for deployment capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager to be available
... skipping 38 lines ...
STEP: Waiting for cluster to enter the provisioned phase
cannot dump machines, cluster doesn't has a bastion host (yet) with a floating ip
cannot dump machines, cluster doesn't has a bastion host (yet) with a floating ip
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by conformance-p7abyk/cluster-conformance-p7abyk-control-plane to be provisioned
STEP: Waiting for one control plane node to exist
error getting internal ip for server cluster-conformance-p7abyk-control-plane-4dxd2: internal ip doesn't exist (yet)
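Note: this line is the closest thing to a root cause in the excerpt: the OpenStack server backing the first control-plane machine never got an internal IP, so the Machine never provisioned and the 30-minute wait at controlplane_helpers.go:153 expired. A hedged first look on the DevStack side (the openstack CLI was installed earlier in this log; the server name is taken from this run):

    # Is the server ACTIVE, and does it have any ports/addresses?
    openstack server show cluster-conformance-p7abyk-control-plane-4dxd2 -c status -c addresses
    openstack port list --server cluster-conformance-p7abyk-control-plane-4dxd2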
[AfterEach] conformance tests
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-openstack/test/e2e/suites/conformance/conformance_test.go:119
STEP: [2022-09-24T04:45:52Z] Setting environment variable: key=USE_CI_ARTIFACTS, value=false
STEP: [2022-09-24T04:45:52Z] Running DumpSpecResourcesAndCleanup for namespace "conformance-p7abyk"
STEP: [2022-09-24T04:45:52Z] Running dumpOpenStack
folder created for OpenStack clusters: /logs/artifacts/clusters/bootstrap/openstack-resources
STEP: [2022-09-24T04:45:53Z] Dumping all OpenStack server instances in the "conformance-p7abyk" namespace
STEP: Deleting cluster cluster-conformance-p7abyk
INFO: Waiting for the Cluster conformance-p7abyk/cluster-conformance-p7abyk to be deleted
STEP: Waiting for cluster cluster-conformance-p7abyk to be deleted
couldn't dial from local to machine 10.6.0.254: ssh: handshake failed: EOF
couldn't dial from local to bastion host 172.24.4.190: ssh: handshake failed: EOF
couldn't dial from local to bastion host 172.24.4.190: ssh: handshake failed: EOF
cannot dump machines, cluster doesn't has a bastion host (yet) with a floating ip
STEP: [2022-09-24T04:47:20Z] Deleting namespace used for hosting the "conformance" test spec
INFO: Deleting namespace conformance-p7abyk

• Failure [2042.748 seconds]
conformance tests
... skipping 71 lines ...

JUnit report was created: /logs/artifacts/junit.e2e_suite.1.xml


Summarizing 1 Failure:

[Fail] conformance tests [Measurement] conformance 
/root/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.0/framework/controlplane_helpers.go:153

Ran 1 of 1 Specs in 2160.941 seconds
FAIL! -- 0 Passed | 1 Failed | 0 Pending | 0 Skipped
--- FAIL: TestConformance (2160.94s)

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 8 lines ...
  Learn more at: https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#removed-measure
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-openstack/test/e2e/suites/conformance/conformance_test.go:56

To silence deprecations that can be silenced set the following environment variable:
  ACK_GINKGO_DEPRECATIONS=1.16.5

FAIL

Ginkgo ran 1 suite in 37m11.783146454s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 3 lines ...
To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc

real	37m11.791s
user	8m1.374s
sys	1m35.282s
make[1]: *** [Makefile:186: test-conformance] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-openstack'
make: *** [Makefile:189: test-conformance-fast] Error 2
./scripts/ci-conformance.sh: line 39:   576 Killed                  python3 -u hack/boskos.py --heartbeat >> "$ARTIFACTS/logs/boskos.log" 2>&1
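Note: the 'Killed' line above is routine teardown, not part of the failure: boskos.py --heartbeat runs in the background to keep the leased GCP project checked out for the duration of the job, and is killed when the job exits.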
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
... skipping 5 lines ...