Result: success
Tests: 0 failed / 17 succeeded
Started: 2022-09-08 13:15
Elapsed: 54m26s
Revision:
Uploader: crier

No Test Failures!



Error lines from build-log.txt

... skipping 926 lines ...
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
INFO: Waiting for the machine pools to be provisioned
STEP: PASSED!
STEP: Dumping logs from the "quick-start-gvce9i" workload cluster
Failed to get logs for machine quick-start-gvce9i-control-plane-m9j2d, cluster quick-start-1y3j49/quick-start-gvce9i: exit status 2
Failed to get logs for machine quick-start-gvce9i-md-0-5659b8458d-tnjtf, cluster quick-start-1y3j49/quick-start-gvce9i: exit status 2
STEP: Dumping all the Cluster API resources in the "quick-start-1y3j49" namespace
STEP: Deleting cluster quick-start-1y3j49/quick-start-gvce9i
STEP: Deleting cluster quick-start-gvce9i
INFO: Waiting for the Cluster quick-start-1y3j49/quick-start-gvce9i to be deleted
STEP: Waiting for cluster quick-start-gvce9i to be deleted
STEP: Deleting namespace used for hosting the "quick-start" test spec
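The pair of "Failed to get logs … exit status 2" lines above recurs in nearly every spec in this log. A throwaway Go sketch (a hypothetical helper, not part of the cluster-api test suite) for tallying those failures per cluster when triaging a build log:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// tallyLogFailures counts "Failed to get logs for machine ..." lines,
// grouped by the namespace/cluster that emitted them.
func tallyLogFailures(log string) map[string]int {
	re := regexp.MustCompile(`Failed to get logs for machine \S+, cluster (\S+):`)
	counts := map[string]int{}
	for _, line := range strings.Split(log, "\n") {
		if m := re.FindStringSubmatch(line); m != nil {
			counts[m[1]]++
		}
	}
	return counts
}

func main() {
	sample := `Failed to get logs for machine quick-start-gvce9i-control-plane-m9j2d, cluster quick-start-1y3j49/quick-start-gvce9i: exit status 2
Failed to get logs for machine quick-start-gvce9i-md-0-5659b8458d-tnjtf, cluster quick-start-1y3j49/quick-start-gvce9i: exit status 2`
	fmt.Println(tallyLogFailures(sample))
	// prints "map[quick-start-1y3j49/quick-start-gvce9i:2]"
}
```

Because these failures do not fail the specs (every block still reaches "STEP: PASSED!"), a tally like this is mainly useful for spotting whether log collection is broken uniformly or only for particular clusters.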
... skipping 38 lines ...
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
INFO: Waiting for the machine pools to be provisioned
STEP: PASSED!
STEP: Dumping logs from the "quick-start-w3jz62" workload cluster
Failed to get logs for machine quick-start-w3jz62-control-plane-hf7nr, cluster quick-start-vjuuo8/quick-start-w3jz62: exit status 2
Failed to get logs for machine quick-start-w3jz62-md-0-8469c7878f-26gcs, cluster quick-start-vjuuo8/quick-start-w3jz62: exit status 2
STEP: Dumping all the Cluster API resources in the "quick-start-vjuuo8" namespace
STEP: Deleting cluster quick-start-vjuuo8/quick-start-w3jz62
STEP: Deleting cluster quick-start-w3jz62
INFO: Waiting for the Cluster quick-start-vjuuo8/quick-start-w3jz62 to be deleted
STEP: Waiting for cluster quick-start-w3jz62 to be deleted
STEP: Deleting namespace used for hosting the "quick-start" test spec
... skipping 40 lines ...
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
INFO: Waiting for the machine pools to be provisioned
STEP: PASSED!
STEP: Dumping logs from the "quick-start-70510a" workload cluster
Failed to get logs for machine quick-start-70510a-bnlnz-zjvpt, cluster quick-start-7d6dmg/quick-start-70510a: exit status 2
Failed to get logs for machine quick-start-70510a-md-0-nfk42-5b7966c694-ghmd4, cluster quick-start-7d6dmg/quick-start-70510a: exit status 2
STEP: Dumping all the Cluster API resources in the "quick-start-7d6dmg" namespace
STEP: Deleting cluster quick-start-7d6dmg/quick-start-70510a
STEP: Deleting cluster quick-start-70510a
INFO: Waiting for the Cluster quick-start-7d6dmg/quick-start-70510a to be deleted
STEP: Waiting for cluster quick-start-70510a to be deleted
STEP: Deleting namespace used for hosting the "quick-start" test spec
... skipping 49 lines ...
STEP: Waiting for the machine pool workload nodes
STEP: Scaling the machine pool to zero
INFO: Patching the replica count in Machine Pool machine-pool-e55tfe/machine-pool-bdi7sm-mp-0
STEP: Waiting for the machine pool workload nodes
STEP: PASSED!
STEP: Dumping logs from the "machine-pool-bdi7sm" workload cluster
Failed to get logs for machine machine-pool-bdi7sm-control-plane-z6w89, cluster machine-pool-e55tfe/machine-pool-bdi7sm: exit status 2
STEP: Dumping all the Cluster API resources in the "machine-pool-e55tfe" namespace
STEP: Deleting cluster machine-pool-e55tfe/machine-pool-bdi7sm
STEP: Deleting cluster machine-pool-bdi7sm
INFO: Waiting for the Cluster machine-pool-e55tfe/machine-pool-bdi7sm to be deleted
STEP: Waiting for cluster machine-pool-bdi7sm to be deleted
STEP: Deleting namespace used for hosting the "machine-pool" test spec
... skipping 104 lines ...
STEP: Deleting cluster clusterctl-upgrade/clusterctl-upgrade-xghb6b
STEP: Deleting namespace clusterctl-upgrade used for hosting the "clusterctl-upgrade" test
INFO: Deleting namespace clusterctl-upgrade
STEP: Deleting providers
INFO: clusterctl delete --all
STEP: Dumping logs from the "clusterctl-upgrade-xghb6b" workload cluster
Failed to get logs for machine clusterctl-upgrade-xghb6b-control-plane-swjft, cluster clusterctl-upgrade-lho03a/clusterctl-upgrade-xghb6b: exit status 2
Failed to get logs for machine clusterctl-upgrade-xghb6b-md-0-7745447bf4-bcqzt, cluster clusterctl-upgrade-lho03a/clusterctl-upgrade-xghb6b: exit status 2
STEP: Dumping all the Cluster API resources in the "clusterctl-upgrade-lho03a" namespace
STEP: Deleting cluster clusterctl-upgrade-lho03a/clusterctl-upgrade-xghb6b
STEP: Deleting cluster clusterctl-upgrade-xghb6b
INFO: Waiting for the Cluster clusterctl-upgrade-lho03a/clusterctl-upgrade-xghb6b to be deleted
STEP: Waiting for cluster clusterctl-upgrade-xghb6b to be deleted
STEP: Deleting namespace used for hosting the "clusterctl-upgrade" test spec
... skipping 52 lines ...
STEP: Waiting for deployment node-drain-0p8ekr-unevictable-workload/unevictable-pod-rmr to be available
STEP: Scale down the controlplane of the workload cluster and make sure that nodes running workloads can be deleted even if the draining process is blocked.
INFO: Scaling controlplane node-drain-0p8ekr/node-drain-lzlc7i-control-plane from 3 to 1 replicas
INFO: Waiting for correct number of replicas to exist
STEP: PASSED!
STEP: Dumping logs from the "node-drain-lzlc7i" workload cluster
Failed to get logs for machine node-drain-lzlc7i-control-plane-7px54, cluster node-drain-0p8ekr/node-drain-lzlc7i: exit status 2
STEP: Dumping all the Cluster API resources in the "node-drain-0p8ekr" namespace
STEP: Deleting cluster node-drain-0p8ekr/node-drain-lzlc7i
STEP: Deleting cluster node-drain-lzlc7i
INFO: Waiting for the Cluster node-drain-0p8ekr/node-drain-lzlc7i to be deleted
STEP: Waiting for cluster node-drain-lzlc7i to be deleted
STEP: Deleting namespace used for hosting the "node-drain" test spec
... skipping 70 lines ...
STEP: Ensure API servers are stable before doing move
STEP: Moving the cluster back to bootstrap
STEP: Moving workload clusters
INFO: Waiting for the cluster to be reconciled after moving back to bootstrap
STEP: Waiting for cluster to enter the provisioned phase
STEP: Dumping logs from the "self-hosted-9dknlq" workload cluster
Failed to get logs for machine self-hosted-9dknlq-md-0-zcc7c-78dc5c75d-778ht, cluster self-hosted-svgpwo/self-hosted-9dknlq: exit status 2
Failed to get logs for machine self-hosted-9dknlq-w4d4x-btrvf, cluster self-hosted-svgpwo/self-hosted-9dknlq: exit status 2
STEP: Dumping all the Cluster API resources in the "self-hosted-svgpwo" namespace
STEP: Deleting cluster self-hosted-svgpwo/self-hosted-9dknlq
STEP: Deleting cluster self-hosted-9dknlq
INFO: Waiting for the Cluster self-hosted-svgpwo/self-hosted-9dknlq to be deleted
STEP: Waiting for cluster self-hosted-9dknlq to be deleted
STEP: Deleting namespace used for hosting the "self-hosted" test spec
... skipping 35 lines ...

STEP: Waiting for the control plane to be ready
STEP: Taking stable ownership of the Machines
STEP: Taking ownership of the cluster's PKI material
STEP: PASSED!
STEP: Dumping logs from the "kcp-adoption-75gg0d" workload cluster
Failed to get logs for machine kcp-adoption-75gg0d-control-plane-0, cluster kcp-adoption-w8kwf6/kcp-adoption-75gg0d: exit status 2
STEP: Dumping all the Cluster API resources in the "kcp-adoption-w8kwf6" namespace
STEP: Deleting cluster kcp-adoption-w8kwf6/kcp-adoption-75gg0d
STEP: Deleting cluster kcp-adoption-75gg0d
INFO: Waiting for the Cluster kcp-adoption-w8kwf6/kcp-adoption-75gg0d to be deleted
STEP: Waiting for cluster kcp-adoption-75gg0d to be deleted
STEP: Deleting namespace used for hosting the "kcp-adoption" test spec
... skipping 46 lines ...
Patching MachineHealthCheck unhealthy condition to one of the nodes
INFO: Patching the node condition to the node
Waiting for remediation
Waiting until the node with unhealthy node condition is remediated
STEP: PASSED!
STEP: Dumping logs from the "mhc-remediation-5o7y0f" workload cluster
Failed to get logs for machine mhc-remediation-5o7y0f-control-plane-ptdqq, cluster mhc-remediation-sanq23/mhc-remediation-5o7y0f: exit status 2
Failed to get logs for machine mhc-remediation-5o7y0f-md-0-56f9847f98-f2x6z, cluster mhc-remediation-sanq23/mhc-remediation-5o7y0f: exit status 2
STEP: Dumping all the Cluster API resources in the "mhc-remediation-sanq23" namespace
STEP: Deleting cluster mhc-remediation-sanq23/mhc-remediation-5o7y0f
STEP: Deleting cluster mhc-remediation-5o7y0f
INFO: Waiting for the Cluster mhc-remediation-sanq23/mhc-remediation-5o7y0f to be deleted
STEP: Waiting for cluster mhc-remediation-5o7y0f to be deleted
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
... skipping 52 lines ...
INFO: Waiting for etcd to have the upgraded image tag
INFO: Waiting for Kubernetes versions of machines in MachineDeployment k8s-upgrade-and-conformance-9p04te/k8s-upgrade-and-conformance-hxhuhq-md-0-tbbd5 to be upgraded to v1.24.0
INFO: Ensuring all MachineDeployment Machines have upgraded kubernetes version v1.24.0
STEP: Waiting until nodes are ready
STEP: PASSED!
STEP: Dumping logs from the "k8s-upgrade-and-conformance-hxhuhq" workload cluster
Failed to get logs for machine k8s-upgrade-and-conformance-hxhuhq-md-0-tbbd5-85499cc5fc-jknkq, cluster k8s-upgrade-and-conformance-9p04te/k8s-upgrade-and-conformance-hxhuhq: exit status 2
Failed to get logs for machine k8s-upgrade-and-conformance-hxhuhq-s4rkn-4h4wg, cluster k8s-upgrade-and-conformance-9p04te/k8s-upgrade-and-conformance-hxhuhq: exit status 2
Failed to get logs for machine k8s-upgrade-and-conformance-hxhuhq-s4rkn-d4dxh, cluster k8s-upgrade-and-conformance-9p04te/k8s-upgrade-and-conformance-hxhuhq: exit status 2
Failed to get logs for machine k8s-upgrade-and-conformance-hxhuhq-s4rkn-jh2fq, cluster k8s-upgrade-and-conformance-9p04te/k8s-upgrade-and-conformance-hxhuhq: exit status 2
STEP: Dumping all the Cluster API resources in the "k8s-upgrade-and-conformance-9p04te" namespace
STEP: Deleting cluster k8s-upgrade-and-conformance-9p04te/k8s-upgrade-and-conformance-hxhuhq
STEP: Deleting cluster k8s-upgrade-and-conformance-hxhuhq
INFO: Waiting for the Cluster k8s-upgrade-and-conformance-9p04te/k8s-upgrade-and-conformance-hxhuhq to be deleted
STEP: Waiting for cluster k8s-upgrade-and-conformance-hxhuhq to be deleted
STEP: Deleting namespace used for hosting the "k8s-upgrade-and-conformance" test spec
... skipping 70 lines ...
STEP: Ensure API servers are stable before doing move
STEP: Moving the cluster back to bootstrap
STEP: Moving workload clusters
INFO: Waiting for the cluster to be reconciled after moving back to bootstrap
STEP: Waiting for cluster to enter the provisioned phase
STEP: Dumping logs from the "self-hosted-xxknbw" workload cluster
Failed to get logs for machine self-hosted-xxknbw-control-plane-c5d9p, cluster self-hosted-rs93nc/self-hosted-xxknbw: exit status 2
Failed to get logs for machine self-hosted-xxknbw-md-0-7b9954cc77-m9dc5, cluster self-hosted-rs93nc/self-hosted-xxknbw: exit status 2
STEP: Dumping all the Cluster API resources in the "self-hosted-rs93nc" namespace
STEP: Deleting cluster self-hosted-rs93nc/self-hosted-xxknbw
STEP: Deleting cluster self-hosted-xxknbw
INFO: Waiting for the Cluster self-hosted-rs93nc/self-hosted-xxknbw to be deleted
STEP: Waiting for cluster self-hosted-xxknbw to be deleted
STEP: Deleting namespace used for hosting the "self-hosted" test spec
... skipping 48 lines ...
Patching MachineHealthCheck unhealthy condition to one of the nodes
INFO: Patching the node condition to the node
Waiting for remediation
Waiting until the node with unhealthy node condition is remediated
STEP: PASSED!
STEP: Dumping logs from the "mhc-remediation-pzkmvv" workload cluster
Failed to get logs for machine mhc-remediation-pzkmvv-control-plane-kwcbn, cluster mhc-remediation-61of1w/mhc-remediation-pzkmvv: exit status 2
Failed to get logs for machine mhc-remediation-pzkmvv-control-plane-qg95b, cluster mhc-remediation-61of1w/mhc-remediation-pzkmvv: exit status 2
Failed to get logs for machine mhc-remediation-pzkmvv-control-plane-zknj5, cluster mhc-remediation-61of1w/mhc-remediation-pzkmvv: exit status 2
Failed to get logs for machine mhc-remediation-pzkmvv-md-0-6fd8877c85-p5qbj, cluster mhc-remediation-61of1w/mhc-remediation-pzkmvv: exit status 2
STEP: Dumping all the Cluster API resources in the "mhc-remediation-61of1w" namespace
STEP: Deleting cluster mhc-remediation-61of1w/mhc-remediation-pzkmvv
STEP: Deleting cluster mhc-remediation-pzkmvv
INFO: Waiting for the Cluster mhc-remediation-61of1w/mhc-remediation-pzkmvv to be deleted
STEP: Waiting for cluster mhc-remediation-pzkmvv to be deleted
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
... skipping 44 lines ...
INFO: Waiting for correct number of replicas to exist
STEP: Scaling the MachineDeployment down to 1
INFO: Scaling machine deployment md-scale-acd2mm/md-scale-getcha-md-0 from 3 to 1 replicas
INFO: Waiting for correct number of replicas to exist
STEP: PASSED!
STEP: Dumping logs from the "md-scale-getcha" workload cluster
Failed to get logs for machine md-scale-getcha-control-plane-zhrqg, cluster md-scale-acd2mm/md-scale-getcha: exit status 2
Failed to get logs for machine md-scale-getcha-md-0-79544c6d75-gjz5m, cluster md-scale-acd2mm/md-scale-getcha: exit status 2
STEP: Dumping all the Cluster API resources in the "md-scale-acd2mm" namespace
STEP: Deleting cluster md-scale-acd2mm/md-scale-getcha
STEP: Deleting cluster md-scale-getcha
INFO: Waiting for the Cluster md-scale-acd2mm/md-scale-getcha to be deleted
STEP: Waiting for cluster md-scale-getcha to be deleted
STEP: Deleting namespace used for hosting the "md-scale" test spec
... skipping 44 lines ...
INFO: Waiting for rolling upgrade to start.
INFO: Waiting for MachineDeployment rolling upgrade to start
INFO: Waiting for rolling upgrade to complete.
INFO: Waiting for MachineDeployment rolling upgrade to complete
STEP: PASSED!
STEP: Dumping logs from the "md-rollout-9ah1w8" workload cluster
Failed to get logs for machine md-rollout-9ah1w8-control-plane-zlzq4, cluster md-rollout-4t1gbc/md-rollout-9ah1w8: exit status 2
Failed to get logs for machine md-rollout-9ah1w8-md-0-57458bc69-tbtn8, cluster md-rollout-4t1gbc/md-rollout-9ah1w8: exit status 2
STEP: Dumping all the Cluster API resources in the "md-rollout-4t1gbc" namespace
STEP: Deleting cluster md-rollout-4t1gbc/md-rollout-9ah1w8
STEP: Deleting cluster md-rollout-9ah1w8
INFO: Waiting for the Cluster md-rollout-4t1gbc/md-rollout-9ah1w8 to be deleted
STEP: Waiting for cluster md-rollout-9ah1w8 to be deleted
STEP: Deleting namespace used for hosting the "md-rollout" test spec
... skipping 48 lines ...
INFO: Waiting for MachineDeployment rollout for MachineDeploymentTopology "md-0" (class "default-worker") to complete.
STEP: Rebasing the Cluster to a ClusterClass with a modified label for MachineDeployments and wait for changes to be applied to the MachineDeployment objects
INFO: Waiting for MachineDeployment rollout to complete.
INFO: Waiting for MachineDeployment rollout for MachineDeploymentTopology "md-0" (class "default-worker") to complete.
STEP: PASSED!
STEP: Dumping logs from the "clusterclass-changes-t04cs1" workload cluster
Failed to get logs for machine clusterclass-changes-t04cs1-jqc56-hnsck, cluster clusterclass-changes-sr7yu2/clusterclass-changes-t04cs1: exit status 2
Failed to get logs for machine clusterclass-changes-t04cs1-md-0-clg8c-6457d4c54f-fkshx, cluster clusterclass-changes-sr7yu2/clusterclass-changes-t04cs1: exited with status: 2, &{%!s(*os.file=&{{{0 0 0} 37 {0} <nil> 0 1 true true true} /tmp/clusterclass-changes-t04cs1-md-0-clg8c-6457d4c54f-fkshx4043625418 <nil> false false false})}
Failed to get logs for machine clusterclass-changes-t04cs1-md-0-clg8c-7454cbf957-6q554, cluster clusterclass-changes-sr7yu2/clusterclass-changes-t04cs1: exit status 2
STEP: Dumping all the Cluster API resources in the "clusterclass-changes-sr7yu2" namespace
STEP: Deleting cluster clusterclass-changes-sr7yu2/clusterclass-changes-t04cs1
STEP: Deleting cluster clusterclass-changes-t04cs1
INFO: Waiting for the Cluster clusterclass-changes-sr7yu2/clusterclass-changes-t04cs1 to be deleted
STEP: Waiting for cluster clusterclass-changes-t04cs1 to be deleted
STEP: Deleting namespace used for hosting the "clusterclass-changes" test spec
... skipping 50 lines ...
INFO: Waiting for etcd to have the upgraded image tag
INFO: Waiting for Kubernetes versions of machines in MachineDeployment k8s-upgrade-and-conformance-8fgocs/k8s-upgrade-and-conformance-gnw8ul-md-0-gdfgq to be upgraded to v1.24.0
INFO: Ensuring all MachineDeployment Machines have upgraded kubernetes version v1.24.0
STEP: Waiting until nodes are ready
STEP: PASSED!
STEP: Dumping logs from the "k8s-upgrade-and-conformance-gnw8ul" workload cluster
Failed to get logs for machine k8s-upgrade-and-conformance-gnw8ul-md-0-gdfgq-6445b9f467-77gv9, cluster k8s-upgrade-and-conformance-8fgocs/k8s-upgrade-and-conformance-gnw8ul: exit status 2
Failed to get logs for machine k8s-upgrade-and-conformance-gnw8ul-md-0-gdfgq-6445b9f467-rbt4z, cluster k8s-upgrade-and-conformance-8fgocs/k8s-upgrade-and-conformance-gnw8ul: exit status 2
Failed to get logs for machine k8s-upgrade-and-conformance-gnw8ul-zv2l9-pd4zx, cluster k8s-upgrade-and-conformance-8fgocs/k8s-upgrade-and-conformance-gnw8ul: exit status 2
STEP: Dumping all the Cluster API resources in the "k8s-upgrade-and-conformance-8fgocs" namespace
STEP: Deleting cluster k8s-upgrade-and-conformance-8fgocs/k8s-upgrade-and-conformance-gnw8ul
STEP: Deleting cluster k8s-upgrade-and-conformance-gnw8ul
INFO: Waiting for the Cluster k8s-upgrade-and-conformance-8fgocs/k8s-upgrade-and-conformance-gnw8ul to be deleted
STEP: Waiting for cluster k8s-upgrade-and-conformance-gnw8ul to be deleted
STEP: Deleting namespace used for hosting the "k8s-upgrade-and-conformance" test spec
... skipping 52 lines ...
INFO: Waiting for etcd to have the upgraded image tag
INFO: Waiting for Kubernetes versions of machines in MachineDeployment k8s-upgrade-and-conformance-652q3s/k8s-upgrade-and-conformance-2pgxpt-md-0-xlr9k to be upgraded to v1.24.0
INFO: Ensuring all MachineDeployment Machines have upgraded kubernetes version v1.24.0
STEP: Waiting until nodes are ready
STEP: PASSED!
STEP: Dumping logs from the "k8s-upgrade-and-conformance-2pgxpt" workload cluster
Failed to get logs for machine k8s-upgrade-and-conformance-2pgxpt-ljjwx-n7c9w, cluster k8s-upgrade-and-conformance-652q3s/k8s-upgrade-and-conformance-2pgxpt: exit status 2
Failed to get logs for machine k8s-upgrade-and-conformance-2pgxpt-ljjwx-xxmmt, cluster k8s-upgrade-and-conformance-652q3s/k8s-upgrade-and-conformance-2pgxpt: exit status 2
Failed to get logs for machine k8s-upgrade-and-conformance-2pgxpt-ljjwx-z4v5k, cluster k8s-upgrade-and-conformance-652q3s/k8s-upgrade-and-conformance-2pgxpt: exit status 2
Failed to get logs for machine k8s-upgrade-and-conformance-2pgxpt-md-0-xlr9k-c7c85cdf9-mqqph, cluster k8s-upgrade-and-conformance-652q3s/k8s-upgrade-and-conformance-2pgxpt: exit status 2
STEP: Dumping all the Cluster API resources in the "k8s-upgrade-and-conformance-652q3s" namespace
STEP: Deleting cluster k8s-upgrade-and-conformance-652q3s/k8s-upgrade-and-conformance-2pgxpt
STEP: Deleting cluster k8s-upgrade-and-conformance-2pgxpt
INFO: Waiting for the Cluster k8s-upgrade-and-conformance-652q3s/k8s-upgrade-and-conformance-2pgxpt to be deleted
STEP: Waiting for cluster k8s-upgrade-and-conformance-2pgxpt to be deleted
STEP: Deleting namespace used for hosting the "k8s-upgrade-and-conformance" test spec
... skipping 4 lines ...
When upgrading a workload cluster using ClusterClass with a HA control plane using scale-in rollout
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade_test.go:101
  Should create and upgrade a workload cluster and run kubetest
  /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:115
------------------------------
STEP: Dumping logs from the bootstrap cluster
Failed to get logs for the bootstrap cluster node test-xla4am-control-plane: exit status 2
STEP: Tearing down the management cluster


Ran 17 of 20 Specs in 2864.600 seconds
SUCCESS! -- 17 Passed | 0 Failed | 0 Pending | 3 Skipped


Ginkgo ran 1 suite in 48m50.608095128s
Test Suite Passed
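The two summary lines above are Ginkgo's standard run report. A hypothetical Go sketch (not part of the suite) for extracting the counts from such a line, e.g. when aggregating results across jobs:

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// parseSummary pulls the counts out of a Ginkgo summary line such as:
//   SUCCESS! -- 17 Passed | 0 Failed | 0 Pending | 3 Skipped
func parseSummary(line string) (passed, failed, pending, skipped int, ok bool) {
	re := regexp.MustCompile(`(\d+) Passed \| (\d+) Failed \| (\d+) Pending \| (\d+) Skipped`)
	m := re.FindStringSubmatch(line)
	if m == nil {
		return 0, 0, 0, 0, false
	}
	atoi := func(s string) int { n, _ := strconv.Atoi(s); return n }
	return atoi(m[1]), atoi(m[2]), atoi(m[3]), atoi(m[4]), true
}

func main() {
	p, f, _, s, _ := parseSummary("SUCCESS! -- 17 Passed | 0 Failed | 0 Pending | 3 Skipped")
	fmt.Printf("passed=%d failed=%d skipped=%d\n", p, f, s)
	// prints "passed=17 failed=0 skipped=3"
}
```

Note that 17 + 3 accounts for all 20 specs reported on the "Ran 17 of 20 Specs" line: the 3 skipped specs are the ones never run.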
make: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e'
+ cleanup
... skipping 25 lines ...