Result: FAILURE
Tests: 1 failed / 12 succeeded
Started: 2022-09-18 03:20
Elapsed: 36m54s
Revision: release-1.0

Test Failures


capi-e2e When testing KCP upgrade in a HA cluster Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd (11m15s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capi\-e2e\sWhen\stesting\sKCP\supgrade\sin\sa\sHA\scluster\sShould\ssuccessfully\supgrade\sKubernetes\,\sDNS\,\skube\-proxy\,\sand\setcd$'
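The --ginkgo.focus argument is a regular expression matched against each spec's full description, i.e. the concatenation of its Describe/When/It strings; the escaped blob above is the failing spec's name with spaces rendered as \s and metacharacters backslashed. A minimal, self-contained illustration (the spec string is taken from the failure above; everything else is illustrative):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Full description of the failing spec, as reported above.
	spec := "capi-e2e When testing KCP upgrade in a HA cluster Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd"

	// regexp.QuoteMeta yields an equivalent (though not byte-identical)
	// pattern to the --ginkgo.focus value above: it leaves spaces literal
	// instead of writing them as \s.
	focus := regexp.QuoteMeta(spec) + "$"

	fmt.Println(regexp.MustCompile(focus).MatchString(spec)) // true
}
```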
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/kcp_upgrade.go:75
Timed out after 600.001s.
Expected
    <int>: 2
to equal
    <int>: 3
/home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/controlplane_helpers.go:108
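The failure is a Gomega Eventually timing out: the poll at controlplane_helpers.go:108 still saw 2 control plane Machines when 3 were expected, and gave up after its 600s window. A minimal sketch of that kind of wait, with a hypothetical helper name and wiring (the real implementation lives in test/framework/controlplane_helpers.go and runs under a Ginkgo suite with Gomega's fail handler registered):

```go
package framework

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	"sigs.k8s.io/controller-runtime/pkg/client"

	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
)

// waitForControlPlaneMachines polls until the number of control plane
// Machines for a cluster matches the desired replica count. If the count is
// still short when the window closes, Gomega fails with exactly the shape
// seen above: "Timed out after 600.001s. Expected <int>: 2 to equal <int>: 3".
func waitForControlPlaneMachines(ctx context.Context, c client.Client, namespace, clusterName string, replicas int) {
	Eventually(func() (int, error) {
		machines := &clusterv1.MachineList{}
		err := c.List(ctx, machines,
			client.InNamespace(namespace),
			client.MatchingLabels{
				// Well-known Cluster API labels identifying the cluster's
				// control plane Machines.
				"cluster.x-k8s.io/cluster-name":  clusterName,
				"cluster.x-k8s.io/control-plane": "",
			})
		// A non-nil error fails this attempt and Eventually keeps polling.
		return len(machines.Items), err
	}, 10*time.Minute, 10*time.Second).Should(Equal(replicas))
}
```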

Error lines from build-log.txt

... skipping 850 lines ...
STEP: Waiting for the machine pool workload nodes to exist
STEP: Scaling the machine pool down
INFO: Patching the replica count in Machine Pool machine-pool-l0y5qe/machine-pool-56z6uc-mp-0
STEP: Waiting for the machine pool workload nodes to exist
STEP: PASSED!
STEP: Dumping logs from the "machine-pool-56z6uc" workload cluster
Failed to get logs for machine machine-pool-56z6uc-control-plane-dmpj4, cluster machine-pool-l0y5qe/machine-pool-56z6uc: exit status 2
Failed to get logs for machine pool machine-pool-56z6uc-mp-0, cluster machine-pool-l0y5qe/machine-pool-56z6uc: exit status 2
STEP: Dumping all the Cluster API resources in the "machine-pool-l0y5qe" namespace
STEP: Deleting cluster machine-pool-l0y5qe/machine-pool-56z6uc
STEP: Deleting cluster machine-pool-56z6uc
INFO: Waiting for the Cluster machine-pool-l0y5qe/machine-pool-56z6uc to be deleted
STEP: Waiting for cluster machine-pool-56z6uc to be deleted
STEP: Deleting namespace used for hosting the "machine-pool" test spec
... skipping 44 lines ...
INFO: Waiting for rolling upgrade to start.
INFO: Waiting for MachineDeployment rolling upgrade to start
INFO: Waiting for rolling upgrade to complete.
INFO: Waiting for MachineDeployment rolling upgrade to complete
STEP: PASSED!
STEP: Dumping logs from the "md-rollout-egqfrn" workload cluster
Failed to get logs for machine md-rollout-egqfrn-control-plane-d5hlv, cluster md-rollout-acemgj/md-rollout-egqfrn: exit status 2
Failed to get logs for machine md-rollout-egqfrn-md-0-b4677d785-48c6q, cluster md-rollout-acemgj/md-rollout-egqfrn: exit status 2
STEP: Dumping all the Cluster API resources in the "md-rollout-acemgj" namespace
STEP: Deleting cluster md-rollout-acemgj/md-rollout-egqfrn
STEP: Deleting cluster md-rollout-egqfrn
INFO: Waiting for the Cluster md-rollout-acemgj/md-rollout-egqfrn to be deleted
STEP: Waiting for cluster md-rollout-egqfrn to be deleted
STEP: Deleting namespace used for hosting the "md-rollout" test spec
... skipping 52 lines ...
STEP: Waiting for deployment node-drain-4v2ybb-unevictable-workload/unevictable-pod-vhd to be available
STEP: Scale down the controlplane of the workload cluster and make sure that nodes running the workload can be deleted even if the draining process is blocked.
INFO: Scaling controlplane node-drain-4v2ybb/node-drain-7ceqax-control-plane from 0xc000ad69a0 to 1 replicas
INFO: Waiting for correct number of replicas to exist
STEP: PASSED!
STEP: Dumping logs from the "node-drain-7ceqax" workload cluster
Failed to get logs for machine node-drain-7ceqax-control-plane-fdfnr, cluster node-drain-4v2ybb/node-drain-7ceqax: exit status 2
STEP: Dumping all the Cluster API resources in the "node-drain-4v2ybb" namespace
STEP: Deleting cluster node-drain-4v2ybb/node-drain-7ceqax
STEP: Deleting cluster node-drain-7ceqax
INFO: Waiting for the Cluster node-drain-4v2ybb/node-drain-7ceqax to be deleted
STEP: Waiting for cluster node-drain-7ceqax to be deleted
STEP: Deleting namespace used for hosting the "node-drain" test spec
... skipping 38 lines ...
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
INFO: Waiting for the machine pools to be provisioned
STEP: PASSED!
STEP: Dumping logs from the "quick-start-296bgb" workload cluster
Failed to get logs for machine quick-start-296bgb-control-plane-drqq6, cluster quick-start-qkyr2n/quick-start-296bgb: exit status 2
Failed to get logs for machine quick-start-296bgb-md-0-7fd8fd8897-7c9h6, cluster quick-start-qkyr2n/quick-start-296bgb: exit status 2
STEP: Dumping all the Cluster API resources in the "quick-start-qkyr2n" namespace
STEP: Deleting cluster quick-start-qkyr2n/quick-start-296bgb
STEP: Deleting cluster quick-start-296bgb
INFO: Waiting for the Cluster quick-start-qkyr2n/quick-start-296bgb to be deleted
STEP: Waiting for cluster quick-start-296bgb to be deleted
STEP: Deleting namespace used for hosting the "quick-start" test spec
... skipping 101 lines ...
INFO: Waiting for the Cluster clusterctl-upgrade/clusterctl-upgrade-hid6xj to be deleted
STEP: Waiting for cluster clusterctl-upgrade-hid6xj to be deleted
STEP: Deleting cluster clusterctl-upgrade and clusterctl-upgrade-ocft6z
STEP: Deleting namespace used for hosting the "clusterctl-upgrade" test
INFO: Deleting namespace clusterctl-upgrade
STEP: Dumping logs from the "clusterctl-upgrade-ocft6z" workload cluster
Failed to get logs for machine clusterctl-upgrade-ocft6z-control-plane-8z8hj, cluster clusterctl-upgrade-hukso1/clusterctl-upgrade-ocft6z: exit status 2
Failed to get logs for machine clusterctl-upgrade-ocft6z-md-0-74bb547675-sjd9c, cluster clusterctl-upgrade-hukso1/clusterctl-upgrade-ocft6z: exit status 2
STEP: Dumping all the Cluster API resources in the "clusterctl-upgrade-hukso1" namespace
STEP: Deleting cluster clusterctl-upgrade-hukso1/clusterctl-upgrade-ocft6z
STEP: Deleting cluster clusterctl-upgrade-ocft6z
INFO: Waiting for the Cluster clusterctl-upgrade-hukso1/clusterctl-upgrade-ocft6z to be deleted
STEP: Waiting for cluster clusterctl-upgrade-ocft6z to be deleted
STEP: Deleting namespace used for hosting the "clusterctl-upgrade" test spec
... skipping 51 lines ...
STEP: Ensuring kube-proxy has the correct image
INFO: Waiting for CoreDNS to have the upgraded image tag
STEP: Ensuring CoreDNS has the correct image
INFO: Waiting for etcd to have the upgraded image tag
STEP: PASSED!
STEP: Dumping logs from the "kcp-upgrade-d3ek2n" workload cluster
Failed to get logs for machine kcp-upgrade-d3ek2n-control-plane-bf989, cluster kcp-upgrade-t3eejp/kcp-upgrade-d3ek2n: exit status 2
Failed to get logs for machine kcp-upgrade-d3ek2n-control-plane-kfwjn, cluster kcp-upgrade-t3eejp/kcp-upgrade-d3ek2n: exit status 2
Failed to get logs for machine kcp-upgrade-d3ek2n-control-plane-qxmkc, cluster kcp-upgrade-t3eejp/kcp-upgrade-d3ek2n: exit status 2
Failed to get logs for machine kcp-upgrade-d3ek2n-md-0-7cd8697f8d-sbsm6, cluster kcp-upgrade-t3eejp/kcp-upgrade-d3ek2n: exit status 2
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-t3eejp" namespace
STEP: Deleting cluster kcp-upgrade-t3eejp/kcp-upgrade-d3ek2n
STEP: Deleting cluster kcp-upgrade-d3ek2n
INFO: Waiting for the Cluster kcp-upgrade-t3eejp/kcp-upgrade-d3ek2n to be deleted
STEP: Waiting for cluster kcp-upgrade-d3ek2n to be deleted
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
... skipping 35 lines ...

STEP: Waiting for the control plane to be ready
STEP: Taking stable ownership of the Machines
STEP: Taking ownership of the cluster's PKI material
STEP: PASSED!
STEP: Dumping logs from the "kcp-adoption-m6pbrk" workload cluster
Failed to get logs for machine kcp-adoption-m6pbrk-control-plane-0, cluster kcp-adoption-75kggh/kcp-adoption-m6pbrk: exit status 2
STEP: Dumping all the Cluster API resources in the "kcp-adoption-75kggh" namespace
STEP: Deleting cluster kcp-adoption-75kggh/kcp-adoption-m6pbrk
STEP: Deleting cluster kcp-adoption-m6pbrk
INFO: Waiting for the Cluster kcp-adoption-75kggh/kcp-adoption-m6pbrk to be deleted
STEP: Waiting for cluster kcp-adoption-m6pbrk to be deleted
STEP: Deleting namespace used for hosting the "kcp-adoption" test spec
... skipping 44 lines ...
INFO: Waiting for correct number of replicas to exist
STEP: Scaling the MachineDeployment down to 1
INFO: Scaling machine deployment md-scale-ec2qoh/md-scale-sypfwb-md-0 from 3 to 1 replicas
INFO: Waiting for correct number of replicas to exist
STEP: PASSED!
STEP: Dumping logs from the "md-scale-sypfwb" workload cluster
Failed to get logs for machine md-scale-sypfwb-control-plane-wcbcz, cluster md-scale-ec2qoh/md-scale-sypfwb: exit status 2
Failed to get logs for machine md-scale-sypfwb-md-0-dfc7dd8b8-jz2cw, cluster md-scale-ec2qoh/md-scale-sypfwb: exit status 2
STEP: Dumping all the Cluster API resources in the "md-scale-ec2qoh" namespace
STEP: Deleting cluster md-scale-ec2qoh/md-scale-sypfwb
STEP: Deleting cluster md-scale-sypfwb
INFO: Waiting for the Cluster md-scale-ec2qoh/md-scale-sypfwb to be deleted
STEP: Waiting for cluster md-scale-sypfwb to be deleted
STEP: Deleting namespace used for hosting the "md-scale" test spec
... skipping 47 lines ...
STEP: Ensuring kube-proxy has the correct image
INFO: Waiting for CoreDNS to have the upgraded image tag
STEP: Ensuring CoreDNS has the correct image
INFO: Waiting for etcd to have the upgraded image tag
STEP: PASSED!
STEP: Dumping logs from the "kcp-upgrade-y6ty9x" workload cluster
Failed to get logs for machine kcp-upgrade-y6ty9x-control-plane-6sxsp, cluster kcp-upgrade-vam5ly/kcp-upgrade-y6ty9x: exit status 2
Failed to get logs for machine kcp-upgrade-y6ty9x-md-0-54444856d-cj4g7, cluster kcp-upgrade-vam5ly/kcp-upgrade-y6ty9x: exit status 2
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-vam5ly" namespace
STEP: Deleting cluster kcp-upgrade-vam5ly/kcp-upgrade-y6ty9x
STEP: Deleting cluster kcp-upgrade-y6ty9x
INFO: Waiting for the Cluster kcp-upgrade-vam5ly/kcp-upgrade-y6ty9x to be deleted
STEP: Waiting for cluster kcp-upgrade-y6ty9x to be deleted
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
... skipping 46 lines ...
Patching MachineHealthCheck unhealthy condition to one of the nodes
INFO: Patching the node condition to the node
Waiting for remediation
Waiting until the node with unhealthy node condition is remediated
STEP: PASSED!
STEP: Dumping logs from the "mhc-remediation-awmgl4" workload cluster
Failed to get logs for machine mhc-remediation-awmgl4-control-plane-ttvhz, cluster mhc-remediation-5zilyp/mhc-remediation-awmgl4: exit status 2
Failed to get logs for machine mhc-remediation-awmgl4-md-0-6bb5649b78-mz6b9, cluster mhc-remediation-5zilyp/mhc-remediation-awmgl4: exit status 2
STEP: Dumping all the Cluster API resources in the "mhc-remediation-5zilyp" namespace
STEP: Deleting cluster mhc-remediation-5zilyp/mhc-remediation-awmgl4
STEP: Deleting cluster mhc-remediation-awmgl4
INFO: Waiting for the Cluster mhc-remediation-5zilyp/mhc-remediation-awmgl4 to be deleted
STEP: Waiting for cluster mhc-remediation-awmgl4 to be deleted
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
... skipping 70 lines ...
STEP: Ensure API servers are stable before doing move
STEP: Moving the cluster back to bootstrap
STEP: Moving workload clusters
INFO: Waiting for the cluster to be reconciled after moving back to bootstrap
STEP: Waiting for cluster to enter the provisioned phase
STEP: Dumping logs from the "self-hosted-3j365a" workload cluster
Failed to get logs for machine self-hosted-3j365a-control-plane-qg47m, cluster self-hosted-9lbumm/self-hosted-3j365a: exit status 2
Failed to get logs for machine self-hosted-3j365a-md-0-964bd64c-dkq24, cluster self-hosted-9lbumm/self-hosted-3j365a: exit status 2
STEP: Dumping all the Cluster API resources in the "self-hosted-9lbumm" namespace
STEP: Deleting cluster self-hosted-9lbumm/self-hosted-3j365a
STEP: Deleting cluster self-hosted-3j365a
INFO: Waiting for the Cluster self-hosted-9lbumm/self-hosted-3j365a to be deleted
STEP: Waiting for cluster self-hosted-3j365a to be deleted
STEP: Deleting namespace used for hosting the "self-hosted" test spec
... skipping 48 lines ...
Patching MachineHealthCheck unhealthy condition to one of the nodes
INFO: Patching the node condition to the node
Waiting for remediation
Waiting until the node with unhealthy node condition is remediated
STEP: PASSED!
STEP: Dumping logs from the "mhc-remediation-e3yvr1" workload cluster
Failed to get logs for machine mhc-remediation-e3yvr1-control-plane-6vch7, cluster mhc-remediation-8v2406/mhc-remediation-e3yvr1: exit status 2
Failed to get logs for machine mhc-remediation-e3yvr1-control-plane-kfldc, cluster mhc-remediation-8v2406/mhc-remediation-e3yvr1: exit status 2
Failed to get logs for machine mhc-remediation-e3yvr1-control-plane-mvwcr, cluster mhc-remediation-8v2406/mhc-remediation-e3yvr1: exit status 2
Failed to get logs for machine mhc-remediation-e3yvr1-md-0-5dbf66c4c5-p7krc, cluster mhc-remediation-8v2406/mhc-remediation-e3yvr1: exit status 2
STEP: Dumping all the Cluster API resources in the "mhc-remediation-8v2406" namespace
STEP: Deleting cluster mhc-remediation-8v2406/mhc-remediation-e3yvr1
STEP: Deleting cluster mhc-remediation-e3yvr1
INFO: Waiting for the Cluster mhc-remediation-8v2406/mhc-remediation-e3yvr1 to be deleted
STEP: Waiting for cluster mhc-remediation-e3yvr1 to be deleted
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
... skipping 34 lines ...
INFO: Waiting for the first control plane machine managed by kcp-upgrade-l5v9gn/kcp-upgrade-0t19am-control-plane to be provisioned
STEP: Waiting for one control plane node to exist
INFO: Waiting for control plane to be ready
INFO: Waiting for the remaining control plane machines managed by kcp-upgrade-l5v9gn/kcp-upgrade-0t19am-control-plane to be provisioned
STEP: Waiting for all control plane nodes to exist
STEP: Dumping logs from the "kcp-upgrade-0t19am" workload cluster
Failed to get logs for machine kcp-upgrade-0t19am-control-plane-9tnkf, cluster kcp-upgrade-l5v9gn/kcp-upgrade-0t19am: error creating container exec: Error response from daemon: Container 32689442f9d6ff54ef858ea2e7cc2b7a26e0f8ab9bde4c4d3e7d47b849ac5095 is not running
Failed to get logs for machine kcp-upgrade-0t19am-control-plane-tr7sx, cluster kcp-upgrade-l5v9gn/kcp-upgrade-0t19am: exit status 2
Failed to get logs for machine kcp-upgrade-0t19am-control-plane-zrcmn, cluster kcp-upgrade-l5v9gn/kcp-upgrade-0t19am: exit status 2
Failed to get logs for machine kcp-upgrade-0t19am-md-0-676fb44f69-9dkn7, cluster kcp-upgrade-l5v9gn/kcp-upgrade-0t19am: exit status 2
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-l5v9gn" namespace
STEP: Deleting cluster kcp-upgrade-l5v9gn/kcp-upgrade-0t19am
STEP: Deleting cluster kcp-upgrade-0t19am
INFO: Waiting for the Cluster kcp-upgrade-l5v9gn/kcp-upgrade-0t19am to be deleted
STEP: Waiting for cluster kcp-upgrade-0t19am to be deleted
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
... skipping 52 lines ...
  testing.tRunner(0xc000602480, 0x20fcce0)
  	/usr/local/go/src/testing/testing.go:1203 +0xe5
  created by testing.(*T).Run
  	/usr/local/go/src/testing/testing.go:1248 +0x2b3
------------------------------
STEP: Dumping logs from the bootstrap cluster
Failed to get logs for the bootstrap cluster node test-wwav93-control-plane: exit status 2
STEP: Tearing down the management cluster
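The recurring "Failed to get logs ... : exit status 2" lines throughout the log come from the log-collection step of each teardown: the framework shells out to copy logs off every node, and a non-zero exit of the underlying command surfaces verbatim as an *exec.ExitError with no stderr attached. The "container ... is not running" variant on kcp-upgrade-0t19am-control-plane-9tnkf is the Docker daemon refusing to open an exec session on a stopped node container. A rough sketch of such a collector; the command line and helper name are illustrative assumptions, not the framework's actual API:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// collectNodeLogs is a hypothetical stand-in for a Docker-based log
// collector: it execs tar inside a node container and writes the archive
// locally. If the command exits non-zero, err.Error() is just
// "exit status 2", which is all the build log above shows; capturing
// stderr as well would make these failures diagnosable.
func collectNodeLogs(container, dest string) error {
	cmd := exec.Command("docker", "exec", container,
		"tar", "-C", "/var/log", "-cf", "-", ".")
	out, err := cmd.Output() // stdout is the tar stream
	if err != nil {
		return fmt.Errorf("failed to get logs for %s: %w", container, err)
	}
	return os.WriteFile(dest, out, 0o644)
}

func main() {
	if err := collectNodeLogs("test-wwav93-control-plane", "/tmp/node-logs.tar"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```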



Summarizing 1 Failure:

[Fail] When testing KCP upgrade in a HA cluster [It] Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd 
/home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/controlplane_helpers.go:108

Ran 13 of 15 Specs in 1921.004 seconds
FAIL! -- 12 Passed | 1 Failed | 0 Pending | 2 Skipped


Ginkgo ran 1 suite in 33m4.190996567s
Test Suite Failed
make: *** [Makefile:107: run] Error 1
make: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e'
+ cleanup
++ pgrep -f 'docker events'
+ kill 14084
++ pgrep -f 'ctr -n moby events'
+ kill 14085
... skipping 21 lines ...