PR | sbueringer: [release-1.2] 🌱 Bump kpromo to v3.5.1
Result | ABORTED
Tests | 1 failed / 1 succeeded
Started |
Elapsed | 15m30s
Revision | 597a19663377ef7cb112f9dcf06ea8e2678a6721
Refs | 8305
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capi\-e2e\sWhen\supgrading\sa\sworkload\scluster\susing\sClusterClass\swith\sRuntimeSDK\s\[PR\-Informing\]\s\[ClusterClass\]\sShould\screate\,\supgrade\sand\sdelete\sa\sworkload\scluster$'
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade_runtimesdk.go:129
Timed out after 60.001s.
Failed to list MachineList object for Cluster k8s-upgrade-with-runtimesdk-xe2ilx/k8s-upgrade-with-runtimesdk-kfo1gz
Expected success, but got an error:
    <*url.Error | 0xc0007afd40>: {
        Op: "Get",
        URL: "https://127.0.0.1:41999/apis/cluster.x-k8s.io/v1beta1/namespaces/k8s-upgrade-with-runtimesdk-xe2ilx/machines?labelSelector=cluster.x-k8s.io%2Fcontrol-plane",
        Err: <*net.OpError | 0xc00085d950>{
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: <*net.TCPAddr | 0xc000b69470>{
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1],
                Port: 41999,
                Zone: "",
            },
            Err: <*os.SyscallError | 0xc000aa9800>{
                Syscall: "connect",
                Err: <syscall.Errno>0x6f,
            },
        },
    }
Get "https://127.0.0.1:41999/apis/cluster.x-k8s.io/v1beta1/namespaces/k8s-upgrade-with-runtimesdk-xe2ilx/machines?labelSelector=cluster.x-k8s.io%2Fcontrol-plane": dial tcp 127.0.0.1:41999: connect: connection refused
/home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/machine_helpers.go:111
from junit.e2e_suite.2.xml
STEP: Creating a namespace for hosting the "k8s-upgrade-with-runtimesdk" test spec
INFO: Creating namespace k8s-upgrade-with-runtimesdk-xe2ilx
INFO: Creating event watcher for namespace "k8s-upgrade-with-runtimesdk-xe2ilx"
STEP: Deploy Test Extension
serviceaccount/test-extension created
role.rbac.authorization.k8s.io/test-extension created
rolebinding.rbac.authorization.k8s.io/test-extension created
service/webhook-service created
deployment.apps/test-extension created
certificate.cert-manager.io/serving-cert created
issuer.cert-manager.io/selfsigned-issuer created
STEP: Deploy Test Extension ExtensionConfig and ConfigMap
STEP: Wait for test extension deployment to be available
STEP: Waiting for deployment k8s-upgrade-with-runtimesdk-xe2ilx/test-extension to be available
STEP: Watch Deployment logs of test extension
INFO: Creating log watcher for controller k8s-upgrade-with-runtimesdk-xe2ilx/test-extension, pod test-extension-ff8f49699-b662x, container extension
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "k8s-upgrade-with-runtimesdk-kfo1gz" using the "upgrades-runtimesdk" template (Kubernetes v1.25.3, 1 control-plane machine, 2 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster k8s-upgrade-with-runtimesdk-kfo1gz --infrastructure (default) --kubernetes-version v1.25.3 --control-plane-machine-count 1 --worker-machine-count 2 --flavor upgrades-runtimesdk
INFO: Applying the cluster template yaml to the cluster
clusterclass.cluster.x-k8s.io/quick-start-runtimesdk created
dockerclustertemplate.infrastructure.cluster.x-k8s.io/quick-start-cluster created
kubeadmcontrolplanetemplate.controlplane.cluster.x-k8s.io/quick-start-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-default-worker-machinetemplate created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/quick-start-default-worker-bootstraptemplate created
configmap/cni-k8s-upgrade-with-runtimesdk-kfo1gz-crs-0 created
clusterresourceset.addons.cluster.x-k8s.io/k8s-upgrade-with-runtimesdk-kfo1gz-crs-0 created
kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-with-runtimesdk-kfo1gz-mp-0-config created
cluster.cluster.x-k8s.io/k8s-upgrade-with-runtimesdk-kfo1gz created
machinepool.cluster.x-k8s.io/k8s-upgrade-with-runtimesdk-kfo1gz-mp-0 created
dockermachinepool.infrastructure.cluster.x-k8s.io/k8s-upgrade-with-runtimesdk-kfo1gz-dmp-0 created
INFO: Calling PreWaitForCluster
INFO: Blocking with BeforeClusterCreate hook
STEP: Setting BeforeClusterCreate response to Status:Success to unblock the reconciliation
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by k8s-upgrade-with-runtimesdk-xe2ilx/k8s-upgrade-with-runtimesdk-kfo1gz-fvdmc to be provisioned
STEP: Waiting for one control plane node to exist
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane k8s-upgrade-with-runtimesdk-xe2ilx/k8s-upgrade-with-runtimesdk-kfo1gz-fvdmc to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
STEP: Checking all the control plane machines are in the expected failure domains
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Checking all the machines controlled by k8s-upgrade-with-runtimesdk-kfo1gz-md-0-676k6 are in the "fd4" failure domain
INFO: Waiting for the machine pools to be provisioned
STEP: Waiting for the machine pool workload nodes
STEP: Upgrading the Cluster topology
INFO: Patching the new Kubernetes version to Cluster topology
INFO: Calling PreWaitForControlPlaneToBeUpgraded
INFO: Blocking with BeforeClusterUpgrade hook
STEP: Setting BeforeClusterUpgrade response to Status:Success to unblock the reconciliation
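The log above shows the Runtime SDK lifecycle-hook flow the test exercises: the test extension "blocks" on BeforeClusterCreate/BeforeClusterUpgrade, and the test then sets the hook response to Status:Success to let reconciliation proceed. A blocking response is one with a positive retry-after, so the controller keeps re-calling the hook until it is told to proceed. The following is a self-contained conceptual sketch of that pattern, not the actual Cluster API Runtime SDK types (`HookResponse` and `blockingHook` are invented for illustration):

```go
package main

import "fmt"

// HookResponse is a stand-in for a Runtime SDK lifecycle hook response:
// a successful response with RetryAfterSeconds > 0 blocks reconciliation,
// and the controller retries the hook after that interval.
type HookResponse struct {
	Status            string
	RetryAfterSeconds int
}

// blockingHook mimics the test extension: it blocks until the test
// flips its configured response to "proceed".
type blockingHook struct {
	unblocked bool
}

func (h *blockingHook) BeforeClusterUpgrade() HookResponse {
	if !h.unblocked {
		// Still blocking: ask the controller to retry in 5 seconds.
		return HookResponse{Status: "Success", RetryAfterSeconds: 5}
	}
	// Unblocked: reconciliation may continue with the upgrade.
	return HookResponse{Status: "Success", RetryAfterSeconds: 0}
}

func main() {
	h := &blockingHook{}
	fmt.Println(h.BeforeClusterUpgrade().RetryAfterSeconds) // 5: reconciliation paused
	h.unblocked = true                                      // the test "sets the response to Status:Success"
	fmt.Println(h.BeforeClusterUpgrade().RetryAfterSeconds) // 0: upgrade proceeds
}
```

Note that the job was aborted while still blocked before the control-plane upgrade, so the final "Setting BeforeClusterUpgrade response ... to unblock" step never got a chance to take effect.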
capi-e2e When following the Cluster API quick-start with ClusterClass [PR-Informing] [ClusterClass] Should create a workload cluster
capi-e2e When following the Cluster API quick-start [PR-Blocking] Should create a workload cluster
capi-e2e When following the Cluster API quick-start with IPv6 [IPv6] [PR-Informing] Should create a workload cluster
capi-e2e When following the Cluster API quick-start with Ignition Should create a workload cluster
capi-e2e When testing Cluster API working on self-hosted clusters Should pivot the bootstrap cluster to a self-hosted cluster
capi-e2e When testing Cluster API working on self-hosted clusters using ClusterClass [ClusterClass] Should pivot the bootstrap cluster to a self-hosted cluster
capi-e2e When testing ClusterClass changes [ClusterClass] Should successfully rollout the managed topology upon changes to the ClusterClass
capi-e2e When testing K8S conformance [Conformance] Should create a workload cluster and run kubetest
capi-e2e When testing KCP adoption Should adopt up-to-date control plane Machines without modification
capi-e2e When testing MachineDeployment rolling upgrades Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capi-e2e When testing MachineDeployment scale out/in Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capi-e2e When testing MachinePools Should successfully create a cluster with machine pool machines
capi-e2e When testing clusterctl upgrades [clusterctl-Upgrade] Should create a management cluster and then upgrade all the providers
capi-e2e When testing node drain timeout A node should be forcefully removed if it cannot be drained in time
capi-e2e When testing unhealthy machines remediation Should successfully trigger KCP remediation
capi-e2e When testing unhealthy machines remediation Should successfully trigger machine deployment remediation
capi-e2e When upgrading a workload cluster using ClusterClass [ClusterClass] Should create and upgrade a workload cluster and eventually run kubetest
capi-e2e When upgrading a workload cluster using ClusterClass and testing K8S conformance [Conformance] [K8s-Upgrade] [ClusterClass] Should create and upgrade a workload cluster and eventually run kubetest
capi-e2e When upgrading a workload cluster using ClusterClass with a HA control plane [ClusterClass] Should create and upgrade a workload cluster and eventually run kubetest
capi-e2e When upgrading a workload cluster using ClusterClass with a HA control plane using scale-in rollout [ClusterClass] Should create and upgrade a workload cluster and eventually run kubetest
... skipping 980 lines ...
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Checking all the machines controlled by quick-start-1r6obn-md-0-6x82n are in the "fd4" failure domain
INFO: Waiting for the machine pools to be provisioned
STEP: PASSED!
STEP: Dumping logs from the "quick-start-1r6obn" workload cluster
Failed to get logs for machine quick-start-1r6obn-gzpvk-7kkp5, cluster quick-start-oumspd/quick-start-1r6obn: exit status 2
Failed to get logs for machine quick-start-1r6obn-md-0-6x82n-7b8b46d4f5-zbtgb, cluster quick-start-oumspd/quick-start-1r6obn: exit status 2
STEP: Dumping all the Cluster API resources in the "quick-start-oumspd" namespace
STEP: Deleting cluster quick-start-oumspd/quick-start-1r6obn
STEP: Deleting cluster quick-start-1r6obn
INFO: Waiting for the Cluster quick-start-oumspd/quick-start-1r6obn to be deleted
STEP: Waiting for cluster quick-start-1r6obn to be deleted
STEP: Deleting namespace used for hosting the "quick-start" test spec
... skipping 3 lines ...
• [SLOW TEST:115.928 seconds] When following the Cluster API quick-start with ClusterClass [PR-Informing] [ClusterClass]
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/quick_start_test.go:39
  Should create a workload cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/quick_start.go:78
------------------------------
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:168","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","severity":"error","time":"2023-03-17T10:39:39Z"}
++ early_exit_handler
++ '[' -n 179 ']'
++ kill -TERM 179
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ cleanup
... skipping 12 lines ...