PR shysank: v1alpha4 -> v1beta1 clusterctl upgrade test
Result ABORTED
Tests 0 failed / 0 succeeded
Started 2021-11-17 23:32
Elapsed 1h21m
Revision ee7a6ed67cb87d871a770045a4904a1eda93ad60
Refs 1810

No Test Failures!


Error lines from build-log.txt

... skipping 474 lines ...
Nov 17 23:46:13.054: INFO: INFO: Collecting boot logs for AzureMachine quick-start-7t6vah-md-0-xzrbl

Nov 17 23:46:13.517: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster quick-start-7t6vah in namespace quick-start-el9da8

Nov 17 23:46:47.861: INFO: INFO: Collecting boot logs for AzureMachine quick-start-7t6vah-md-win-qmtqq

Failed to get logs for machine quick-start-7t6vah-md-win-5d57685d8f-74nwn, cluster quick-start-el9da8/quick-start-7t6vah: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov 17 23:46:48.267: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster quick-start-7t6vah in namespace quick-start-el9da8

Nov 18 00:03:32.734: INFO: INFO: Collecting boot logs for AzureMachine quick-start-7t6vah-md-win-vlql6

Failed to get logs for machine quick-start-7t6vah-md-win-5d57685d8f-slwh4, cluster quick-start-el9da8/quick-start-7t6vah: [running command "get-service": wait: remote command exited without exit status or exit signal, [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]]
STEP: Dumping workload cluster quick-start-el9da8/quick-start-7t6vah kube-system pod logs
STEP: Fetching kube-system pod logs took 1.049154073s
STEP: Dumping workload cluster quick-start-el9da8/quick-start-7t6vah Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-5w7gf, container calico-node
STEP: Creating log watcher for controller kube-system/etcd-quick-start-7t6vah-control-plane-7fq8x, container etcd
STEP: Creating log watcher for controller kube-system/kube-controller-manager-quick-start-7t6vah-control-plane-7fq8x, container kube-controller-manager
... skipping 14 lines ...
STEP: Fetching activity logs took 1.035732987s
STEP: Dumping all the Cluster API resources in the "quick-start-el9da8" namespace
STEP: Deleting cluster quick-start-el9da8/quick-start-7t6vah
STEP: Deleting cluster quick-start-7t6vah
INFO: Waiting for the Cluster quick-start-el9da8/quick-start-7t6vah to be deleted
STEP: Waiting for cluster quick-start-7t6vah to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-n4l8v, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-6cph7, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-k8xw9, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-quick-start-7t6vah-control-plane-7fq8x, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-wh68z, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-k8xw9, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-quick-start-7t6vah-control-plane-7fq8x, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-28w9t, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vdkx5, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-quick-start-7t6vah-control-plane-7fq8x, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-csk4q, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-5w7gf, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-5vhth, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-zm82q, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-zm82q, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-q8vkv, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-quick-start-7t6vah-control-plane-7fq8x, container etcd: http2: client connection lost
STEP: Deleting namespace used for hosting the "quick-start" test spec
INFO: Deleting namespace quick-start-el9da8
STEP: Redacting sensitive information from logs


• [SLOW TEST:2000.546 seconds]
... skipping 74 lines ...
Nov 18 00:06:04.647: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-q8zq58-md-0-m86sz

Nov 18 00:06:05.052: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster kcp-upgrade-q8zq58 in namespace kcp-upgrade-8a2r6z

Nov 18 00:06:50.992: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-q8zq58-md-win-w5wtb

Failed to get logs for machine kcp-upgrade-q8zq58-md-win-6dcdf4f44f-6vvpr, cluster kcp-upgrade-8a2r6z/kcp-upgrade-q8zq58: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov 18 00:06:51.422: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster kcp-upgrade-q8zq58 in namespace kcp-upgrade-8a2r6z

Nov 18 00:07:24.060: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-q8zq58-md-win-z95w8

Failed to get logs for machine kcp-upgrade-q8zq58-md-win-6dcdf4f44f-ql4kn, cluster kcp-upgrade-8a2r6z/kcp-upgrade-q8zq58: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster kcp-upgrade-8a2r6z/kcp-upgrade-q8zq58 kube-system pod logs
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-cc7kx, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/calico-node-84klw, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-h9jwx, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-q8zq58-control-plane-k6kpv, container kube-controller-manager
STEP: Fetching kube-system pod logs took 909.563421ms
... skipping 20 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-windows-vjvhq, container calico-node-felix
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-gn48s, container coredns
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-q8zq58-control-plane-ltkn6, container etcd
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-q8zq58-control-plane-k6kpv, container etcd
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-q8zq58-control-plane-xq99b, container kube-scheduler
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-q8zq58-control-plane-xq99b, container etcd
STEP: Got error while iterating over activity logs for resource group capz-e2e-ehjfr2: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000614446s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-8a2r6z" namespace
STEP: Deleting cluster kcp-upgrade-8a2r6z/kcp-upgrade-q8zq58
STEP: Deleting cluster kcp-upgrade-q8zq58
INFO: Waiting for the Cluster kcp-upgrade-8a2r6z/kcp-upgrade-q8zq58 to be deleted
STEP: Waiting for cluster kcp-upgrade-q8zq58 to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-g9lcg, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-6k8d2, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-q8zq58-control-plane-ltkn6, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-q8zq58-control-plane-xq99b, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-vjvhq, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-q8zq58-control-plane-ltkn6, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-h9jwx, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-q8zq58-control-plane-ltkn6, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-4zg8r, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-q8zq58-control-plane-k6kpv, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-s6h6t, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-v6bfb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-kbm28, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-q8zq58-control-plane-ltkn6, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-q8zq58-control-plane-k6kpv, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-fchmj, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-q8zq58-control-plane-k6kpv, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-vjvhq, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-q8zq58-control-plane-xq99b, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5nxb4, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-g9lcg, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-q8zq58-control-plane-xq99b, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-q8zq58-control-plane-k6kpv, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-zlxqg, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-q8zq58-control-plane-xq99b, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-8a2r6z
STEP: Redacting sensitive information from logs


• [SLOW TEST:2320.579 seconds]
... skipping 56 lines ...
STEP: Dumping logs from the "kcp-upgrade-u3wz4m" workload cluster
STEP: Dumping workload cluster kcp-upgrade-7wod8i/kcp-upgrade-u3wz4m logs
Nov 17 23:52:59.111: INFO: INFO: Collecting logs for node kcp-upgrade-u3wz4m-control-plane-sssj8 in cluster kcp-upgrade-u3wz4m in namespace kcp-upgrade-7wod8i

Nov 17 23:55:08.664: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-u3wz4m-control-plane-sssj8

Failed to get logs for machine kcp-upgrade-u3wz4m-control-plane-f2tqr, cluster kcp-upgrade-7wod8i/kcp-upgrade-u3wz4m: dialing public load balancer at kcp-upgrade-u3wz4m-57efbbb7.westeurope.cloudapp.azure.com: dial tcp 20.76.41.98:22: connect: connection timed out
Nov 17 23:55:09.935: INFO: INFO: Collecting logs for node kcp-upgrade-u3wz4m-md-0-xkksj in cluster kcp-upgrade-u3wz4m in namespace kcp-upgrade-7wod8i

Nov 17 23:57:19.736: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-u3wz4m-md-0-xkksj

Failed to get logs for machine kcp-upgrade-u3wz4m-md-0-68d5b84b6f-lsb5b, cluster kcp-upgrade-7wod8i/kcp-upgrade-u3wz4m: dialing public load balancer at kcp-upgrade-u3wz4m-57efbbb7.westeurope.cloudapp.azure.com: dial tcp 20.76.41.98:22: connect: connection timed out
Nov 17 23:57:20.970: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster kcp-upgrade-u3wz4m in namespace kcp-upgrade-7wod8i

Nov 18 00:03:52.952: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-u3wz4m-md-win-26g6f

Failed to get logs for machine kcp-upgrade-u3wz4m-md-win-764fb976bb-gzzk6, cluster kcp-upgrade-7wod8i/kcp-upgrade-u3wz4m: dialing public load balancer at kcp-upgrade-u3wz4m-57efbbb7.westeurope.cloudapp.azure.com: dial tcp 20.76.41.98:22: connect: connection timed out
Nov 18 00:03:54.392: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster kcp-upgrade-u3wz4m in namespace kcp-upgrade-7wod8i

Nov 18 00:10:26.168: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-u3wz4m-md-win-m7w4h

Failed to get logs for machine kcp-upgrade-u3wz4m-md-win-764fb976bb-tm75h, cluster kcp-upgrade-7wod8i/kcp-upgrade-u3wz4m: dialing public load balancer at kcp-upgrade-u3wz4m-57efbbb7.westeurope.cloudapp.azure.com: dial tcp 20.76.41.98:22: connect: connection timed out
STEP: Dumping workload cluster kcp-upgrade-7wod8i/kcp-upgrade-u3wz4m kube-system pod logs
STEP: Fetching kube-system pod logs took 1.035631148s
STEP: Dumping workload cluster kcp-upgrade-7wod8i/kcp-upgrade-u3wz4m Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-c75nc, container coredns
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-u3wz4m-control-plane-sssj8, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-u3wz4m-control-plane-sssj8, container etcd
... skipping 8 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-75fbb, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-xngxg, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-windows-674nj, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-u3wz4m-control-plane-sssj8, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-xqc85, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-7hctx, container kube-proxy
STEP: Got error while iterating over activity logs for resource group capz-e2e-8ebtqh: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000347113s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-7wod8i" namespace
STEP: Deleting cluster kcp-upgrade-7wod8i/kcp-upgrade-u3wz4m
STEP: Deleting cluster kcp-upgrade-u3wz4m
INFO: Waiting for the Cluster kcp-upgrade-7wod8i/kcp-upgrade-u3wz4m to be deleted
STEP: Waiting for cluster kcp-upgrade-u3wz4m to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-u3wz4m-control-plane-sssj8, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-gvbl7, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-674nj, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-674nj, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-sgn2n, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-7hdgt, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-75fbb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-jhn9p, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-u3wz4m-control-plane-sssj8, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-u3wz4m-control-plane-sssj8, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-7hctx, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-sxbvr, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-u3wz4m-control-plane-sssj8, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-c75nc, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-jhn9p, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-xngxg, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-xqc85, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-7wod8i
STEP: Redacting sensitive information from logs


• [SLOW TEST:2334.156 seconds]
... skipping 7 lines ...
Running the Cluster API E2E tests Running the self-hosted spec 
  Should pivot the bootstrap cluster to a self-hosted cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:107

STEP: Creating namespace "self-hosted" for hosting the cluster
Nov 18 00:18:08.739: INFO: starting to create namespace for hosting the "self-hosted" test spec
2021/11/18 00:18:08 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-ia458a" using the "management" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-ia458a --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 1 --worker-machine-count 1 --flavor management
... skipping 73 lines ...
STEP: Fetching activity logs took 568.165724ms
STEP: Dumping all the Cluster API resources in the "self-hosted" namespace
STEP: Deleting all clusters in the self-hosted namespace
STEP: Deleting cluster self-hosted-ia458a
INFO: Waiting for the Cluster self-hosted/self-hosted-ia458a to be deleted
STEP: Waiting for cluster self-hosted-ia458a to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-gmmz7, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-self-hosted-ia458a-control-plane-6tqkm, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-self-hosted-ia458a-control-plane-6tqkm, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-zzg5x, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-bnkxj, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-b6rqt, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-self-hosted-ia458a-control-plane-6tqkm, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-self-hosted-ia458a-control-plane-6tqkm, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-2hdrz, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "self-hosted" test spec
INFO: Deleting namespace self-hosted
STEP: Checking if any resources are left over in Azure for spec "self-hosted"
STEP: Redacting sensitive information from logs
STEP: Redacting sensitive information from logs

... skipping 3 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:41
  Running the self-hosted spec
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:160
    Should pivot the bootstrap cluster to a self-hosted cluster
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:107
------------------------------
{"component":"entrypoint","file":"prow/entrypoint/run.go:169","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","severity":"error","time":"2021-11-18T00:39:24Z"}
++ early_exit_handler
++ '[' -n 157 ']'
++ kill -TERM 157
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 5 lines ...