Re: Proposal for Remote Execution
- From: Jim MacArthur <jim.macarthur@codethink.co.uk>
- To: buildstream-list@gnome.org
- Cc: Jürg Billeter <juerg.billeter@codethink.co.uk>
- Subject: Re: Proposal for Remote Execution
- Date: Thu, 14 Jun 2018 13:34:14 +0100
On 11/04/18 21:37, Jürg Billeter wrote:
Remote Execution
~~~~~~~~~~~~~~~~
With CAS support in place in the artifact cache, the virtual file system
API, and the FUSE layer, actual remote execution support can be added.
The core part on the client side is to implement a remote execution backend
for the Sandbox class.
As part of the build job, Sandbox.run() will upload missing blobs from the
local CAS to the remote CAS and submit an action to the Execution service.
The output files will not be downloaded immediately, avoiding unnecessary
use of network bandwidth; however, their digests will be added to the
virtual root directory of the sandbox as appropriate.
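The upload step described above amounts to a digest-keyed set difference between the local and remote CAS. A minimal Python sketch, with plain dicts standing in for the real CAS services (the names `digest`, `upload_missing_blobs`, `local_cas`, and `remote_cas` are illustrative, not BuildStream or REAPI identifiers):

```python
import hashlib


def digest(data: bytes) -> str:
    """Content digest as used by a CAS: the hash of the blob's bytes."""
    return hashlib.sha256(data).hexdigest()


def upload_missing_blobs(local_cas: dict, remote_cas: dict, needed: list) -> int:
    """Copy to the remote CAS only those blobs it does not already hold.

    Returns the number of blobs actually transferred; blobs already
    present remotely cost nothing, which is the point of content
    addressing here.
    """
    uploaded = 0
    for d in needed:
        if d not in remote_cas:
            remote_cas[d] = local_cas[d]
            uploaded += 1
    return uploaded
```

In the real implementation the "is it missing?" check would be a FindMissingBlobs-style query against the remote service rather than a dict lookup, but the dedup logic is the same.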
Trying to implement this part of the plan has raised a few questions
about what gets transmitted and received.
The only way I can imagine this working is if the initial sandbox is
uploaded before a build starts. I don't think the remote workers can
use individual source objects (which would save a lot of space), since
they won't know how to layer the sources. So we'd end up uploading a
custom tree for each command, and the action would simply be "run
commands x, y, z on tree abc456 and store the resulting key/ref"
(abc456 being the input_root_digest field of an Action).
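That "run commands x, y, z on tree abc456" model can be sketched as follows, with a toy Merkle-style digest standing in for the REAPI Directory/Digest machinery; `tree_digest` and `make_action` are hypothetical names for illustration, and the JSON serialization is just a stand-in for protobuf encoding:

```python
import hashlib
import json


def tree_digest(tree: dict) -> str:
    """Merkle-style digest of a directory tree.

    Files are bytes, subdirectories are nested dicts; a directory's
    digest covers the digests of its children, so identical subtrees
    share a digest regardless of where they appear.
    """
    entries = {}
    for name, node in sorted(tree.items()):
        if isinstance(node, dict):
            entries[name] = ["dir", tree_digest(node)]
        else:
            entries[name] = ["file", hashlib.sha256(node).hexdigest()]
    return hashlib.sha256(json.dumps(entries).encode()).hexdigest()


def make_action(commands: list, input_root: dict) -> dict:
    """An action pairs the commands to run with the digest of the
    input tree they should run on (cf. input_root_digest)."""
    return {"commands": commands, "input_root_digest": tree_digest(input_root)}
```

Because the action only carries the tree's digest, two builds with identical inputs produce identical actions, which is what makes remote caching of results possible.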
Does that match with your understanding?
If so, do we run this for all commands (e.g. configure-commands as well
as build-commands)? And would it make sense for the client-side remote
execution work to be a variant of the existing Sandbox class?
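One way to read "a variant of the existing Sandbox class" is a subclass that overrides run() to submit the command remotely instead of executing it locally. A minimal sketch, with every name hypothetical (this simplified Sandbox is not BuildStream's actual class):

```python
class Sandbox:
    """Simplified stand-in for BuildStream's Sandbox class."""

    def run(self, command):
        raise NotImplementedError("subclasses implement execution")


class SandboxRemote(Sandbox):
    """Variant that hands the command to a remote execution service.

    Callers use run() exactly as with a local sandbox; whether the
    command executed locally or remotely is invisible to them.
    """

    def __init__(self, execution_service):
        self.execution = execution_service

    def run(self, command):
        # Submit an action and block on the result; blob upload and
        # digest bookkeeping would happen around this call.
        return self.execution.submit(command)
```

The appeal of this shape is that elements already talk to Sandbox through run(), so the remote backend slots in without changing element code.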
Jim