> On 11/04/18 21:37, Jürg Billeter wrote:
> > Remote Execution
> > ~~~~~~~~~~~~~~~~
> > With CAS support in place in the artifact cache, the virtual file system
> > API, and the FUSE layer, actual remote execution support can be added.
> > The core part on the client side is to implement a remote execution backend
> > for the Sandbox class.
> >
> > As part of the build job, Sandbox.run() will upload missing blobs from the
> > local CAS to the remote CAS and submit an action to the Execution service.
> > The output files will not be downloaded immediately, avoiding unnecessary
> > use of network bandwidth; however, their digests will be added to the
> > virtual root directory of the sandbox as appropriate.
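To make the upload step concrete, here is a rough sketch of the digest bookkeeping involved. The names (digest_of, blobs_to_upload) are made up for illustration, but a remote execution digest really is a (SHA-256 hash, size-in-bytes) pair, and the membership check below is roughly what a FindMissingBlobs round-trip gives us:

```python
import hashlib

def digest_of(data):
    # A CAS digest pairs the SHA-256 hex hash with the blob's size in
    # bytes, mirroring the Digest message in the remote execution protocol.
    return (hashlib.sha256(data).hexdigest(), len(data))

def blobs_to_upload(blobs, remote_has):
    # Roughly what FindMissingBlobs achieves: only blobs whose digests
    # the remote CAS does not already hold need to be uploaded.
    return [b for b in blobs if digest_of(b) not in remote_has]
```

In the real implementation the set of digests the remote holds would come back from the CAS service, not be tracked locally like this.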
> Trying to implement this part of the plan has thrown up a few questions
> about what gets transmitted and received.
>
> The only way I can imagine this working is if the initial sandbox,
> before a build starts, is uploaded. I don't think the remote workers can
> use individual source objects (which would save a lot of space) since
> the remote workers won't know how to layer the sources. So we'd end up
> with a custom tree being uploaded for each command, and the action would
> simply be "run commands x,y,z on tree abc456, and store the resulting
> key/ref" (abc456 being the input_root_digest field of an Action).
>
> Does that match with your understanding?
Yes.
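For illustration, a key like "tree abc456" falls out of hashing the tree recursively, so identical input roots always resolve to the same digest regardless of which element produced them. This is only a sketch with made-up names (tree_digest, file_digest); the real protocol serializes protobuf Directory messages rather than JSON listings:

```python
import hashlib
import json

def file_digest(data):
    return hashlib.sha256(data).hexdigest()

def tree_digest(entries):
    # entries maps names to bytes (file contents) or dicts (subdirectories).
    # Hashing a sorted, canonical listing makes the digest deterministic,
    # so the same staged sandbox always yields the same input root key.
    listing = []
    for name in sorted(entries):
        node = entries[name]
        if isinstance(node, dict):
            listing.append(["dir", name, tree_digest(node)])
        else:
            listing.append(["file", name, file_digest(node)])
    return hashlib.sha256(json.dumps(listing).encode()).hexdigest()
```

A nice side effect is that unchanged subdirectories keep their digests, so only the blobs and directory nodes that actually changed need re-uploading.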
> If so, do we run this for all commands (e.g. configure-commands as well
> as build-commands)?
For now, yes. We might end up specializing and sending all commands as a single batch command.
We should revisit this after we have it working.
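As a sketch of what "a single batch command" could mean, the per-stage argv lists could be quoted and chained so one remote action runs them in sequence and stops on the first failure (batch_commands is an illustrative name, not existing code):

```python
import shlex

def batch_commands(commands):
    # Quote each argument and join the stages with '&&' so a single
    # remote action runs e.g. configure-commands then build-commands,
    # stopping at the first command that fails.
    return " && ".join(
        " ".join(shlex.quote(arg) for arg in cmd) for cmd in commands
    )
```

The resulting string would become the argv of a shell invocation in the Action's Command message.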
> and would it make sense for the client-side remote
> execution work to be a variant of the existing Sandbox class?
That was my understanding of where this was going. Deferring to Juerg to be sure :).
Jim
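To show the shape of that split, here is a minimal sketch. These class and method names are illustrative stand-ins, not the actual BuildStream API; the point is only that the remote variant overrides run() to submit an Action instead of executing locally:

```python
class Sandbox:
    # Simplified stand-in for the existing Sandbox base class.
    def run(self, command):
        raise NotImplementedError

class SandboxRemote(Sandbox):
    # Hypothetical remote variant: run() would upload missing blobs,
    # submit an Action referencing the input root digest, and record the
    # output digests without downloading the file contents themselves.
    def __init__(self, submit):
        self.submit = submit  # callable wrapping the Execution service

    def run(self, command):
        # "abc456" stands in for the real input_root_digest computed
        # from the staged sandbox contents.
        action = {"input_root_digest": "abc456", "command": command}
        return self.submit(action)
```

Callers would keep using Sandbox.run() unchanged; only the construction site decides whether execution is local or remote.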
> Cheers,
> Sander
_______________________________________________
Buildstream-list mailing list
Buildstream-list gnome org
https://mail.gnome.org/mailman/listinfo/buildstream-list