Re: [BuildStream] Modifying/excluding elements from junctions
- From: Tristan Van Berkom <tristan vanberkom codethink co uk>
- To: Abderrahim Kitouni <akitouni gnome org>, buildstream-list gnome org
- Subject: Re: [BuildStream] Modifying/excluding elements from junctions
- Date: Fri, 15 May 2020 13:33:24 +0900
Hi Abderrahim,
Thanks for taking this to the mailing list.
For interested readers, the referenced link for [1] in your post is:
https://gitlab.com/BuildStream/buildstream/-/merge_requests/1913
I want to provide some context as to why I am a proponent of the
"replace" strategy, and am opposed to the "rebuild" strategy, as you
referred to them - I think it's important to explain because I know
that not everyone will understand the underlying motivations... but I
might end up ranting out an entire book, so I will leave this long
explanation at the end of my mail for people to read if they want.
So let's jump right into your mail...
On Thu, 2020-05-14 at 11:41 +0100, Abderrahim Kitouni wrote:
Hi all,
[...]
Before discussing which buildstream features we need for the solution,
I'd like to first talk about the semantics of what's a solution to
this problem. I'm using the terminology above (freedesktop-sdk,
gnome-build-meta, freedesktop, gnome) to keep things manageable, but
this applies to any project in the same situation. There are three
possible ways to do it:
- overlap: just stage the gnome-build-meta version on top of the
freedesktop-sdk version. (This is what we currently do for glib)
con:
- stale files from the freedesktop-sdk version can remain underneath
- involves adding a dummy dependency to ensure the gnome-build-meta
version is always staged on top of the freedesktop-sdk version.
- replace: remove the freedesktop-sdk version of the element and
replace it with the new version, without rebuilding reverse
dependencies. (we do this for e.g. gstreamer plugins)
pro:
- uses artifact from junction as-is (no need to rebuild, have the
exact same binaries)
con:
- can lead to subtle failures if the replacing element isn't exactly
ABI compatible with the replaced element.
I am a proponent of this approach, but not because it is generally a
good thing; none of these approaches are ideal, and I can agree with
the cons you list for each of them.
I would not really call this replacing; with BuildStream I would
phrase it differently and say:
"I want to take a handful of artifacts from the upstream, add the
artifacts I want from my project, put them together and create
some kind of product".
I.e. nothing changes, there is no rule that you must consume runtime
dependencies of artifacts, if you choose to omit an element from the
upstream defined dependency chain when you build, that's up to you.
As you point out, this can have negative effects on the reverse
dependencies of this element in the upstream, if you also consume those
reverse dependencies.
The reason why I support this approach is not because it's perfect;
I support it because the downstream does what it wants and has every
right to do so, the downstream can still perform its own validation
on the combinations it chooses to support, and the encapsulation of
the upstream is never breached.
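To make this concrete, today this kind of artifact selection is
typically done with a filter element. Here is a minimal sketch,
assuming a hypothetical junction path and a hypothetical 'gtk3' split
domain declared by the upstream (neither name is taken from the real
projects):

```yaml
# elements/sdk-without-gtk3.bst (hypothetical name)
#
# A filter element consumes exactly one build dependency and can
# exclude some of its split domains. Here we drop the hypothetical
# 'gtk3' domain from an upstream artifact so a downstream element
# can be staged in its place.
kind: filter

build-depends:
- freedesktop-sdk.bst:components/gtk3.bst   # hypothetical junction path

config:
  # Exclude the 'gtk3' split domain; keep files not claimed by
  # any domain so the rest of the artifact passes through as-is.
  exclude:
  - gtk3
  include-orphans: True
```

The manual part of this workflow is keeping the downstream
replacement's dependencies in sync with what the upstream element's
reverse dependencies expect.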
- rebuild: replace the freedesktop-sdk element with another one,
rebuilding all reverse dependencies. (we do this for a few other
elements)
pro:
- everything is built consistently (e.g. ABI, or changes in headers
are applied)
con:
- need to rebuild everything (we end up with things that aren't
exactly the same as the junction).
I think you missed some other cons:
By replacing the definition of an element in an upstream project so
that that project builds the element differently:
o Your replacement may be configured differently, or built against
a different set of dependencies, resulting in a different artifact
than the upstream intended.
o The reverse dependencies of the replaced element may have hard
requirements on the element being configured and built precisely
as the upstream project defines it.
o The downstream will not be consuming the artifacts of the upstream
project "as is", essentially voiding any validation which the
upstream may have performed for the affected element and all of
its reverse dependencies.
o The upstream version of the element might depend on private
elements which your downstream cannot legitimately know exist, and
the same is true for private include files and variables.
This can lead to breakage when revving the upstream if your element
is illegally accessing private resources of the upstream which
may change.
Essentially this has much the same cons as the "replace" approach,
except breakage can occur not only at the binary artifact compatibility
level, but also at the project level.
All of the above are currently possible, although they involve
jumping through some hoops: using filter elements and manually
keeping dependencies in sync for "replace"; patching (or copying and
modifying elements from) the junction for "rebuild".
The first question is which of these workflows we want to support.
Then, there are a few other things to consider regarding how to
actually implement it (what syntax to use? should it be restricted to
junctions? etc.).
Valentin David has a MR implementing the "rebuild" workflow [1]. The
discussion there is also worth reading.
For my part, I'd like to have the "replace" workflow, with a syntax
to allow staging an element instead of another, so that whenever an
element depends on both the replacing and the replaced element, only
the replacing element is staged. This could also be used to use a
stack element from the junction, but exclude one of its dependencies.
Ironically, in a build system that we were working with before starting
BuildStream, I was proposing a "replaces: otherelement" feature, in
order to say that:
"This element replaces that other element (including that element's
runtime dependencies in this case), so that reverse dependencies
which depend on me, will depend on me instead of that other
element"
However, BuildStream is much more powerful and flexible, and as such I
don't think such a feature is needed.
My preference here is still what I mentioned on the MR[2], which is
to extend the `compose` element to allow it to be selective about
which elements are included in the composition (allowing hand picking
or excluding of elements).
In the example of gnome-build-meta and GTK+, gnome-build-meta could
start by creating a compose element consisting of the dependencies it
wants from freedesktop-sdk, sans the GTK+ artifact, and build its own
in place, starting from there.
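As a sketch of what this extended `compose` might look like (the
`exclude-elements` key does not exist in BuildStream today, and the
element paths are invented for illustration):

```yaml
# elements/platform-base.bst (hypothetical name)
#
# A composition of the upstream platform, minus the upstream GTK+
# artifact. NOTE: 'exclude-elements' is a *proposed* key sketched
# here to illustrate the idea; it is not implemented.
kind: compose

build-depends:
- freedesktop-sdk.bst:components/platform.bst   # hypothetical path

config:
  exclude-elements:
  - components/gtk3.bst   # drop the upstream GTK+ artifact
```

A downstream gtk3.bst could then build against this composition,
putting its own GTK+ in place without rebuilding anything else from
the upstream.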
Valentin argues on the referenced discussion that `compose` is not
practical because of overlaps, but from my perspective, making
`compose` more versatile and useful solves more problems for more
unpredictable use cases, and I'm really interested in the use cases I
cannot predict.
The way I see it, `compose` is not intended to work in the specific
way it's currently being used; it is only supposed to "create
compositions of artifacts", which can be useful for many things we
cannot predict. If there are limitations there, we should address
them either by baking features into `compose`, or by creating
different elements which can easily be combined with `compose`.
Cheers,
-Tristan
[2]: https://gitlab.com/BuildStream/buildstream/-/merge_requests/1913#note_339922579
PS: Here is my promised long winded explanation and context.
When we first designed BuildStream, there were already a lot of build
systems in the wild, and we borrowed a lot of concepts from other
works; I think it's safe to say that junctions were not one of them.
What we had to solve was: How do multiple projects, developed by
separate organizations, participate in a build process to produce a
final product or "appliance" ?
The decision to invent "junctions" was driven by a long history of
pain. We had projects like Buildroot, which basically applies a "fork
and extend" model (maybe it has evolved since then, I'm not sure),
and we had projects like Yocto, which provides layers. Layers were a
great advancement in the integration space because they allowed
multiple separate organizations to provide their own standalone
layers, and you could integrate them all into the same build; this,
however, becomes painful if you want to integrate these layers
continuously.
For an analogy of what working with "layers" looks like: imagine a
world where there is no such thing as dynamic linking, and people
have some idea about API stability, but it's not yet considered all
that important.
o You have an organization which provides sources and reference
builds of libc.
o You have various free software organizations and vendors which
provide patches to add features and implement functionalities on
top of this static libc
o As an application author, you need to pull in these patches from
various sources to patch your libc and add new APIs, essentially
building your application.
o If ever you wanted security updates, you would pull the latest
patches from the various sources, try to patch it all together
again, and hope that it builds.
Sadly, in many cases today the life of an integrator looks like this,
as ridiculous as it sounds: we pull in layers and patches from
various sources, throw them together and hope for the best. If you've
updated your base layer and runtime, there is probably a lot of work
you need to do to refactor all of your third-party layers so they
build again.
So instead, we wanted to try something new: we hoped we could create
a world where you can consume "software stacks" and integrate them
together safely, with little to no friction, like API-stable dynamic
C library surfaces.
The sacrifice we would make to achieve this, of course, is that you
cannot start patching your upstreams and breaking encapsulation, any
more than you can have the GTK+ library poking at private data
structures from glib or gdk-pixbuf.
So we said:
o Projects produce artifacts
o Projects can be configured with options, however the options must
be limited (notice there is no freeform string or integer options),
such that maintainers can ensure the artifacts produced are
actually supported
o Projects have an API surface, which is the elements and include
files which they declare as public, and the variable names and
meanings which are exported by those include files, etc.
o If a downstream project wants a feature, they can fork and patch
the upstream.
If the downstream wants the feature supported, it needs to work
with the upstream to get it supported upstream, so that it can
later get upgrades "for free" (in the same way that you would
prefer to upstream your GTK+ patches than to maintain your
downstream patches).
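For illustration, the "limited options" point above looks roughly
like this in a project.conf (the project and option names are
invented here; the types shown are BuildStream's constrained option
types, with no freeform string or integer options):

```yaml
# project.conf fragment (illustrative names)
name: example-project

options:
  target-arch:
    type: enum
    description: The CPU architecture to build for
    values: [x86_64, aarch64]
    default: x86_64
  debug:
    type: bool
    description: Whether to build with debugging enabled
    default: False
```

Because every option is drawn from a closed set of values,
maintainers can enumerate and validate the configurations in which
their artifacts are actually supported.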
I think it's a grand vision which we have yet to realize to its full
potential. BuildStream can only make it possible; it cannot enforce
that people use it responsibly (any more than a C compiler can force
programmers to write stable API surfaces responsibly).
That's where freedesktop-sdk and gnome-build-meta come in, as they are
pioneering this space, we are practicing how to make things better in
the long term, and hope these projects will serve as a model for
developing BuildStream projects in the future.
A true testament to the success of the junctions approach will be the
day when gnome-build-meta automatically tracks the latest
freedesktop-sdk stable branch and upgrading freedesktop-sdk rarely
causes any trouble; you just get all your security patch updates for
free, while still deriving a lot of context through include files and
such.