Re: [BuildStream] Guidance on unit tests
- From: Tristan Van Berkom <tristan vanberkom codethink co uk>
- To: Jim MacArthur <jim macarthur codethink co uk>, buildstream-list gnome org
- Subject: Re: [BuildStream] Guidance on unit tests
- Date: Tue, 20 Nov 2018 19:33:39 +0900
Hi Jim, Angelos,
On Mon, 2018-11-19 at 11:13 +0000, Jim MacArthur via BuildStream-list wrote:
> On 16/11/2018 10:48, Tristan Van Berkom via BuildStream-list wrote:
> [...]
> > In summary, I feel that saying end-to-end testing is "preferred" is
> > not strongly worded enough; I would rather the statement discourage
> > unit testing more strongly and make it explicit that unit testing is
> > to be avoided as much as possible, and that we should always strive
> > to convert existing unit tests to end-to-end tests.
> >
> > Further than this, one should ask oneself: if it is not possible to
> > exercise this code path or branch statement by running BuildStream in
> > any way, then what is the justification for this line of code to
> > exist?
> >
> > If a line of code can be proven not to be testable in an end-to-end
> > test, that does not mean it should be unit tested; rather, it means
> > the line has been proven to be dead code, and the code should be
> > restructured to exclude it.
> I agree with all of the above, particularly about testing at the
> function level, which I think is too low-level. However, I think it
> misses one value of unit tests: they provide a direct indication of
> what has broken.
>
> You should be able to exercise all code paths via the user interface,
> just as one can test the strength of rivets by sailing dozens of ocean
> liners, but providing coverage isn't all there is to testing. Unit
> testing done right (it isn't always) will tell you more precisely what
> is broken than an end-to-end test will.
>
> Some of this could be improved by adding more assertions and providing
> more detailed exceptions in the code instead, but these can't capture
> the necessary context for a test in all cases.
You make a very good analogy, and I agree with your approach.

I think we could do well to have tests for internal surfaces which make
sense to test and which we want a wall of defense around, which is why
I like having the YAML level tests in place.
I think another good example of an internal API surface worthy of its
own test battery is the CasCache. I would be very supportive of
additional tests which exercise internal API surfaces, as having well
defined behavioral expectations of internal components like this helps
to keep the architecture sane (it could turn out that your ocean liners
are sailing properly *because* of a malfunctioning rivet, which is also
a very undesirable situation).
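
To illustrate the kind of behavioral contract I mean for an internal
surface, here is a rough pytest sketch; note that the class and method
names below are hypothetical stand-ins and not the real CasCache API:

    import pytest


    # Hypothetical internal component, standing in for something like the
    # CasCache; the point is only to show a test pinning down the
    # behavioral contract of an internal surface, independently of how
    # the frontend happens to drive it.
    class ContentStore:
        def __init__(self):
            self._objects = {}

        def add(self, digest, data):
            self._objects[digest] = data

        def contains(self, digest):
            return digest in self._objects

        def get(self, digest):
            if digest not in self._objects:
                raise KeyError(digest)
            return self._objects[digest]


    def test_missing_object_raises():
        store = ContentStore()

        # Contract: asking for an object that was never added must fail
        # loudly, rather than silently returning something else (the
        # malfunctioning rivet that happens to keep the liner afloat).
        assert not store.contains("deadbeef")
        with pytest.raises(KeyError):
            store.get("deadbeef")


    def test_round_trip():
        store = ContentStore()
        store.add("cafe", b"some contents")
        assert store.contains("cafe")
        assert store.get("cafe") == b"some contents"

Tests like these keep the expectations of the internal component
explicit, so a change in the frontend cannot quietly redefine what the
component is supposed to do.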
That said, I don't think the changes made to the contributing guide
reflect this approach and strategy at all. We should not be less strict
about end-to-end testing just because unit tests of internal API
surfaces exist, and we should not shy away from writing tests which are
backed by internal APIs; rather, we should strive to test the internal
API surfaces which need well defined API contracts for the higher level
codebase to rely on, without omitting tests which observe how those
behaviors impact and define the outward user experience.
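
And for the latter, a minimal sketch of the kind of end-to-end test I
have in mind, assuming the pytest cli fixture and result helpers from
our test suite; the import path, project layout and element name here
are only placeholders:

    import os

    import pytest

    # Assumption: the usual pytest cli fixture from the test suite; the
    # exact import path may differ.
    from tests.testutils import cli

    # Assumption: a small test project providing a hello.bst element.
    DATA_DIR = os.path.join(os.path.dirname(os.path.realpath(__file__)), "project")


    @pytest.mark.datafiles(DATA_DIR)
    def test_build_and_checkout(cli, datafiles, tmpdir):
        project = str(datafiles)
        checkout = os.path.join(str(tmpdir), "checkout")

        # Exercise the code path the way a user would: by running BuildStream
        result = cli.run(project=project, args=["build", "hello.bst"])
        result.assert_success()

        result = cli.run(project=project, args=["checkout", "hello.bst", checkout])
        result.assert_success()

        # Assert on the outward user experience, not on internal state
        assert os.path.exists(os.path.join(checkout, "usr", "bin", "hello"))

When a test like this fails, it fails in terms a user would recognize,
and that is the guarantee I want the guide to keep front and center.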
On Fri, 2018-11-16 at 17:39 +0000, Angelos Evripiotis wrote:
> [...]
> I fear the change may have landed prematurely then. Are you good with
> it as-is, on the understanding that these enhancements will also make
> their way in separately?
Don't get me wrong, I am not frustrated to the point of reverting.
I do fear we are sending the wrong message with this and hope to reach
consensus and rectify that as soon as possible.
Do you agree with Jim and me on the above conversation?
As you wrote this last patch to the guide, I do hope that you will
offer up an alternative.
Cheers,
-Tristan