Re: Some shortcomings in gtestutils



On Thu, Feb 21, 2013 at 11:27:03AM +0800, Sam Spilsbury wrote:
On Thu, Feb 21, 2013 at 10:46 AM, Federico Mena Quintero
<federico@gnome.org> wrote:
Hi, everyone,

I've been writing some tests for GtkFileChooserButton and putting them
in gtk+/gtk/tests/filechooser.c - this is the old test suite,
resurrected and alive.

So, I've been learning what gtestutils provides.  It is not bad, but it
seems pretty pedestrian on some counts.  These are some things I'd like
to change, or at least have someone point me in the right direction for
doing them:


Warning, controversial mail ahead.

I've noticed that gtester and gtestutils are a little lackluster. I
wanted to see what the GNOME community's thoughts would be on using an
established xUnit framework like Google Test, CPPUnit or boost::test.

I know, it's C++. I get that it's not appreciated around here.

The only reason I suggest considering frameworks like these is that
they're incredibly powerful and easy to use, and I've found that they
can be applied to almost any situation, C or C++. Google Test
addresses some of the concerns listed below, and does a lot more, like
custom test environments, SetUp/TearDown fixtures, and matchers (which
allow you to do something like EXPECT_THAT (value, MatchesSomething ()),
where the matcher can look inside lists, variants, whatever). It
also ties in really nicely with Google Mock, which allows you to
define mock implementations of classes and control at a very fine
level how those mock classes behave on a per-test basis. I've used
Google Mock alongside Google Test in my own GObject projects to great
effect[1], and I've been playing around with the idea of using
GObject-Introspection to generate mocks automatically from GInterfaces.
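
To make that concrete, here is a minimal self-contained sketch of a
fixture and a matcher in Google Test (the Counter type is a made-up
stand-in; in practice it would be the GObject under test):

    #include <gtest/gtest.h>
    #include <gmock/gmock.h>
    #include <vector>

    /* Trivial stand-in for the unit under test. */
    struct Counter
    {
        int value;
    };

    class CounterTest : public ::testing::Test
    {
        protected:
            virtual void SetUp ()
            {
                counter = new Counter ();
                counter->value = 0;
            }

            virtual void TearDown ()
            {
                delete counter;
            }

            Counter *counter;
    };

    /* Each TEST_F gets a fresh fixture; SetUp/TearDown bracket it. */
    TEST_F (CounterTest, IncrementRaisesValue)
    {
        ++counter->value;
        EXPECT_EQ (1, counter->value);
    }

    /* A Google Mock matcher looking inside a container directly. */
    TEST (MatcherExample, VectorContents)
    {
        std::vector<int> values;
        values.push_back (1);
        values.push_back (2);
        EXPECT_THAT (values, ::testing::ElementsAre (1, 2));
    }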

(Marketspeak: Google Test has some really strong credentials in this
area. It is the test framework of choice for projects like Chromium,
XBMC, Xorg and projects at Canonical, such as Unity, Nux and Compiz.
Chromium in particular has tens of thousands of unit, integration,
system and acceptance tests, almost all written using Google Test.
Self-promotion: Compiz has over 1350 unit, integration and acceptance
tests, and works very well with Google Test.)

It's just some food for thought - I agree with Federico that having a
flexible, powerful and easy-to-use test framework certainly lowers a
substantial barrier to entry.

Having worked with googletest and xorg-gtest [1] for X integration testing,
I can say the most annoying bit is getting the whole thing to compile. The
C++ ODR prevents us from building gtest and xorg-gtest as a library and then
compiling against that, and autotools is not happy with external source
files. If you're planning to share a test framework across multiple
source repositories, that can be a major pain.

[1] http://cgit.freedesktop.org/~whot/xorg-integration-tests/
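
For what it's worth, the usual workaround is to compile gtest from
source into each consuming tree. A rough Makefile.am sketch, assuming
the gtest sources live under /usr/src/gtest as Debian's libgtest-dev
ships them (the test program name here is made up):

    # Needs the subdir-objects automake option and LT_INIT; gtest
    # also generally wants -pthread.
    GTEST_DIR = /usr/src/gtest

    check_LTLIBRARIES = libgtest.la
    libgtest_la_SOURCES = $(GTEST_DIR)/src/gtest-all.cc
    libgtest_la_CPPFLAGS = -I$(GTEST_DIR) -I$(GTEST_DIR)/include

    check_PROGRAMS = filechooser-test
    filechooser_test_SOURCES = filechooser-test.cc
    filechooser_test_CPPFLAGS = -I$(GTEST_DIR)/include
    filechooser_test_LDADD = libgtest.la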

*snip*

I think that some of the ideas you've raised here are excellent. To
address some of your concerns:

1. Most xUnit frameworks I know of have something like ASSERT_* and
EXPECT_*. The former marks the test as failed and returns immediately;
the latter marks the test as failed and continues.

Generally speaking, it is acceptable to have multiple ASSERT_
statements because they usually belong in SetUp/TearDown logic.
ASSERT_ usually means "this test failed because it could not be run
due to a failed precondition in SetUp". Ideally, every test should
have only one EXPECT_* statement. The EXPECT_* statement is the
essence of the test, and tests should test one thing so that you have
pinpoint resolution as to which part of the unit failed.
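
A minimal sketch of the difference in failure behaviour (nothing here
is specific to any project):

    #include <gtest/gtest.h>

    TEST (AssertVsExpect, FailureBehaviour)
    {
        int *p = new int (42);

        /* On failure, ASSERT_* returns from the current function
         * immediately, so nothing below it would run. */
        ASSERT_TRUE (p != NULL);

        /* On failure, EXPECT_* records the failure and continues,
         * so the next check still runs either way. */
        EXPECT_EQ (42, *p);
        EXPECT_GT (*p, 0);

        delete p;
    }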

2. The best way to handle this case is to expose the test binary in
the build directory so that users can run it directly. Sometimes you
might have multiple test binaries, but that is fine. Ideally no test
should have any dependency on the previous test finishing. If one
does, you've got a serious problem with your code.
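
With Google Test, making that binary directly runnable is trivial - a
standard main() suffices, and gtest's own flags (e.g. --gtest_filter
to run a single test by name) come for free:

    #include <gtest/gtest.h>

    int
    main (int argc, char **argv)
    {
        /* Parses and strips gtest's own command-line flags. */
        ::testing::InitGoogleTest (&argc, argv);
        return RUN_ALL_TESTS ();
    }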

FWIW, one of the drawbacks I found with the multiple-binary case is that it
reduces the chance of all tests being run every time. There's a sweet spot
somewhere between too many and too few binaries, and I suspect it differs for
each project.

In the googletest case, for example, separate binaries will each give you a
separate JUnit XML output, which makes some regression comparisons harder.

3. This is a good point.

4. This is an excellent idea for bootstrapping an acceptance testing
framework. Care should always be taken when writing acceptance tests
though, in particular:

 a. They aren't a replacement for unit or integration tests. They run
particularly slowly, and are usually more failure-prone because they
can be affected by external factors that you might not expect. The
best kind of code coverage is code covered at the unit, integration
and acceptance levels.
 b. Acceptance tests can also be tricky because they often rely on
introspection through, e.g., the a11y interface. That can create
unwanted coupling between the internals of your system and the test,
which means that you'll be constantly adjusting the tests as the code
is adjusted. Determine what kind of interface you want to expose for
test verification and make the tests rely on that.
 c. Running on some kind of dummy display server (e.g. Wayland, or Xorg
with the "dummy" video driver[2]) is always a good idea, because it
means you don't end up in awkward situations where you can't get tests
running on continuous integration servers. Tests that aren't run are
broken tests.
 d. Never, ever, ever rely on sleep()s, timing, or the like in tests
when verifying conditions (a sleep-free alternative is sketched right
after this list). Autopilot in Unity does this and it is a giant fail.
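
A sketch of one sleep-free alternative for GLib-based code (the
helper names wait_for_condition and on_timeout are made up; the GLib
calls are real): iterate the main loop until the condition holds or a
deadline fires, so the test proceeds as soon as the event actually
happens rather than after an arbitrary delay.

    #include <glib.h>

    static gboolean
    on_timeout (gpointer data)
    {
        *static_cast<gboolean *> (data) = TRUE;
        return FALSE;  /* one-shot source */
    }

    /* Returns TRUE if the condition held before the deadline. */
    static gboolean
    wait_for_condition (gboolean (*predicate) (void), guint timeout_ms)
    {
        gboolean timed_out = FALSE;
        guint id = g_timeout_add (timeout_ms, on_timeout, &timed_out);

        /* Block in the main loop; wake only when something happens. */
        while (!predicate () && !timed_out)
            g_main_context_iteration (NULL, TRUE);

        if (!timed_out)
            g_source_remove (id);
        return predicate ();
    }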

+1 on the "don't use sleep()". it seems like a simple solution at first,
but when all of your 200+ tests are sleeping, you decrease the chance of
someone running the full suite.

Cheers,
   Peter


Otherwise, it's great to see these topics being talked about.

[1] http://bazaar.launchpad.net/~compiz-team/compiz/0.9.9/files/head:/gtk/window-decorator/tests/
[2] Xorg-GTest is good for this.

