Re: GLogLevelFlags enum and g_log



On 24/05/14 17:59, Umut Tezduyar Lindskog wrote:
> Normally it shouldn't be a problem but since glib defined values of
> GLogLevelFlags can fit in to 8 bits, sizeof(GLogLevelFlags) could be 1
> depending on the compiler.

My understanding is that GLib (and particularly GObject) has a general
architectural assumption that, in the compiler's target ABI, all
non-bitfield enums that could fit in an int are the size of an int (and
in particular, no smaller). Which compilers, platforms or ABIs do you
have in mind where this is not true?

If this is the case, then it would seem wise to add something like

    G_STATIC_ASSERT (sizeof (enum { A = 1 }) == sizeof (int));

to check this assumption.

In practice, if the compiler assigns enums' sizes according to their
defined values, you can't add elements to the end of an enum without
breaking ABI, unless you include a placeholder value to reserve space
(there is a sketch of that trick after the example below). Consider
this hypothetical library:

    /* libfoo v1 */
    typedef struct {
        enum { E1 = 1, ..., E255 = 255 } e;
        char c;
    } Foo;

    /* libfoo v2 */
    typedef struct {
        enum { E1 = 1, ..., E255 = 255, E256 = 256, ... } e;
        char c;
    } Foo;

    /* application code */
    Foo *my_foo;
    printf ("%c", my_foo.c);

(Realistically, e could be an error code that is intended to be extended
over time as more potential error situations are discovered, like
GIOErrorEnum.)

Compile and link the application code dynamically against libfoo v1, and
then upgrade to libfoo v2. If the compiler made the first version of
Foo.e 1 byte long, then the access to my_foo->c would essentially
compile to ((char *) my_foo)[1]. When upgraded to libfoo v2, it would
incorrectly return the second byte of my_foo->e instead (and sizeof(Foo)
would change, causing further ABI breakage for any struct that contains
a Foo as a member).
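
This is not purely theoretical: GCC's -fshort-enums option selects
exactly this kind of ABI (it sizes each enum by its declared values),
which makes the offset shift easy to observe. Here is a minimal,
self-contained sketch - FooV1, FooV2 and FooFixed are illustrative
names, not anything from a real library - which also shows the
placeholder trick mentioned above:

    /* Compile with: gcc -fshort-enums demo.c && ./a.out */
    #include <stdio.h>
    #include <stddef.h>

    typedef struct {
        enum { E1 = 1, E255 = 255 } e;              /* fits in 1 byte */
        char c;
    } FooV1;

    typedef struct {
        enum { F1 = 1, F255 = 255, F256 = 256 } e;  /* needs 2 bytes */
        char c;
    } FooV2;

    typedef struct {
        /* the placeholder trick: a sentinel value forces the enum
         * to be able to hold INT_MAX, so it stays int-sized no
         * matter how many real values are added later */
        enum { G1 = 1, G255 = 255, FOO_RESERVED = 0x7fffffff } e;
        char c;
    } FooFixed;

    int
    main (void)
    {
        /* with -fshort-enums this prints 1, 2 and 4: the offset of
         * c shifts between v1 and v2, but stays put once the
         * sentinel reserves the space up front */
        printf ("v1: c at %zu; v2: c at %zu; fixed: c at %zu\n",
                offsetof (FooV1, c), offsetof (FooV2, c),
                offsetof (FooFixed, c));
        return 0;
    }

Note that the sentinel has to be a valid int (0x7fffffff is INT_MAX),
since ISO C requires enum constants to fit in an int.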

More generally, I think it's fine that GLib has architectural
assumptions that make it portable to all relevant compilers and ABIs
(GNU/*, Android, other Linux libcs, *BSD, Darwin/Mac OS/iOS, Windows,
etc.) while not being portable to theoretical pathological ISO C
implementations. However, where possible it would be good to have static
assertions (G_STATIC_ASSERT) or regression tests that document those
assumptions in an automatically-checkable way.

Some other departures from ISO C that I am aware of (a sketch of how
some of these could be checked follows the list):

* ISO C does not guarantee that null pointer constants are
  all-bits-zero, or that there is only one representation of a null
  pointer. GLib assumes that there is exactly one representation of a
  null pointer, NULL, and that it is all-bits-zero.

* ISO C does not guarantee that signed integers use twos-complement for
  negative numbers - they are allowed to use sign-and-magnitude or some
  even weirder representation. There is almost certainly code in GLib
  that assumes that they do use twos-complement.

* ISO C does not guarantee that 8-, 16-, 32- and 64-bit types exist
  (only that *if they do*, int8_t etc. are defined appropriately).
  GLib assumes that they do exist; it probably also assumes that
  char, short, int are exactly 8, 16, 32 bits respectively, and that
  long is either 32 or 64 bits.

* ISO C does not guarantee that data pointers (e.g. void *) have the
  same size and representation as function pointers (e.g.
  void (*) (void)), or even that all function pointers are the same
  size/representation. GLib assumes that data pointers and function
  pointers are both basically integers of equal size, so you can cast
  freely between them. According to Linux dlopen(3), POSIX.1-2013
  basically also requires GLib's interpretation, so it has probably
  always been true in practice on Unix platforms.
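
To make the automatically-checkable part concrete, something along
these lines might work. G_STATIC_ASSERT and the gintN types are real
GLib, but this particular set of checks is only a sketch, not an
actual GLib test:

    #include <glib.h>
    #include <string.h>

    /* enums whose values fit in an int are assumed to be int-sized */
    G_STATIC_ASSERT (sizeof (enum { A_ = 1 }) == sizeof (int));

    /* exact-width integer types are assumed to exist */
    G_STATIC_ASSERT (sizeof (gint8) == 1);
    G_STATIC_ASSERT (sizeof (gint16) == 2);
    G_STATIC_ASSERT (sizeof (gint32) == 4);
    G_STATIC_ASSERT (sizeof (gint64) == 8);

    /* ~0 == -1 holds only for twos-complement negative numbers */
    G_STATIC_ASSERT (~0 == -1);

    /* data and function pointers are assumed to be the same size */
    G_STATIC_ASSERT (sizeof (void *) == sizeof (void (*) (void)));

    /* the null pointer's representation can't be checked at compile
     * time, but a regression test can check it at runtime */
    static void
    test_null_is_all_bits_zero (void)
    {
      void *p;

      memset (&p, 0, sizeof p);
      g_assert (p == NULL);
    }

(In a real test binary, test_null_is_all_bits_zero would be registered
with g_test_add_func().)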

Regards,
    S


