Re: u/int64 support for glib, status?



On Thu, 2001-09-20 at 10:31, Paolo Molaro wrote:
> On 09/20/01 Tim Janik wrote:
> > before you jump to G_TYPE_INT64 too quickly, i'd rather have you address my
> > second note in detail. what makes you think 64bit is going to be supported
> > by upcoming standards (i'm not saying i'm sure it's _not_ going to be
> > supported, i'm just not sure it will be). for 8/16/32, i can be pretty sure
> > because there's a hell of a lot of code out there that relies on having
> > those sizes.
> 
> Just as a data point, there are architectures that have a 64 bit integer
> type but _no_ 16 bit integer type (and no 32 bit integer either!), so,
> before worrying about future standards I'd worry about current systems.
> That said, I'm in favour of having G_TYPE_INT64. G_TYPE_BIGINT would be
> misleading, as it suggests arbitrarily large integers (bigint is
> commonly used for arbitrary-precision integers).

Given the way the GLib GType system works, I would favor G_TYPE_BIGINT.
There is no such type as G_TYPE_INT32 or G_TYPE_UINT32: G_TYPE_INT and
G_TYPE_LONG are used instead, hiding the real size of the underlying
types. Even though I dislike this hiding of the real size and would
personally pick G_TYPE_INT64 over G_TYPE_BIGINT, I think we should stick
to the current naming scheme, which means G_TYPE_BIGINT.
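
For illustration, a minimal GValue sketch of the current scheme; the
64-bit half is an assumption, with the type constant and the
g_value_set_int64() accessor named purely by analogy with the existing
int/long API (it assumes the type system has already been initialized):

    #include <glib-object.h>

    /* The existing fundamental types hide the exact width. */
    GValue v = { 0, };
    g_value_init (&v, G_TYPE_INT);        /* "an int", size unspecified */
    g_value_set_int (&v, 42);
    g_print ("%d\n", g_value_get_int (&v));
    g_value_unset (&v);

    /* Hypothetical 64-bit counterpart under discussion; the type name
     * and accessor below are assumed, not existing API. */
    g_value_init (&v, G_TYPE_INT64);      /* or G_TYPE_BIGINT */
    g_value_set_int64 (&v, G_GINT64_CONSTANT (1) << 40);
    g_value_unset (&v);
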

my 2 cents,
Mathieu

> 
> lupus
> 
