Re: g_slice_



On Mon, 5 Dec 2005, Morten Welinder wrote:


> > > It sounds to me that you had two pieces of code that sucked: the
> > > glib mempools and the glibc malloc.

> > glibc malloc is actually pretty good, i don't think it sucks for what
> > it does. it's mallocs on many other platforms that can suck, and in
> > some cases suck badly.

> Well known.  Both Gnumeric and Evolution wrote their own specialized
> pool allocators because of that.  (Why two?  Because we didn't know
> about each other.)

right, and how many more projects/libraries do you want to implement
their own mempools because no good implementation is provided by glib?
i think you just undercut your own argument here: what you presented
is a good case for having a chunk allocator in glib.

about gnumeric/evolution implementing two allocators because they
didn't know about each other, that is obviously the projects' own
fault then. if they implemented general purpose chunk allocators,
those are obvious candidates for glib inclusion and should have been
submitted to glib bugzilla. that would also have ensured that other
projects know about new allocator implementations.
i haven't seen such patch submissions for glib though.

> I don't think I should compare slices to memchunks because that is not
> what I have been using.

that is a different story for most other glib users out there, who use
precompiled binary distribution packages (typically built without
--disable-mem-pools).

> I have been configuring with "--disable-mem-pools
> --enable-gc-friendly" for ages.  Add in using a less bogus malloc and
> you have a better baseline.

i guess enable gc-friendly logic could be added to g_slice, it's
not like memset (slice,0,slicesize) would be impossible to add to it.
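
to illustrate, a rough sketch of what that could look like (this is not
what g_slice actually does today, just the idea; g_slice_free1() is the
real API, the wrapper name is made up):

  #include <string.h>
  #include <glib.h>

  /* hypothetical gc-friendly release path: scrub the slice before it
   * goes back on the free list, so released memory carries no stale
   * pointers for conservative collectors or tools like purify/valgrind */
  static inline void
  release_slice_gc_friendly (gsize block_size, gpointer mem)
  {
    memset (mem, 0, block_size);      /* the memset (slice, 0, slicesize) from above */
    g_slice_free1 (block_size, mem);  /* hand the scrubbed block back to g_slice */
  }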

> So my working Purify support is now gone.
> So my one piece of hot allocation code is now two.

4, if you consider glibc malloc, g_slice, the evolution and the gnumeric
allocators. you might want to talk to the evo and gnumeric guys to use
g_slice to improve that situation... ;)

> So I now have two, not one, allocators keeping separate free lists.

again, with memchunks you had numerous separate free lists, wasting
large amounts of memory. that is way better with g_slice. g_slice is
claimed to be an improvement over using memchunks, not over "not using
memchunks" or "not doing allocations" or "leaving the machine powered
off" ;)

> You could have avoided the memchunk bogosity by using the malloc-
> wrapping you introduced in early November.  That would have been
> an obvious improvement with none of the downsides.

that still leaves one of the fundamental disadvantages of malloc compared
to memchunks, namely a boundary tag per allocated block.
if using malloc/free would always be better, why do you think people
have been using memchunks in the first place?
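
to put a rough number on that boundary tag, a quick throwaway test like
this shows it (malloc_usable_size() is glibc-specific, numbers vary per
platform):

  #include <malloc.h>   /* glibc-only, for malloc_usable_size() */
  #include <stdio.h>
  #include <stdlib.h>

  int
  main (void)
  {
    /* a GList node is three pointers, i.e. 12 bytes on ia32; malloc
     * prepends a size/status boundary tag and rounds the block up,
     * so small nodes pay a noticeable per-allocation overhead */
    void *mem = malloc (12);
    printf ("requested 12 bytes, got %zu usable plus the boundary tag\n",
            malloc_usable_size (mem));
    free (mem);
    return 0;
  }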

> Improving
> glibc's malloc would probably be a good idea too.

feel free to talk to ulrich drepper about that. that was definitely
not within the scope of what i was going to improve with gslice though.

> Which brings me to...  What data structures in glib programs (existing
> and predicted in the next few years) are (a) used multi-threaded, and
> (b) used often enough to warrant performance considerations?

(a) see the documentation, for many you need to ensure exclusive access
    (the usual locking pattern is sketched below), but some are documented
    as being usable in multi-threaded scenarios;
(b) all. if a structure is not used, it's probably not worth having it
    in glib.
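
for structures that need exclusive access, something along these lines
is the usual pattern (just a sketch; shared_list and some_item are made
up names):

  #include <glib.h>

  static GStaticMutex list_mutex = G_STATIC_MUTEX_INIT;
  static GList *shared_list = NULL;

  static void
  add_item (gpointer some_item)
  {
    /* glib only guarantees that *independent* structures can be used
     * from different threads; a structure shared between threads has
     * to be protected by the caller */
    g_static_mutex_lock (&list_mutex);
    shared_list = g_list_prepend (shared_list, some_item);
    g_static_mutex_unlock (&list_mutex);
  }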

> I see GList nodes, GSList nodes, and GHashTable nodes.  Maybe that leaves
> out one or two, but the scale is right.
>
> This had better be what you are optimizing for.  Not, for example:
> - if you use n CPUs (n>1), you want to be able to allocate n times the
>   amount of memory of a single CPU per time interval, and not just be safe
>   against concurrent allocations

see http://bugzilla.gnome.org/show_bug.cgi?id=116805 if you don't believe me.

> - if you consider one allocator per thread, take into account that chunks
>   may be allocated in one thread and freed in another
>
> Those are fine considerations.  Just not for glib.

sorry, how am i supposed to understand this? do you want to segfault for
code like:

threadA:
  GDK_THREADS_ENTER ();
  foo = gtk_widget_new (GTK_TYPE_FOO, NULL);
  GDK_THREADS_LEAVE ();

threadB:
  GDK_THREADS_ENTER ();
  gtk_widget_destroy (foo);
  GDK_THREADS_LEAVE ();

> Add to that, that the number of C programmers who can write working
> threaded applications is even smaller than the number of C programmers
> who can write code that doesn't leak enough to be a problem.  You are
> not going to see many multi-threaded C applications using glib.

you are approximately 7 years too late to argue about whether glib should
have threading support or not.

> And if
> threading is taking place in a higher language, it isn't clear that
> malloc is where you need the smarts.

you might want to read up on:
  [Bonwick01] Jeff Bonwick and Jonathan Adams, Magazines and Vmem: Extending
  the Slab Allocator to Many CPUs and Arbitrary Resources.
  USENIX 2001, http://citeseer.ist.psu.edu/bonwick01magazines.html

about the benefits from investing in core system services (in the
"Conclusions" section).

PS: thanks for your suggestion about using (void) ((type*) 0 == (mem_chain));
    for a type safe g_slice_free(). that has been constructive criticism, and
    is integrated into glib now.
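
for the curious, the trick works roughly like this (a sketch of the idea,
not the verbatim glib macro):

  #include <glib.h>

  /* the dead else-branch makes the compiler type-check mem_chain
   * against type* at compile time, without generating any code */
  #define my_slice_free(type, mem_chain)    G_STMT_START {   \
    if (1)                                                    \
      g_slice_free1 (sizeof (type), (mem_chain));             \
    else                                                      \
      (void) ((type*) 0 == (mem_chain));                      \
  } G_STMT_END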


> Morten


---
ciaoTJ


