Re: Changing chunk size of {GList,GSList,GNode} allocators
- From: Owen Taylor <otaylor redhat com>
- To: gtk-devel-list redhat com
- Subject: Re: Changing chunk size of {GList,GSList,GNode} allocators
- Date: 19 Jan 1999 17:46:04 -0500
Elliot Lee <sopwith@redhat.com> writes:
> Currently, the GList, GSList, and GNode allocators are set to allocate
> pools of 1024 items (resulting in 12KB, 8KB, and 20KB pools, respectively,
> on a 32-bit machine)...
[...]
> Now, when using memory pools of a certain size, you will obviously only
> have an overhead of 4 bytes per memory pool. Assuming that there is a
> random distribution of the maximum number [1] of items allocated, there
> will be on average half of a memory pool unused in a program.
>
> This leads to this chart (it ignores the negligible fixed overhead of
> creating a memchunk):
But there is also a 24-byte overhead for each pool within
gmem.c. So the '4 bytes / pool' overhead you mention
is really a '28 bytes / pool' overhead, which puts all
your figures below off by a factor of sqrt(7) [ about 2.6 ].
I.e., that optimum pool size of 82 you cite for GList
really should be about 217.
Now, actually, if we special cased ALLOC_ONLY memchunks,
like the ones in question here, we could actually get
rid of this extra overhead, since we don't have to
keep the information around. But that doesn't happen
now.
But since you say "10,000 is a BIG overestimate",
and the final answer has a sqrt(10,000) factor in it,
isn't quoting "82" as the answer pretty ridiculous?
Your conclusion that we are allocating pools that are
too large is most likely correct, but I think if we
just made all the pool sizes 256 instead of 1024,
that would be more reasonable than trying to
figure it out down to the item. (Not to say that
isn't a fun exercise ;-)
[
BTW, for people who want an executive summary, Elliot's result was:

                         2 * items allocated * per-pool wastage
  optimum size = sqrt ( ---------------------------------------- )
                                     sizeof(item)
]
If the per-pool overhead were minimized for ALLOC_ONLY
memchunks, 128-item pools might be appropriate.
(I suspect that most malloc implementations have a bit
more per-block overhead than the glibc one.)
Regards,
Owen