Re: gobject profile ... (fwd)



Michael Meeks <michael ximian com> writes:

> Hi there,
> 
> 	This may or may not be appreciated - but I'm profiling test-ui in
> a nice tight loop doing 1000 batched sets, to see if any obvious
> bottlenecks come out (with CVS HEAD libbonoboui/test/test-ui and
> eazel-tools/prof/prof, incidentally):

Oh, for decent profiling tools... this sort of information is useful,
but without call graphs it's not all that useful. I'd really like to be
able to see _where_ all the lock/unlock activity was coming from.

> 	Anyway - as you can see, we hit a lot of things before anything I
> have control over (superficially, although clearly not algorithmically). And
> worse - it seems that locking is swallowing a huge chunk of the time, far
> outweighing anything else - at 40% of the time ... :-)
> 
> 	So. It seems over the codebase, mostly we have constructs of the
> type:
> 	static GMutex *amutex = NULL;
> ...
> 	if (g_thread_supported ())
> 		amutex = g_mutex_new ();
> ...
> 	if (amutex)
> 		g_mutex_lock (amutex);
> 
> 	etc. which seems to serve us well and good for Gdk etc. etc.
> 
> 	So is there any good reason why the type_rw_lock in gtype.c is not
> handled this way? I'm happy to pay the locking penalty if I'm using it,
> but ...

I'm a little confused here ... the fact that __pthread_mutex_lock()
and __pthread_mutex_unlock() show up in your profile proves that
you called g_thread_init()...
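To make that concrete, here is a minimal sketch (assuming GLib's
gthread API, and that you link against the gthread library): the
guarded pattern quoted above is a no-op until g_thread_init() has
run, after which GLib's mutexes are backed by real pthread mutexes -
which is exactly when __pthread_mutex_lock() starts appearing in a
profile.

#include <glib.h>

int
main (void)
{
  /* Before g_thread_init(), g_thread_supported() is FALSE, so the
   * guarded pattern quoted above never touches a mutex at all. */
  g_print ("threads supported: %d\n", g_thread_supported ());

  /* After this call, GLib's mutexes map onto real pthread mutexes,
   * which is when the pthread lock/unlock symbols show up in a
   * profile. */
  g_thread_init (NULL);
  g_print ("threads supported: %d\n", g_thread_supported ());

  return 0;
}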

Even though the overhead of calling g_static_rw_lock_reader_[un]lock on
its own is much smaller than actually doing the locking, I would also
like to see the reader-writer locks conditionalized on
g_thread_supported(); but that isn't going to make the overhead for the
threaded case go down. The only way to reduce that is to drive down the
amount of locking, either by being smarter about lock/unlock placement
or by calling the functions that do the locking less often.
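
As a rough illustration of what that conditionalization could look
like - a sketch only, using the GStaticRWLock API and a hypothetical
reader function, not how gtype.c is actually structured:

#include <glib.h>

/* Hypothetical module-level lock, in the spirit of type_rw_lock. */
static GStaticRWLock some_rw_lock = G_STATIC_RW_LOCK_INIT;

static void
read_some_shared_data (void)
{
  /* Skip the lock entirely in the single-threaded case, mirroring
   * the GMutex pattern quoted above.  The result is cached in a
   * local so lock and unlock stay paired even if g_thread_init()
   * were called in between. */
  gboolean threaded = g_thread_supported ();

  if (threaded)
    g_static_rw_lock_reader_lock (&some_rw_lock);

  /* ... read the shared data here ... */

  if (threaded)
    g_static_rw_lock_reader_unlock (&some_rw_lock);
}

That only helps the unthreaded case, though; once threads are
initialized the full cost of the lock is still paid, so the bigger win
is taking the lock fewer times.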

Regards,
                                        Owen



