Re: To Discuss: Application Startup Time



Havoc Pennington <hp redhat com> writes:

> > 	I would like to bring up a discussion on how we could make GNOME
> > applications start faster, at hacker level and user level.
> > 
> 
> Not to be a smartass, but the discussion is very short ;-)

Well, the discussion is short if you limit it to "profile and
optimize", but it isn't obvious what kinds of profiling and
optimization are needed.

> 1. Profile application startup
> 2. Post mail with profile results showing what takes up what 
>    percent of startup time
> 3. Optimize those things that take up most of the time
> 
> Valgrind, oprofile, gprof, speedprof could all be useful tools for step
> 1...

I have a profiler that presents basically the same data as speedprof,
but for the entire system rather than a single application. It's
unfinished and crude, but it does work:

        http://www.daimi.au.dk/~sandmann/sysprof-0.02.tar.gz

It uses a kernel module, so you'll need kernel source available to
build it, and you need to be root to install the module. Also, for
some reason the profiler itself has to run as root to get symbols
from the X server.

Dynamic linking is one thing that has been studied. I don't know much
about it, but I'll note that Mike Hearn pointed out that on Windows a
shared library is quite often really a COM component. As I understand
it, such a component doesn't offer a large set of function entry
points like the typical Linux library; instead it offers only one,
the "QueryInterface" function. Calling that function returns a
pointer to a vtable where all the real functions are. I imagine
having only one public function in the library reduces linking time
significantly.
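
To make that concrete, here is a minimal sketch of what such a
single-entry-point library could look like in C. All of the names
(query_interface, DocumentOps, and so on) are made up for
illustration; real COM uses different names and calling conventions:

        #include <stddef.h>
        #include <string.h>

        /* A hypothetical interface: a table of function pointers
         * handed out by the library's single entry point. */
        typedef struct {
            int  (*open_document)  (const char *path);
            void (*close_document) (int handle);
        } DocumentOps;

        static int
        impl_open_document (const char *path)
        {
            (void) path;    /* ... */
            return 0;
        }

        static void
        impl_close_document (int handle)
        {
            (void) handle;  /* ... */
        }

        static const DocumentOps document_ops = {
            impl_open_document,
            impl_close_document
        };

        /* The one public symbol the dynamic linker has to resolve;
         * everything else is reached through the returned vtable. */
        const void *
        query_interface (const char *iface_name)
        {
            if (strcmp (iface_name, "DocumentOps") == 0)
                return &document_ops;
            return NULL;
        }

With only query_interface exported, the runtime linker has a single
symbol to look up and relocate instead of hundreds.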

But to get startup times closer to the MS Office range, disk use is
probably even more important than CPU use. The problem is that there
aren't any disk profilers that I know of. Off the top of my head, here
are some things that would be nice to measure:

        - How much time was spent 
                - doing CPU work only (no outstanding disk)
                - waiting for disk with the CPU idle
                - doing CPU work with outstanding disk work

        - What kind of disk access did the application do
                - lots of seeking
                - reading consecutively

        - What things did the application cause to be read in from disk

        - How many page faults did this program cause (a rough way to
          count these is sketched after this list)

        - What code points caused those page faults

        - In what order did those page faults occur

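As a very rough start on the "how many" question, the counts (though
not the code points or the ordering) can be read with getrusage();
everything beyond that would need kernel support. A minimal sketch:

        #include <stdio.h>
        #include <sys/time.h>
        #include <sys/resource.h>

        int
        main (void)
        {
            struct rusage usage;

            /* ... run the startup path to be measured here ... */

            if (getrusage (RUSAGE_SELF, &usage) == 0)
                printf ("major page faults (required disk I/O): %ld\n"
                        "minor page faults (no disk I/O):       %ld\n",
                        usage.ru_majflt, usage.ru_minflt);

            return 0;
        }
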
With an understanding of what applications spend their time on, a
possible part of the solution could be object file reordering. This
is the idea of rearranging the functions in a binary so that
functions that are often used together are placed together. That
reduces the number of pages that must be read in from disk, and it
reduces the amount of memory used at runtime.
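
As one illustration of the mechanics (the hard part, deciding what is
"hot", still has to come from profiling), GCC can place individual
functions into named sections, which a linker script can then lay out
together. The section and function names below are made up:

        /* Functions known to run during startup are tagged so the
         * linker can group them onto the same few pages. */
        #define STARTUP_HOT __attribute__ ((section (".text.startup_hot")))

        STARTUP_HOT void
        parse_config (void)
        {
            /* runs on every startup */
        }

        STARTUP_HOT void
        create_main_window (void)
        {
            /* runs on every startup */
        }

        /* Stays in the default .text section; its pages need not be
         * touched during startup. */
        void
        print_debug_statistics (void)
        {
        }

A linker script that places *(.text.startup_hot) at the front of the
output .text section would then pack the startup path into
consecutive pages.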

I put up some old notes on that subject at

        http://www.daimi.au.dk/~sandmann/reordering.txt

These notes are just something I wrote for myself a while back.

- They are simply wrong in some places (the statistics stuff).

- They may assume knowledge that only exists in my head.

- It's not clear that the idea outlined is even a good one.

Still, some people may find them useful. Possibly the most useful part
of the notes is the references at the bottom.


Søren


