Re: Ottawa - kernel hackers ...



On Fri, 2003-06-27 at 17:19, Alan Cox wrote:
> > scattering a single, huge, uniformly unused chunk of memory across the
> > disk in 1000 tiny fragments points to some underlying brokenness
> > somewhere. [ RH 8.0 stock kernel ]. Of course - OTOH, perhaps the X
> > server is splitting it into tiles internally - but even so ...
> 
> Entropy. We do actually try and build bigger chunks but it's an almost
> insoluble problem. We don't know the properties of the image. If it was
> mmapped and file-backed it would be fine.

	I suppose I just don't understand what's going on - I'm amazed that
what seems intuitively broken, and is empirically terrible, is also
algorithmically non-fixable - is there really no hope?

> > 	we can infer that it's the unaccounted-for 8 seconds that are seek
> > time on an unloaded system - but can those numbers be got more
> > easily / reliably?
> 
> Most of those seem to be loading copies of cat, I'd guess.

	Well; my system is 'warm' inasmuch as I'm running Gnome already - so
some stuff is cached; here are two consecutive runs of it, the first
from cold:

$ time find ~/.gconf -name '*.xml' -exec cat {} > /dev/null \;
 
real    0m8.962s
user    0m0.225s
sys     0m0.369s
$ time find ~/.gconf -name '*.xml' -exec cat {} > /dev/null \;
 
real    0m0.571s
user    0m0.207s
sys     0m0.311s
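
	(To take the per-file exec of cat out of the picture entirely, I
guess the cold run could also be done through xargs, e.g.

$ find ~/.gconf -name '*.xml' -print0 | xargs -0 cat > /dev/null

	which should give an even cleaner measure of the raw seek cost.)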

	Which would seem to suggest that we have ~8 seconds of pure disk
overhead. Presumably running 'cat' itself is fairly cheap: since I have
~350 .xml files in ~/.gconf, that works out at ~2ms max to do each cat,
vs. ~20ms to get the data in the cold case - so the CPU is sitting idle
for ~95% of the time.
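
	(As a rough sanity check on that arithmetic - taking the ~350 file
count above as given and dividing the wall-clock times by it:

$ echo 'scale=1; (8962 - 571) / 350' | bc   # ~24ms extra seek/read cost per file, cold
$ echo 'scale=1; 571 / 350' | bc            # ~1.6ms per file once cached

	which is roughly where the ~20ms and ~2ms figures come from.)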

	The trials of a laptop hard-disk I guess.

	Hmm,

		Michael.

-- 
 michael@ximian.com  <><, Pseudo Engineer, itinerant idiot



