Re: dead nfs mount blocks opening other folders



On Thu, 2004-11-11 at 16:50 +0000, Laszlo Kovacs wrote:
> And here lies the problem. The queues are processed from the front and 
> new items are added to the end. An item at the front of a queue can 
> block the processing of all items behind it. If my nfs folder does not 
> come back from the async call for a long time then all other items 
> behind it will not be processed. So when I click on "/", all elements 
> from "/" go into the high priority queue, get processed, then moved to 
> the low priority queue and so on. /brokennfsmount does not move out from 
> the low priority queue for a very long time. The reason is that 
> directory_count_start() takes a very long time to run. This function 
> registers a callback through gnome_vfs_async_load_directory() to do the 
> counting and it takes a very long time for this callback to be called.
> 
> If I try to cd into /brokennfsmount or ls the contents it takes a long 
> time to get the command prompt back so this is not a gnome-vfs specific 
> problem.

Is this really the problem? Are you sure the file being in a queue
blocks other items in that queue from running? It seems to me that all
items in a queue are started in parallel, and the problem seems to be
that if the /brokennfsmount file gets stuck in one queue, the lower prio
queues get no cycles at all.
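
Roughly the model I have in mind, as a sketch (hypothetical types and
helper names, not the actual Nautilus code):

#include <glib.h>

/* Hypothetical structure, only for the sketch. */
typedef struct {
        GList *high_prio_queue;  /* items waiting in the high prio queue */
        GList *low_prio_queue;   /* items waiting in the low prio queue */
} Directory;

static void
start_item (Directory *dir, gpointer item)
{
        /* In the real code this would kick off the async I/O for one item. */
}

static void
process_queues (Directory *dir)
{
        GList *l;

        /* All items in the high prio queue are started in parallel... */
        for (l = dir->high_prio_queue; l != NULL; l = l->next)
                start_item (dir, l->data);

        /* ...but the low prio queue is only started once the high prio
         * queue has drained completely, so a single stuck item (the
         * dead nfs mount) starves everything queued below it. */
        if (dir->high_prio_queue == NULL) {
                for (l = dir->low_prio_queue; l != NULL; l = l->next)
                        start_item (dir, l->data);
        }
}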

> I think that in Nautilus we should make sure that situations like this 
> cannot block the processing of other folders, as opposed to changing 
> things in gnome-vfs.
> 
> I don't have any obvious solution in mind yet (I don't think I know the 
> code well enough for this). This is mostly for Alex and other interested 
> people to understand the problem and maybe to provide some feedback (if 
> they know this code) as to whether they agree that what I described is 
> a problem that needs to be fixed.
> 
> I would appreciate any feedback provided.

I'm not sure what a good solution is here. The blocking was done on
purpose so that reading the high priority data from files wasn't
hampered by reading low-prio data. Maybe we can allow a few items to
remain in the high prio queue when we start the low-prio one? It's not
really a solution though... Maybe we can start the low prio queue when
all items on the high prio queue have run for some minimal amount of
time?
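
Just to make that last idea concrete, something like this (hypothetical
names, and the grace period value is completely made up):

#include <glib.h>

#define HIGH_PRIO_GRACE_MS 500  /* made-up value */

/* Hypothetical structure, only for the sketch. */
typedef struct {
        GList *high_prio_queue;
        GList *low_prio_queue;
        gboolean low_prio_started;
} Directory;

static void
start_items (Directory *dir, GList *items)
{
        /* In the real code this would kick off the async I/O for each item. */
}

/* One-shot timeout: once every high prio item has had at least
 * HIGH_PRIO_GRACE_MS to run, kick off the low prio queue even if some
 * high prio item (the dead nfs mount) never finishes. */
static gboolean
high_prio_grace_expired (gpointer data)
{
        Directory *dir = data;

        if (!dir->low_prio_started) {
                dir->low_prio_started = TRUE;
                start_items (dir, dir->low_prio_queue);
        }
        return FALSE;
}

static void
start_high_prio_queue (Directory *dir)
{
        start_items (dir, dir->high_prio_queue);
        /* Arm the grace timer when the last high prio item is started. */
        g_timeout_add (HIGH_PRIO_GRACE_MS, high_prio_grace_expired, dir);
}

That would at least let the other folders keep loading while the dead
mount hangs, though the count for /brokennfsmount itself still never
completes.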

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
 Alexander Larsson                                            Red Hat, Inc 
                   alexl redhat com    alla lysator liu se 
He's an all-American native American farmboy looking for 'the Big One.' She's 
a provocative renegade Valkyrie who dreams of becoming Elvis. They fight 
crime! 



