Re: Some comments about GVFS



On Fri, 2007-05-04 at 17:52 +0200, Benjamin Otte wrote:
> > > 1b) Cancelling operations from another thread doesn't look like good
> > > design to me. I've learnt (both in theory and in practice) that
> > > threads are supposed to be independent and not call into each other.
> >
> > Eh? How else would you cancel a blocking i/o call? From within the
> > blocked thread? This is just bogus, cancellation is all about
> > cross-thread operations.
> >
> I'd argue that if you do a blocking call, you're aware that it's
> blocking and don't want to cancel it. Otherwise you'd use async I/O
> with a proper cancellation mechanism such as g_main_context_wakeup().
> You've told me that its main use case is transactions like calling
> the equivalent of gnome_vfs_xfer_async () which you'd want to
> implement in a thread by calling lots of sync operations one after
> another.
> I can see the use case, but even though I still don't like it, I can't
> come up with a better model.

Take for instance the default async i/o implementation (the one used for e.g. local files). It uses threads and blocking calls. If we want the default async implementation to support cancellation, then the sync version must support cancellation too.
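
To make that concrete, here is a rough sketch of the cross-thread
cancellation pattern. The cancellable-object names and signatures here are
illustrative of the idea, not necessarily exactly what gvfs ends up with:

#include <gio/gio.h>

typedef struct {
  GInputStream *stream;
  GCancellable *cancellable;
} ReadJob;

/* Runs in a worker thread (spawned with e.g. g_thread_new()),
 * doing plain blocking reads. */
static gpointer
worker_thread (gpointer data)
{
  ReadJob *job = data;
  char buffer[4096];
  GError *error = NULL;
  gssize n;

  /* The blocking read returns early with G_IO_ERROR_CANCELLED
   * if another thread cancels the job. */
  while ((n = g_input_stream_read (job->stream, buffer, sizeof buffer,
                                   job->cancellable, &error)) > 0)
    {
      /* ... consume the n bytes ... */
    }

  if (error)
    {
      g_warning ("read failed: %s", error->message);
      g_clear_error (&error);
    }
  return NULL;
}

/* Called from the main thread, e.g. when the user hits "Stop". */
static void
stop_job (ReadJob *job)
{
  g_cancellable_cancel (job->cancellable);
}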

> Yeah, I missed those implementations as I was only grepping for
> GMainContext which you don't use.
> I think it's a good idea to make the main context customizable so gvfs
> can be used from other threads.

The API used to have this, but there was a lot of overhead code carrying
it around and adding it to the parameter list of all functions, and I
thought there would be very few people using it, so I killed it.
Maybe we should discuss reverting that decision...
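
For reference, all a customizable context really buys you is that the
completion callback gets dispatched in whatever context the caller handed
in, roughly like this (queue_callback() and deliver_result() are made up
for illustration, not part of the gvfs API):

#include <glib.h>

static gboolean
deliver_result (gpointer user_data)
{
  /* ... invoke the user's callback with the operation result ... */
  return FALSE; /* one-shot idle */
}

static void
queue_callback (GMainContext *context, gpointer result)
{
  GSource *source = g_idle_source_new ();
  g_source_set_callback (source, deliver_result, result, NULL);
  /* NULL means "the default main context". */
  g_source_attach (source, context);
  g_source_unref (source);
}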

> I'd love it if GVFS would advocate the async model over the sync model
> but provide both. So g_input_stream_read would be async and
> g_input_stream_read_async would exist, too.
> The reason for this is that I think in most cases you want the async
> behaviour and it helps to tell lazy programmers that this is the right
> way to go. It's purely psychological.

I dunno. It's contrary to what all other APIs do, and I'm not sure the
psychological advantage is all that great. Programmers generally need a
specific function, and they'll look around until they find it. Who are
we to decide for them whether sync or async is the better model?
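
For what it's worth, the pairing looks roughly like this. I'm typing the
argument lists from memory, so don't take the exact signatures as gospel:

#include <gio/gio.h>

/* Synchronous variant: blocks the calling thread. */
static gboolean
read_some_sync (GInputStream *stream, char *buf, gsize len, GError **error)
{
  gssize n = g_input_stream_read (stream, buf, len, NULL, error);
  return n >= 0;
}

/* Asynchronous variant: returns at once, the result is delivered
 * to a callback from the main loop. */
static void
read_some_done (GObject *source, GAsyncResult *res, gpointer user_data)
{
  GError *error = NULL;
  gssize n = g_input_stream_read_finish (G_INPUT_STREAM (source), res, &error);
  if (n < 0)
    {
      g_warning ("async read failed: %s", error->message);
      g_clear_error (&error);
    }
}

static void
read_some_async (GInputStream *stream, char *buf, gsize len)
{
  g_input_stream_read_async (stream, buf, len, G_PRIORITY_DEFAULT,
                             NULL /* cancellable */, read_some_done, NULL);
}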

> I think that solution is fine. However, there is one thing I am
> missing: The read_all_available_data_right_now() function call. Or at
> least a read_x_bytes_if_available() call. This is interesting because
> in some cases I want to avoid calling back into the main loop.
> It's an issue in Swfdec with gnome-vfs where I'm supposed to display
> what percentage of the file is loaded, and that display gets updated
> via the main loop. So after every read () of DEFAULT_SIZE I get my
> display updated. So loading seems slow even though it isn't, just
> because every read goes via the main loop.

The OpenOffice stream class has a call that gets the number of bytes you
can currently read without blocking. Maybe something like that can be
added? For many streams this is not really something you can calculate
in a sane way, though. Take a stream to a file on smb or nfs for
example. The way a read call works there is that you send a request to
read N bytes at offset O, and you get back a reply with the data. So,
this function would always return zero.

And actually, this model is very similar to what the gvfs client code
does with the gvfs daemon, so the same will be true for all vfs streams.
Automatic readahead can change that, as can reading in larger blocks,
but the operation is still not quite what you expect.

Btw, how do you handle buffer management with a call like
read_all_available_data_right_now()? You have to pass in a buffer of some
size, and when you do, how is this different from the normal read call?
That already returns as soon as it has read some data and further i/o
would block.
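
I.e. with the normal call you can already update progress per read,
roughly like this (update_progress() is a made-up UI hook, only here for
illustration):

#include <gio/gio.h>

/* Hypothetical UI hook: e.g. update a progress bar. */
static void
update_progress (double fraction)
{
  g_print ("%3d%%\r", (int) (fraction * 100));
}

static void
copy_with_progress (GInputStream *in, goffset total, GError **error)
{
  char buffer[65536];
  goffset done = 0;
  gssize n;

  /* Each read returns as soon as some data is available, so n is often
   * smaller than sizeof buffer; that is the "give me what you have
   * right now" behaviour already. */
  while ((n = g_input_stream_read (in, buffer, sizeof buffer, NULL, error)) > 0)
    {
      done += n;
      update_progress ((double) done / total);
    }
}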

> > > 8) The API seems very forgiving to (IMO) obvious programming errors
> > > that should g_return_if_fail. g_input_stream_read for example provides
> > > a proper error message when there's already a pending operation or
> > > when the size parameter is too large.
> >
> > Is this such a bad thing? Should we convert these to asserts?
> >
> I would advocate this. For one it's not an error message a user should
> be presented with ("Hey, someone passed a too large value to read()").
> But it's even more interesting when you stick the error into the
> object as I'm advocating above.

Yeah, this is probably a good idea.
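
So roughly, the checks inside the read call would turn into something
like this (my_stream_read() is just a stand-in to show the shape, not the
real code, and the pending check is whatever it ends up being):

#include <gio/gio.h>

static gssize
my_stream_read (GInputStream *stream, void *buffer, gsize count,
                GCancellable *cancellable, GError **error)
{
  g_return_val_if_fail (G_IS_INPUT_STREAM (stream), -1);
  g_return_val_if_fail (buffer != NULL, -1);
  /* Misuse by the programmer: warn and bail out instead of building
   * a user-visible GError. */
  g_return_val_if_fail (count <= G_MAXSSIZE, -1);
  g_return_val_if_fail (!g_input_stream_has_pending (stream), -1);

  return g_input_stream_read (stream, buffer, count, cancellable, error);
}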

> Another thing that came to my mind is if it would be better to use
> signals instead of providing callbacks for some operations. It seems
> somewhat weird for read(), but open and in particular close might be
> interesting to be implemented using signals.

I don't think signals are easier for one-shot, function-call-initiated
callbacks like that. It just means you have to type more code when doing
the close() call.
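
Compare, as a sketch: the callback form is roughly what the stream code
does now, while the "closed" signal below is purely hypothetical, just to
show the extra typing involved:

#include <gio/gio.h>

/* Callback style: one call, done. */
static void
closed_cb (GObject *source, GAsyncResult *res, gpointer user_data)
{
  g_input_stream_close_finish (G_INPUT_STREAM (source), res, NULL);
}

static void
close_with_callback (GInputStream *stream)
{
  g_input_stream_close_async (stream, G_PRIORITY_DEFAULT, NULL,
                              closed_cb, NULL);
}

/* Hypothetical signal style: connect, remember the handler id,
 * and disconnect again after the one-shot notification. */
static void
on_closed (GInputStream *stream, gpointer user_data)
{
  /* ... */
}

static void
close_with_signal (GInputStream *stream)
{
  gulong id = g_signal_connect (stream, "closed", /* no such signal today */
                                G_CALLBACK (on_closed), NULL);
  /* ... start the close, then later:
   * g_signal_handler_disconnect (stream, id); */
  (void) id;
}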

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
 Alexander Larsson                                            Red Hat, Inc
                   alexl redhat com    alla lysator liu se
He's a Nobel prize-winning voodoo rock star who knows the secret of the alien
invasion. She's a mistrustful renegade hooker with someone else's memories.
They fight crime!



