Re: Moving to bonjour over howl



On Thu, 2005-09-22 at 03:51 +0200, Lennart Poettering wrote:
> On Tue, 20.09.05 11:27, Alexander Larsson (alexl redhat com) wrote:
> 
> > I think you're sort of misguided at how this currently works, and not
> > very good at describing exactly how your optimal solution works. Let me
> > describe in exact detail how this actually works with the current
> > gnome-vfs + howl solution:
> > 
> > The howl daemon is always running, and all the queries that clients
> > do go through it.
> > 
> > The first time an application that uses gnome-vfs calls
> > gnome_vfs_open_directory (or gnome_vfs_async_load_directory) on
> > dns-sd://local/ (which will do a query for interesting fileshares using
> > mDNS) we:
> > 1) start long running queries in the background
> > 2) do a "synchronous" browse query
> > 3) Collect responses from both these callbacks and store in global state
> 
> Why issue two queries if you aggregate the replies from both anyway?

This is slightly bogus, in a non-optimal sense. The reason for this is
that the first query is "asynchronous", and will only report results
when the glib mainloop runs, whereas the second one is actively
iterated during the 200 msecs.

A nicer solution would do just one query, iterate it manually for 200
msecs, and then "convert" it to an asynchronous query, but that might
be tricky.
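
Roughly something like this (just a sketch; query_fd() and
handle_reply() are stand-ins for the actual howl calls, not real API):

#include <glib.h>
#include <poll.h>

static gboolean
async_cb (GIOChannel *chan, GIOCondition cond, gpointer data)
{
  handle_reply (data);  /* stand-in: read and dispatch one reply */
  return TRUE;          /* keep the watch alive */
}

static void
browse_with_conversion (gpointer query)
{
  GIOChannel *chan;
  struct pollfd pfd = { query_fd (query), POLLIN, 0 }; /* stand-in accessor */
  gint64 deadline = g_get_monotonic_time () + 200 * 1000; /* 200 msecs */

  /* Synchronous phase: iterate the query manually until the deadline */
  for (;;)
    {
      gint timeout = (deadline - g_get_monotonic_time ()) / 1000;
      if (timeout <= 0 || poll (&pfd, 1, timeout) <= 0)
        break;
      handle_reply (query);
    }

  /* "Convert" to asynchronous: same query, now driven by the mainloop */
  chan = g_io_channel_unix_new (pfd.fd);
  g_io_add_watch (chan, G_IO_IN, async_cb, query);
  g_io_channel_unref (chan);
}

The tricky part is making sure no reply gets lost in the handover
between the manual iteration and the mainloop watch.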

> > 4) After 200 msecs, cancel the query in 2)
> > 5) return a list of results based on the global state
> > 
> > On any subsequent call to gnome_vfs_open_directory() we immediately
> > return the list from the global state, which is constantly kept
> > up-to-date by the query started in 1).
> 
> Oh, OK. I must admit that you're right; I apparently didn't understand
> your code. I still wonder why you issue two queries simultaneously for
> the same thing?
> 
> How is the background query kept alive? In a thread? In its own
> process? 

It just sends the query to the howl daemon, and then adds the howl
connection file descriptor to the glib mainloop and handles incoming
messages there. This is done in the client app.
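
In code it's basically this (a sketch from memory of the howl API;
sw_discovery_socket() and sw_discovery_read_socket() are how I recall
the external-mainloop hooks being named, so treat them as assumptions):

#include <glib.h>
#include <howl.h>

static gboolean
howl_io_cb (GIOChannel *chan, GIOCondition cond, gpointer data)
{
  sw_discovery session = data;

  /* Dispatch pending replies from the daemon; the browse callbacks
   * registered earlier fire from inside this call */
  sw_discovery_read_socket (session);
  return TRUE;
}

static void
attach_howl_to_mainloop (sw_discovery session)
{
  GIOChannel *chan;

  chan = g_io_channel_unix_new (sw_discovery_socket (session));
  g_io_add_watch (chan, G_IO_IN, howl_io_cb, session);
  g_io_channel_unref (chan);
}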

> > If the application adds a monitor using gnome_vfs_monitor_add() on
> > dns-sd://local/ the monitor will be called with ADDS and DELETES as
> > we get responses to the query from step 1). All "normal" gnome-vfs
> > UI apps like Nautilus and the file selector do this.
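
For reference, the client side of that is just the normal monitor API;
a minimal sketch of what such an app does (the callback body is only
illustrative):

#include <libgnomevfs/gnome-vfs.h>

static void
share_changed (GnomeVFSMonitorHandle *handle,
               const gchar *monitor_uri,
               const gchar *info_uri,
               GnomeVFSMonitorEventType event_type,
               gpointer user_data)
{
  if (event_type == GNOME_VFS_MONITOR_EVENT_CREATED)
    g_print ("share appeared: %s\n", info_uri);
  else if (event_type == GNOME_VFS_MONITOR_EVENT_DELETED)
    g_print ("share went away: %s\n", info_uri);
}

static GnomeVFSMonitorHandle *
watch_local_shares (void)
{
  GnomeVFSMonitorHandle *handle = NULL;

  gnome_vfs_monitor_add (&handle, "dns-sd://local/",
                         GNOME_VFS_MONITOR_DIRECTORY,
                         share_changed, NULL);
  return handle;
}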
> > 
> > This is all done in each client; however, since the query will be
> > constantly active for as long as any process that has ever accessed
> > dns-sd://local/ is running, howl should be able to cache all the
> > results from it. (If it's smart enough to notice that a new query is
> > identical to an already running query and share them.)
> 
> Avahi does detect this, too. But starting and stopping queries which
> are only referenced by a single program at a time is a waste of
> resources. 

I think this talk about resources is taking things a bit far. In
practice, for normal users, the only time this code runs is when
Nautilus visits the "Network" location (network:///, which merges in
dns-sd://local/ if the gconf key /system/dns_sd/display_local is set,
which it is by default). Nautilus is generally a process that lives for
as long as the desktop session, so the query will always be running.
Very few other processes will normally read either network:/// or
dns-sd://local/ directly.

> > What part of this do you think is non-optimal, and how would you do it
> > instead?
> 
> I still think the solution of forking off a background process that
> collects responses for all applications of a user is a better
> idea. Why? Because it makes sure that the query is kept alive as long
> as possible/useful, to minimize traffic. In addition, different
> programs can make use of the same aggregated data (i.e. running
> gnomevfs-ls twice within a short interval would impose the aggregation
> delay for the first invocation only). Thirdly, you need to wait at
> least a full second to get a reliably complete list of services on the
> LAN. However, imposing a 1s delay every time the user accesses a DNS-SD
> share is not very user friendly, so it's best to avoid it as much as
> possible.

In practice this forked-off process would be doing exactly the same
work as Nautilus does in the normal case.

> (A quick side note: Avahi (SVN) fires a special signal "ALL_FOR_NOW"
> when it thinks that more responses are unlikely in the near future.
> Better to use this as a replacement for an unconditional time-based
> delay.)

Yeah, that sounds quite useful, and certainly better than hardcoding a
time. How do you detect when to fire it though?
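
For anyone following along, this is how I'd expect a client to consume
that event (a sketch against the avahi-client API as released, which
may well differ from what's in SVN right now):

#include <avahi-client/client.h>
#include <avahi-client/lookup.h>

static void
browse_cb (AvahiServiceBrowser *b, AvahiIfIndex interface,
           AvahiProtocol protocol, AvahiBrowserEvent event,
           const char *name, const char *type, const char *domain,
           AvahiLookupResultFlags flags, void *userdata)
{
  switch (event)
    {
    case AVAHI_BROWSER_NEW:
      /* add "name" to the share list */
      break;
    case AVAHI_BROWSER_REMOVE:
      /* drop "name" from the share list */
      break;
    case AVAHI_BROWSER_ALL_FOR_NOW:
      /* more replies are unlikely soon: hand back the aggregated
       * list now instead of waiting out a hardcoded delay */
      break;
    default:
      break;
    }
}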

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
 Alexander Larsson                                            Red Hat, Inc 
                   alexl redhat com    alla lysator liu se 
He's an otherworldly vegetarian astronaut from the Mississippi delta. She's a 
hard-bitten blonde bounty hunter with an MBA from Harvard. They fight crime! 



