From: Havoc Pennington <hp redhat com>
To: Rodrigo Moya <rodrigo gnome-db org>
Cc: Jeff Waugh <jdub perkypants org>, GNOME Hackers <gnome-hackers gnome org>, GNOME Components <gnome-components-list gnome org>
Subject: network transparency (Re: GNOME CVS: gnome-core mmclouglin)
Date: 02 Dec 2001 12:49:09 -0500
Hi,
Wow, wake up to a cool is-network-transparency-good flamewar in the
morning. ;-)
Rodrigo Moya <rodrigo gnome-db org> writes:
> while it is a good idea to have those interfaces to those libraries, the
> point in having remote components is to have them totally network
> transparent, that is, that I can just change a .oaf/.server file to
> point to a component on a remote machine, restart the application that
> uses that component, and have it work exactly the same way as it does
> with a local component.
That results in one of two things: either a) your local-case code is
far more complex than normal, or b) your remote-case code is pretty broken.
For many cases in GNOME so far, b) has turned out to be true.
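To make that concrete, here's a sketch; "Demo_ItemStore" and its stub
are invented, and only the CORBA_* names are the standard C mapping.
The first call is what "transparent" code looks like, the second is
what the remote case actually demands:

    CORBA_Environment ev;
    CORBA_long count;

    /* The transparent-looking version: written as if the call were
     * local, never glancing at the environment.  What happens when
     * the server hangs or dies mid-call? */
    count = Demo_ItemStore_getItemCount (store, &ev);

    /* The version that takes the remote case seriously: */
    CORBA_exception_init (&ev);
    count = Demo_ItemStore_getItemCount (store, &ev);
    if (ev._major != CORBA_NO_EXCEPTION)
      {
        /* Server crashed, connection dropped, call never came
         * back... the app needs an actual recovery plan here, and
         * local-style code simply doesn't have one. */
        CORBA_exception_free (&ev);
        return;
      }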
Networks crash. Networks have latency, and function calls are no longer
fast. Out-of-process components crash, get killed, etc. Asynchronicity
achieved by re-entering the main loop results in fragile apps. It
just doesn't work worth a damn. Thus you see Evolution and Nautilus
getting into bizarre hosed states and losing processes, even though
Nautilus at least already has more-complex-than-local code, required
only because of the remote case, to try to handle remoting errors.
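For the curious, the fragile main-loop pattern looks roughly like
this; gtk_main_iteration() is real GTK, everything else is an invented
stand-in:

    static void
    on_reply (gpointer user_data)
    {
      *(gboolean *) user_data = TRUE;
    }

    static void
    do_remote_thing (Component *component)
    {
      gboolean reply_arrived = FALSE;

      send_request_async (component, on_reply, &reply_arrived);

      /* "Block" by spinning the main loop until the reply shows up.
       * While we spin, timeouts fire, other callbacks run, the user
       * keeps clicking, and this very function can be re-entered
       * before the first reply ever arrives. */
      while (!reply_arrived)
        gtk_main_iteration ();
    }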
The irony of CORBA is that we only really need it at the moment for
the out-of-process case, but CORBA only really works properly for the
in-process case. ;-) I'm exaggerating a bit, but not too much. It may
sound like an extreme idea, but I think it's pretty well accepted these
days; Rob Gingell's talk at GUADEC made this point, for example, and
the SOAP/XML-RPC trend reflects it as well.
In reality, apps that want to robustly use a network server need to be
enormously aware that they are networked; GConf has mountains of code
that would not be required if gconfd were in-process. It jumps through
hoops to avoid making remote calls when it can use local info, it
jumps through hoops to survive server or client crashing, it jumps
through hoops to try to avoid blocking too long, it jumps through
hoops to avoid having multiple server instances. It's a big old nasty
mess, but at least it tries to be robust.
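To give a flavor of one of those hoops, a sketch (not actual GConf
code; only the g_* calls are real GLib):

    static GHashTable *cache = NULL;

    const char *
    client_get_string (const char *key)
    {
      char *value;

      if (cache == NULL)
        cache = g_hash_table_new (g_str_hash, g_str_equal);

      /* Prefer local info: a cached read costs nothing, touches no
       * sockets, and keeps working while the server is dead. */
      value = g_hash_table_lookup (cache, key);
      if (value != NULL)
        return value;

      /* Only now do we risk the wire.  fetch_from_server() is an
       * invented stand-in that itself has to survive a missing or
       * crashed server and a call that never returns. */
      value = fetch_from_server (key);
      if (value != NULL)
        g_hash_table_insert (cache, g_strdup (key), value);

      return value;
    }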
So what you really want isn't network transparency, but easy
networking; i.e. nice interfaces for doing networking that expose the
things you need to worry about, such as asynchronicity and latency,
and give you good tools to manage them. Burying those things under
"transparency" is not a win; it just means people don't deal with
them.
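In interface terms, the difference might look something like this; the
API is entirely invented, it's just meant to show the shape:

    /* Asynchronicity, latency and failure are right there in the
     * signature, instead of hiding behind a blocking call that
     * pretends to be local. */
    typedef void (*StoreFetchCallback) (const char   *value,
                                        const GError *error,  /* NULL on success */
                                        gpointer      user_data);

    void store_fetch_async (Store              *store,
                            const char         *key,
                            guint               timeout_ms,  /* caller decides
                                                                how long to wait */
                            StoreFetchCallback  callback,
                            gpointer            user_data);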
Anyhow, I think we need to stick to fairly controlled and sane
out-of-process situations for now, such as panel applets. Our current
architecture should handle that fine. I don't think we're going to be
able to scale to more genuinely networked situations until we face the
reality that Networking is Hard.
We do need to go ahead and fix GNOME so you can run two concurrent
sessions, though, ideally on two different machines... this is an
important short-term issue, and it probably involves at least some
remote componentry.
Havoc