Re: [g-a-devel] Re: Proposed implementation-agnostic GNOME Speech API.



Hi Bill, Michael,

> >	Actually, looking at the code, this doesn't seem to me to add a
> >whole lot. I don't think providing a different API really hides much
> >more of the implementation.
>
> I agree with Michael here.  The existing C bindings are fairly simple; 
> IMO the time spent writing more wrappers would be better spent writing a 
> few bits of sample code for the benefit of developers who aren't totally 
> at home with CORBA C bindings.  There's really very little difference 
> between
> 
> GNOME_Speech_Speaker_getSupportedParameters (obj, &ev);
> 
> and
> 
> speech_speaker_get_supported_parameters (speaker);

There is a big difference: the latter completely hides the underlying
implementation, while the former doesn't. If an application writes to the
latter speech API, and a new speech implementation using alternative
technology comes along at a later date, that application will not have to
be rewritten to work with it.
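
To make that concrete, a wrapper along these lines is all it takes to keep
CORBA out of the application's sight. This is only a sketch: the
SpeechSpeaker type, the header name, and the mapping of the returned
parameter list onto plain GLib types are hypothetical, and it assumes the
generated GNOME_Speech stubs:

/* speech-speaker.c - a hypothetical wrapper sketch, not the real API. */
#include <glib.h>
#include <orbit/orbit.h>
#include "GNOME_Speech.h"   /* generated CORBA stubs; name may differ */

/* Opaque to applications; the CORBA reference never leaks out. */
struct _SpeechSpeaker {
        GNOME_Speech_Speaker corba_speaker;
};
typedef struct _SpeechSpeaker SpeechSpeaker;

GSList *
speech_speaker_get_supported_parameters (SpeechSpeaker *speaker)
{
        CORBA_Environment ev;
        GNOME_Speech_ParameterList *list;
        GSList *params = NULL;

        CORBA_exception_init (&ev);
        list = GNOME_Speech_Speaker_getSupportedParameters
                (speaker->corba_speaker, &ev);
        if (ev._major == CORBA_NO_EXCEPTION) {
                /* ...copy each entry into a plain C structure and
                 * prepend it to params... */
                CORBA_free (list);
        }
        CORBA_exception_free (&ev);
        return params;
}

No CORBA_Environment, no exception handling, no ORB anywhere in the
caller's view.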

The second intent here is to look at the bigger picture and provide a
speech API (and implementation) that will work across desktops (GNOME
and/or KDE). That won't happen if CORBA types are exposed in the function
signatures, but it might if they are hidden. It won't happen with the
existing CORBA impl. either, but I intend to prototype using D-BUS.
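
To give a feel for it, the same entry point could later be re-implemented
over D-BUS without the application noticing. Again only a sketch: the
service, path, and interface names are made up, and unpacking the reply is
elided:

/* The same function, re-implemented over D-BUS - a sketch only. */
#include <dbus/dbus.h>
#include <glib.h>

GSList *
speech_speaker_get_supported_parameters (SpeechSpeaker *speaker)
{
        DBusConnection *conn;
        DBusMessage *call, *reply;
        DBusError err;
        GSList *params = NULL;

        dbus_error_init (&err);
        conn = dbus_bus_get (DBUS_BUS_SESSION, &err);
        if (conn == NULL)
                return NULL;

        /* Hypothetical service/interface names; in a real impl. the
         * speaker handle would select the object path. */
        call = dbus_message_new_method_call ("org.gnome.Speech",
                                             "/org/gnome/Speech/Speaker",
                                             "org.gnome.Speech.Speaker",
                                             "GetSupportedParameters");
        reply = dbus_connection_send_with_reply_and_block (conn, call,
                                                           -1, &err);
        dbus_message_unref (call);
        if (reply != NULL) {
                /* ...unpack the parameter list from the reply into
                 * params... */
                dbus_message_unref (reply);
        }
        return params;
}

The signature is unchanged, so applications built against the wrapper
keep working.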

> >	Then of course there is the Java angle - the C binding doesn't make
> >life any easier for Java/Python etc., which are unlikely to want to write
> >extra custom bindings for gnome-speech - esp. in its not-uber-stable
> >API state; CORBA/IDL de-couples you from the linking problems there.
>
> Yes; and DBUS+Java is a potential problem, if one considers different 
> back-ends.  I think it's vital that both client and server be 
> implementable in more than just C here.  Certainly that was the impetus 
> behind making gnome-speech IDL-based; it also needs to be 
> remote-callable, since it would be fairly common for the speech engine 
> not to be co-located with the user application - for instance if the 
> application were remote (so that the audio device is local), or if the 
> user had a high-bandwidth connection for audio and a high-quality 
> speech engine were available on a server somewhere.  This last point 
> could be important for voice input as well.

There is nothing in the C-style speech API that prevents a client/server 
arrangement; in fact, that is exactly what the existing Bonobo/ORBit2
implementation is. Nothing changes in that respect.
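
Put another way, the application code is the same whether the speaker is
served by the local Bonobo/ORBit2 implementation or by an engine on
another machine (speech_speaker_new is hypothetical, like the rest of the
sketch above):

/* Application side: no CORBA, no D-BUS, and no knowledge of where
 * the engine actually runs. */
SpeechSpeaker *speaker = speech_speaker_new ("default");
GSList *params = speech_speaker_get_supported_parameters (speaker);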




