Re: [Kde-accessibility] Proklam and KMouth
- From: Peter Korn <peter korn sun com>
- To: Gunnar Schmi Dt <gunnar schmi-dt de>
- Cc: Pupeno <pupeno pupeno com>, kde-accessibility mail kde org, gnome-accessibility-list gnome org
- Subject: Re: [Kde-accessibility] Proklam and KMouth
- Date: Sun, 22 Sep 2002 23:01:17 -0700
Hi Gunnar,
Good to hear about your KMouth project to provide augmentative communication
functionality in the Linux environment. There are quite a few special-purpose
devices that do this today in the commercial world, and I'd bet that a number of
those companies would be very interested in supporting and re-packaging a
general-purpose Linux solution.
I highly recommend you check out the GNOME Accessibility work (see
http://developer.gnome.org/projects/gap). The GNOME community is developing
a rich accessibility infrastructure on top of the GNOME 2 library stack,
including the GNOME Accessibility Service Provider Interface (which should
really perhaps be called the GNU Accessibility Service Provider Interface).
That interface is used by several assistive technologies (such as Gnopernicus,
the GNOME screen reader/magnifier, and GOK, the GNOME On-screen Keyboard) to get
access to all of the applications written using the GTK+ libraries and the Java
Swing libraries, as well as StarOffice and Netscape. User interfaces which
support the AT SPI (either directly or through one of several bridges) would
therefore work with these assistive technologies.
Also part of the GNOME Accessibility work is the gnome-speech project, which
provides an API to text-to-speech engines (both software and hardware).
gnome-speech presently has drivers for Festival, FreeTTS (a Java port of
Flite [Festival Lite]), and the ViaVoice engine (IBM's packaging of the
Eloquence engine, now distributed by SpeechWorks). We are looking at various
hardware synthesizers as well, such as the popular DECtalk Express.
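To make that concrete, here is a rough illustration of the one-API-several-drivers
idea. This is not the actual gnome-speech interface (gnome-speech is exposed
through the GNOME component model, not called like this); the class and function
names are hypothetical, and the only real dependency assumed is a 'festival'
binary on the PATH.

# Illustration only: one driver interface fronting several engines, the way
# gnome-speech fronts Festival, FreeTTS, and ViaVoice. Names are hypothetical;
# only the 'festival' command itself is real.
import subprocess

class SpeechDriver:
    """Common interface a client such as KMouth would program against."""
    def say(self, text: str) -> None:
        raise NotImplementedError

class FestivalDriver(SpeechDriver):
    """Backs the interface with Festival's command-line front end."""
    def say(self, text: str) -> None:
        # 'festival --tts' reads plain text on stdin and speaks it aloud.
        subprocess.run(["festival", "--tts"],
                       input=text.encode("utf-8"), check=True)

def get_driver(name: str) -> SpeechDriver:
    # Supporting another engine (FreeTTS, ViaVoice, a hardware synthesizer)
    # means adding another driver class here; client code stays unchanged.
    drivers = {"festival": FestivalDriver}
    return drivers[name]()

if __name__ == "__main__":
    get_driver("festival").say("Hello from the driver sketch.")

The point for KMouth is that it would only ever talk to the common interface,
and every engine with a driver comes along for free.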
It would be great if KMouth could work with gnome-speech, and thereby support
all gnome-speech synthesizers. And, especially if you are concerned about
finding ways for users with limited physical dexterity to drive the KMouth
interface, you might consider supporting AT SPI. If you did, users with
significant physical impairments who use the GNOME On-screen Keyboard could
also use KMouth, which would provide immediate access to users of
single-switch systems, sip-and-puff devices, mouth-sticks, head-mice, and
even eye-tracking systems.
Please don't hesitate to ask questions on this list -
<gnome-accessibility-list gnome org>. You'll find a large community of
experts here.
Regards,
Peter Korn
Sun Accessibility team
Gunnar Schmi Dt wrote:
Hello,
I am writing a project called KMouth, which enables people who cannot speak
to let their computer speak, e.g. mute people or people who have lost their
voice (like my mother, who cannot control her tongue).
Among the planned features of KMouth are the possibility to select phrases
from (user-defined) phrase books (so that regularly used phrases only need
to be typed in once), and the selection of a language that is used for the
pronunciation (so that, for example, English phrases can be spoken with an
English pronunciation and German phrases with a German pronunciation).
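As a rough sketch of how a phrase book might pair its phrases with a language
tag (the structure and field names below are purely illustrative, not an
actual KMouth format):

# Illustrative only: each phrase book carries a language tag, so a selected
# phrase can be routed to a synthesizer configured for that language.
phrase_books = {
    "Greetings (English)": {
        "language": "en",
        "phrases": ["Hello.", "How are you?", "Thank you very much."],
    },
    "Begrüßung (Deutsch)": {
        "language": "de",
        "phrases": ["Hallo.", "Wie geht es dir?", "Vielen Dank."],
    },
}

def phrases_for(book_name: str):
    """Return (language, phrases) so the caller can pick the right TTS backend."""
    book = phrase_books[book_name]
    return book["language"], book["phrases"]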
Currently the program uses a shell script for the actual text-to-speech
conversion. As I have seen that Proklam is intended to provide both an
interface for other programs to speak and a module in kcontrol, I would like
to use Proklam as the standard text output system, so that I do not have to
deal with configuration issues myself.
From the messages on the mailing lists, I have understood that Proklam does
not support more than one text-to-speech system at a time. However, there are
some multilingual text-to-speech systems (e.g., Festival, if what I have
heard is correct). What I have not yet found is information on whether the
language can be specified when a text is spoken.
Another issue is that you might need to use more than one TTS system in order
to provide all languages of interest (for example, English and German for
my mother). So I basically have two options:
Either I implement my own GUI for a multilingual text-to-speech system
that uses a number of simple shell scripts (one for each language), or I find
a mechanism to switch the language within Proklam. Of course I would
prefer the latter, as I do not wish to duplicate functionality of Proklam.
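For comparison, the first option would boil down to something like the sketch
below; the script names and paths are made up, and each script would wrap one
engine (for example Festival for English and an MBROLA-based tool for German).

# Sketch of option one: one external script per language, chosen by a
# language code. Script names and locations are hypothetical.
import subprocess

LANGUAGE_SCRIPTS = {
    "en": "/usr/local/bin/speak-english.sh",  # e.g. wraps "festival --tts"
    "de": "/usr/local/bin/speak-german.sh",   # e.g. wraps txt2pho + mbrola
}

def speak(text: str, language: str) -> None:
    """Hand the text to whichever script is configured for the language."""
    script = LANGUAGE_SCRIPTS[language]
    subprocess.run([script], input=text.encode("utf-8"), check=True)

With a language switch inside Proklam, that table would live in one place
instead of being duplicated in every application that wants to speak.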
Also, as I am currently using Hadifix (in combination with mbrola) as a German
TTS system, I am thinking about helping to write a Proklam module for Hadifix.
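For reference, the Hadifix hand-off is essentially the following pipeline,
sketched here in Python; the voice-database path and the Latin-1 encoding are
assumptions about my local installation.

# Rough sketch of the Hadifix pipeline: txt2pho turns German text into MBROLA
# phoneme data, and mbrola renders that data into a wave file. The de1 voice
# path below is an assumption and differs between installations.
import subprocess

def speak_german(text: str, wav_path: str = "/tmp/kmouth.wav") -> None:
    txt2pho = subprocess.Popen(["txt2pho"],
                               stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    mbrola = subprocess.Popen(
        ["mbrola", "-e", "/usr/share/mbrola/de1/de1", "-", wav_path],
        stdin=txt2pho.stdout)
    txt2pho.stdout.close()                       # drop the parent's copy of the pipe
    txt2pho.stdin.write(text.encode("latin-1"))  # txt2pho expects ISO-8859-1 input
    txt2pho.stdin.close()
    mbrola.wait()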
Gunnar Schmi Dt
_______________________________________________
kde-accessibility mailing list
kde-accessibility mail kde org
http://mail.kde.org/mailman/listinfo/kde-accessibility