[Nautilus-list] Re: Feature request: mouse gestures in Nautilus



> 	There are decisions to be made on where the mouse gesture support
> should be put.
> 	a. How much should be abstracted? Should it be only for pointing
> 	devices like mice? Or for both mice and pens?

The device used to input gestures shouldn't be relevant (from a purely
technical point of view).

The main issue with different input devices is the shape of the gesture.
Different devices are better suited to different types of gestures: a
trackball, for example, is excellent for loose gestures like G and C,
while a mouse is better suited to R, L, T, and so on. Confirming or
refuting this through usability research and testing would be quite
interesting and useful.

> 	b. Should there be an X extension or is there one that can
> 	accommodate the mouse gesture functionality?
> 	c. Should the mouse gesture core be put in the X server, the
> 	graphical environment (like GNOME or KDE), the Window manager
> 	(like sawfish) or as an even higher level applications like
> 	in the Sensiva solution?

Purely by coincidence, wayV adopted the same approach as Sensiva: an
application that works at much the same level as a window manager.

There's no reason whatsoever to change X. As for window managers (WMs)
and graphical environments (GEs), I suspect full gesture integration
wouldn't be ideal, but I'm not 100% sure.

It would be useful if apps, WMs and GEs provided a non-GUI method of
controlling them, e.g. sending a message to Netscape telling it to open
a new window, or a message to your mail client telling it to fetch new
mail. Currently, one of the ways wayV controls the UI is by emulating
keypresses (a kludge) via the XTest extension.
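For concreteness, here is a minimal sketch of that kludge in C. It
assumes the focused application has an Alt+N "new window" accelerator;
the keysym and accelerator are illustrative, not part of any particular
app's documented interface. Compile with: cc fakekey.c -lX11 -lXtst

/* Fake an Alt+N keypress via the XTest extension. */
#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/keysym.h>
#include <X11/extensions/XTest.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    int ev, er, maj, min;

    if (dpy == NULL) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }
    if (!XTestQueryExtension(dpy, &ev, &er, &maj, &min)) {
        fprintf(stderr, "XTest extension not available\n");
        return 1;
    }

    /* Map the keysyms to hardware keycodes. */
    KeyCode alt = XKeysymToKeycode(dpy, XK_Alt_L);
    KeyCode n   = XKeysymToKeycode(dpy, XK_n);

    /* Press Alt, press n, release n, release Alt; the fake events
     * go to whichever window currently has input focus. */
    XTestFakeKeyEvent(dpy, alt, True,  CurrentTime);
    XTestFakeKeyEvent(dpy, n,   True,  CurrentTime);
    XTestFakeKeyEvent(dpy, n,   False, CurrentTime);
    XTestFakeKeyEvent(dpy, alt, False, CurrentTime);

    XFlush(dpy);
    XCloseDisplay(dpy);
    return 0;
}

The obvious weakness is that the fake keypress lands on whichever
window happens to have focus, which is exactly why a real
message-passing interface would be preferable.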

Key issues for gesture UIs include:
1. User feedback. What forms and types should it take?
2. User prompting. Should users have to rely entirely on memory for
all the available gestures? If not, how and when should they be
informed of the available gestures?
3. Gesture consistency. Should there be a standard set of gestures, or
should people be allowed to set up their own gestures to best suit
their visual memory?
4. Gesture matching techniques. libstroke uses one method, wayV another
and xscribble a third. Which should be used? What's the best architecture
for supporting different gesture recognition techniques? (A toy example
follows this list.)
5. Modes of input. How should gesture input be activated and ended?
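On point 4, here is a toy sketch of one simple matching technique,
direction quantization: reduce the pointer trail to a string of compass
directions and compare it against stored templates. This is purely
illustrative; it is not the actual algorithm used by libstroke, wayV
or xscribble.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct { int x, y; } Point;

/* Quantize one segment into U/D/L/R by its dominant axis. */
static char direction(Point a, Point b)
{
    int dx = b.x - a.x, dy = b.y - a.y;
    if (abs(dx) >= abs(dy))
        return dx >= 0 ? 'R' : 'L';
    return dy >= 0 ? 'D' : 'U';   /* X11 y grows downward */
}

/* Encode a trail, collapsing runs of the same direction. */
static void encode(const Point *pts, int n, char *out, int cap)
{
    int len = 0;
    for (int i = 1; i < n && len < cap - 1; i++) {
        char d = direction(pts[i - 1], pts[i]);
        if (len == 0 || out[len - 1] != d)
            out[len++] = d;
    }
    out[len] = '\0';
}

int main(void)
{
    /* An L-shaped stroke: down, then right. */
    Point trail[] = { {0,0}, {0,10}, {0,20}, {10,20}, {20,20} };
    char code[32];

    encode(trail, 5, code, sizeof code);
    printf("encoded: %s\n", code);        /* prints "DR" */

    if (strcmp(code, "DR") == 0)
        printf("matched gesture: L\n");
    return 0;
}

Real recognizers also have to cope with noisy diagonals, scale and
timing, which is one of the places where the different approaches
genuinely diverge.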

Completely off-topic: when is X going to support multi-scalar windows?
That would be noteworthy, extremely useful, and would require some
interesting X server coding.

And now I must get on with making gesture input possible via a webcam :)

Mike

-- 
wayV - Open Source Handwriting and Gesture Recognition for X 
       http://stressbunny.com/wayv

Not till the waters refuse to glisten for you and the leaves to rustle
for you, do my words refuse to glisten and rustle for you.
                                   -- Walt Whitman




