I agree with you that it would be a useful fallback. It could never be a primary solution, though: it is essentially screen-scraping, and it would have the same disadvantages as the screen-scraping approaches that were used before accessibility APIs existed:
* Accessibility APIs make it the app developer's responsibility to implement proper accessibility, and this is by design. App developers know not just their app but also the content associated with it, and can therefore implement an accessible experience that may differ from the visual layout and yet be more efficient for AT users. An example is the list of recommendations on YouTube: visually it is a grid, but the screen reader sees it as a list of headings.
* One of the most important requirements for a screen reader is responsiveness. The sooner a blind user knows about an update in the interface, the better; even a slight delay before an announcement can make an interface feel sluggish. This is why blind users often run speech at a very fast speaking rate. A pattern-based approach will always be slower than simply reading the state via the accessibility API (see the sketch after this list).
* It is dependent on the visual layout, which means accessibility may break just because an app got new icons or a few components shifted position. These things are independent of the accessibility API.
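To make that concrete: on the GNOME stack, "reading the state" means asking AT-SPI directly. Here is a minimal sketch, assuming the pyatspi bindings (the ones Orca itself is built on) are installed and an AT-SPI bus is running; it just prints the name and role of whichever widget gains focus, where a real screen reader would send that text to speech instead:

import pyatspi

def on_focus(event):
    # detail1 is 1 when the widget gained focus, 0 when it lost it.
    if not event.detail1:
        return
    acc = event.source  # the accessible object, as exposed by the app itself
    print("focused:", acc.name or "<unnamed>", "-", acc.getRoleName())

# Ask AT-SPI to call us back whenever any widget's "focused" state changes,
# then enter the event loop.
pyatspi.Registry.registerEventListener(on_focus, "object:state-changed:focused")
pyatspi.Registry.start()

There is no image capture or pattern matching anywhere in that path, so the announcement can go out as soon as the toolkit emits the event.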
Still, it may be a useful approach to consider for special edge cases, and I am interested to see what happens in this space.
Regards,
Rynhardt
_______________________________________________

Hi Shadyar,

Not an immediate solution at all, but I would say that AI (Machine Learning) which snapshots the screen or window and is able to extract the text from the snapshot image, to then read it aloud, might be superior to legacy accessibility API paradigms, which rely on the application developers to interleave "accessibility" (ARIA etc.) information in each and every field.
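Just to sketch the core of the idea (assuming the pytesseract bindings, the Tesseract OCR engine and speech-dispatcher's spd-say are available, and that a screenshot has already been captured to a hypothetical /tmp/screen.png), the OCR-and-speak step could be as small as:

import subprocess

import pytesseract
from PIL import Image

# Recognise whatever text is visible in the captured screenshot ...
text = pytesseract.image_to_string(Image.open("/tmp/screen.png"))

# ... and hand it to speech-dispatcher to be read aloud.
if text.strip():
    subprocess.run(["spd-say", text])

The hard part, of course, is everything around it: deciding what to capture, when to re-capture, and how to turn a flat blob of text back into something navigable.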
Or at least as an augmentation that should be able to provide a really great fallback to any ARIA-like paradigm. It would be a project, sure, but it is really very accomplishable in this day and age.

Hopefully one day our desktops will be more fluid than only providing voice services on top of a graphical interaction interface, but a lot can be done until then by leveraging computer vision AI in this space. Sorry again that this is not an immediate solution.

Matan

On Fri, May 28, 2021 at 4:04 AM Shadyar Khodayari via gnome-accessibility-list <gnome-accessibility-list gnome org> wrote:

Hello
I am a blind computer engineer and developer, and I am thoroughly familiar with Windows and the NVDA screen reader.
I recently installed Ubuntu Linux 20.04 and am using Orca.
I have read the Accessibility section of the Ubuntu documentation as well as the Orca documentation.
After logging in,
1. When I go to the desktop, either by pressing Super+D or by holding Alt+Ctrl and pressing Tab, and then press the arrow keys or Tab, Orca does not read the desktop icons.
2. When I open the Settings window, I press Tab numerous times, but Orca does not read the settings categories like wireless, Bluetooth, etc. in the window. It seems the focus never moves to this part of the window.
3. When I open a window like the Files application or the trash, I press Tab numerous times, but Orca does not read the main part of the window. It seems the focus never moves to this part of the window.
4. Should I do any specific configuration in GNOME?
5. Should I install another desktop environment?
I would appreciate it if you could help me.
Thanks and Regards
Shadyar Khodayari
_______________________________________________
gnome-accessibility-list mailing list
gnome-accessibility-list gnome org
https://mail.gnome.org/mailman/listinfo/gnome-accessibility-list