Re: Article on Screen Reader History (including NVDA)

Hi Gene and others,

Would you mind explaining the reasons behind NV Access adding the following commands in 2012:

  • One finger flick down: read next line in text review mode
  • One finger flick up: read previous line in text review mode
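
For what it's worth, here is roughly how such a mapping can be wired up in an NVDA add-on. This is a minimal sketch of a global plugin, assuming the standard add-on APIs (globalPluginHandler, scriptHandler, globalCommands) and the "ts(text):" touch gesture prefix for text-mode flicks; it simply delegates to NVDA's built-in review cursor commands, and it is not the actual NV Access or Enhanced Touch Gestures code.

import globalPluginHandler
import globalCommands
from scriptHandler import script


class GlobalPlugin(globalPluginHandler.GlobalPlugin):

    @script(
        description="Touch: read the next line at the review cursor",
        gesture="ts(text):flickDown",  # assumed text-mode flick identifier
    )
    def script_touchNextLine(self, gesture):
        # Delegate to NVDA's built-in review cursor command.
        globalCommands.commands.script_review_nextLine(gesture)

    @script(
        description="Touch: read the previous line at the review cursor",
        gesture="ts(text):flickUp",  # assumed text-mode flick identifier
    )
    def script_touchPreviousLine(self, gesture):
        # Delegate to NVDA's built-in review cursor command.
        globalCommands.commands.script_review_previousLine(gesture)

Something along those lines, placed in an add-on's globalPlugins package, is all it takes to attach a touch flick to an existing command, which is part of why extending NVDA's touch support through an add-on was practical in the first place.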

Context: it's 2012, and Microsoft has announced Windows 8. We'll just say that, for the purposes of this thread, Windows 8 was touch-centric. Microsoft engineers then say to themselves, "okay, we have a screen reader called Narrator that ships with Windows. Since Windows 8 will be a friend to touchscreens, what can Narrator do?" The result: touch commands in Narrator. Fast forward to June 2012, and the NV Access folks (back then, Mick and Jamie) said, "okay, Narrator might be developing touch-centric commands. Can NVDA do it?" The result: NVDA 2012.3 with touch support. Fast forward to 2013, and a college student from the United States says to himself, "okay, NVDA comes with touchscreen support, but some important commands are missing. Can I improve it somehow?" The result: the Enhanced Touch Gestures add-on.

I think it would have made sense to say that the intention of screen reader developers (count me in as one of them) was to focus solely on keyboard navigation. This made sense until 2009, but we know what happened that year. Nowadays, I think it makes more sense to say that the keyboard is the primary input method, with mouse and touchscreens (and other interaction paradigms) slowly catching up.

I carefully pose the following: part of our insistence that keyboard interaction is the only way to use screen readers is our own assistive tech training. As many people have pointed out (directly or not), in the 20th century we grew up with the notion that mouse interaction was out of reach because blind people supposedly could not interact with screen elements effectively, so tutorials and teachers focused on keyboard navigation. In the 2020s, the screen reader landscape and interaction paradigms are vastly different, with tutorials and teachers also covering touchscreen and mouse features. Why did various screen readers begin focusing on touch and mouse interaction in the 2010s? Among other reasons, the proliferation of touchscreens made app developers, operating system vendors, screen reader developers, and users realize the advantages and drawbacks of different paradigms.

As a person who has experienced different input paradigms (keyboards, braille displays, touchscreens, mouse, voice, and even code), has been a long-time user of various screen readers (NVDA and JAWS, to name a few), and has produced tutorials for assistive tech software and hardware, I understand the sentiment that the keyboard is the way to go as far as screen reader interaction is concerned. But remember the bug fix item I posted earlier: I am the person who contributed that fix, stemming from my belief that users can use a variety of input devices to accomplish the purpose of a screen reader: to process, interpret, and present screen information. And trust me, that bug fix cost me some sleep a few weeks ago (in the end, it worked out).

What I'm ultimately saying is this: let us teach users to dream big. The keyboard, while the primary interaction paradigm, is not the only way to get a screen reader to perform its task of letting users understand what is shown on screen. Let us not perpetuate an "input interaction blackout" - limiting people to the primary ways of doing things so that they never notice other possibilities exist. And in the immediate context, it is certainly possible to read web content via mouse and touch - we were used to keyboards because virtual buffers (browse mode) were designed for document navigation with keyboards in mind, but today's web calls for different interaction strategies.

Cheers,

Joseph
