Hi all,
Before we get into talking about NVDA components, it is important to think about what a screen reader is and is not, as well as the overall concepts (and the big picture) behind the possibilities and constraints of screen readers. We also need to go over accessibility in relation to screen reading. Only then will the rest of the Inside Story posts make sense, because the story begins and ends with defining the reality, possibilities, and constraints of screen reading technology (anyone wishing to submit code contributions to the NVDA project needs to think about the overall social and cultural reality NVDA and its users are facing).
First, let’s talk about what a screen reader is not. A screen reader is not an operating system, nor the user interface for an operating system. It is not a “jack of all trades” productivity tool, nor the only way for blind people to use computers (although screen readers get a lot of attention because they are among the most recognizable assistive tools in society). A screen reader is not your accessibility advocate, nor is it designed to bring disability justice to everyone. Most importantly, a screen reader is not the million-dollar answer to everything in life, blindness, and accessibility. Shocking? I assume so (for most of us).
The truth is, I sometimes feel that a
screen reader is one or more of the “nots” I listed. Folks
on this forum encounter and live with screen readers 24
hours a day, 7 days a week, 365 (or 366) days a year. And
screen readers like NVDA are gaining more and more
mainstream attention (do a Google search for the terms
“accessibility” and “screen readers” and one of the results
is an article from The Verge published not so long ago on
the subject of screen reader history; the NVDA forum had an extensive discussion about it a while back). We use screen readers in many contexts: schools, workplaces, accessibility testing, software development, and even as an example of accessibility progress.
So what exactly is a screen reader? Across many Google search results, the common theme is that it is a program that helps blind people use computers by reading screen content. More specifically, a screen reader is a program that reads content the user is interacting with (or not). Sometimes the content is both accessible and usable (both terms are important), and sometimes it is neither, requiring tips and tricks to make it screen reader and user friendly. I will come back to this in a moment.
In a more technical definition, a screen reader is an information processor: it gathers, interprets, and presents information displayed on screen, and provides ways for blind users to interact with the computer-based task at hand. Screen readers such as NVDA use facilities provided by the operating system (Microsoft Windows, in this case) and apps to gather information on the screen (and sometimes off-screen). Screen readers have rules and expectations about what the gathered information is and should be, and use sophisticated rules to interpret what they have “seen”, i.e. gathered with help from the operating system, the app in question, and other means. Based on the information gathered and subsequently interpreted, screen readers use components such as text-to-speech (TTS), braille, and other output mechanisms to present screen content. I will address exactly which components are part of NVDA in the next Inside Story.
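To make this model concrete, here is a deliberately simplified sketch in Python (the language NVDA itself is written in). Every name in it is a made-up illustration, not NVDA’s actual code; it only models the three stages with pluggable output mechanisms:

# A hypothetical sketch of the information processor model above.
# None of these names come from NVDA's source code.

class ScreenReader:
    """Gathers, interprets, and presents information."""

    def __init__(self, outputs):
        # Output mechanisms, such as text-to-speech or a braille display.
        self.outputs = outputs

    def gather(self, os_info):
        # In reality this comes from operating system facilities
        # (accessibility APIs and friends); here it is a plain dict.
        return os_info

    def interpret(self, info):
        # Apply rules and expectations about what the gathered
        # information is and should be.
        if info.get("kind") == "typedCharacter":
            return info["char"]
        return "unknown"

    def present(self, message):
        # Hand the interpreted result to every output mechanism.
        for output in self.outputs:
            output(message)

def speak(text):
    print("TTS says: " + text)

def braille_show(text):
    print("Braille shows: " + text)

reader = ScreenReader(outputs=[speak, braille_show])
info = reader.gather({"kind": "typedCharacter", "char": "H"})
reader.present(reader.interpret(info))

Running this prints “H” through both stand-in outputs, mirroring speech and braille.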
To illustrate the overall workings of a screen reader at the highest level (or not so high a level), let us say that you open Notepad and type the letter “H”. On screen, the letter “H” is shown, and NVDA says “H” if speak typed characters is on (NVDA+number row 2). If a braille display is connected, it will show the letter “H” in braille (in Unified English Braille, that is dot 6, the capital indicator, and then dots 125; in this case, it could be dots 56, the grade 1 indicator, then dot 6, then dots 125). But how can NVDA accomplish so much magic? Here’s how:
- User types the letter “H”.
- Windows realizes that something
happened from the keyboard, so it tries to interpret what
happened.
- Windows sees that a character
was entered and sees where the system focus is.
- Windows sees that Notepad, a text editor, is in use, so it displays the letter “H” on the screen.
- At the same time, a helper called an accessibility API notices this event and sees that a character was entered.
- The accessibility API then
tells whoever is listening (NVDA, in this case) that an
input event occurred.
- Meanwhile, Notepad (the app) realizes that an accessibility API is running, so it says to the accessibility API, “please raise a value change event so the screen reader can announce it to the user.”
- In turn, the accessibility API raises a value change event, which is then recognized by NVDA.
- NVDA knows that a value change
event has occurred, so it tries to find out what has
changed, and eventually sees that a new character was
entered.
- NVDA then uses the configured
speech synthesizer to inform the user that the letter “H”
has been entered. This does not happen if the user says to
NVDA, “don’t tell me typed characters.”
The steps listed above should provide just enough information to demonstrate the idea that a screen reader is, in essence, a sophisticated information processor: it gathers, interprets, and presents information.
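To see those steps as code, here is another simplified sketch (again with made-up names, not real Windows or NVDA APIs): the app asks the accessibility API to raise a value change event, and whoever is listening (the screen reader) reacts to it:

# Another hypothetical sketch: an event broker standing in for an
# accessibility API, with a screen reader registered as a listener.

class AccessibilityAPI:
    def __init__(self):
        self.listeners = []

    def register(self, listener):
        # The screen reader registers itself as a listener at startup.
        self.listeners.append(listener)

    def raise_event(self, name, **details):
        # Tell whoever is listening that an event occurred.
        for listener in self.listeners:
            listener(name, details)

speak_typed_characters = True  # models the NVDA+2 toggle

def nvda_listener(name, details):
    # NVDA sees a value change event, finds out what changed, and
    # eventually discovers that a new character was entered.
    if name == "valueChange" and speak_typed_characters:
        print("NVDA speaks: " + details["newChar"])

api = AccessibilityAPI()
api.register(nvda_listener)

# Notepad (the app) reacts to the keystroke and asks the API to
# raise a value change event, mirroring the steps above.
api.raise_event("valueChange", newChar="H")

The point of the sketch is the division of labor: the app and the operating system raise events, the accessibility API brokers them, and the screen reader decides what, if anything, to tell the user.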
Going back to what I said above about
accessible and inaccessible (and usable and unusable)
content: what I outlined above may suggest that everything
is accessible if things work out between the operating
system, apps, and screen readers. This ignores the fact that
screen readers are, believe it or not, workarounds to the
current social and cultural conditions of computing,
disability, accessibility, and usability. Remember one of
the “nots” of screen readers: they are not accessibility
advocates for you. Why? Think about the term “assistive
technology”. What does it mean in practice? It means that
computers, tablets, smartphones, and the gadgets we live with are not designed with disability in mind, and screen readers came along to “fill” the gap left by inaccessible and unusable computing. The history of screen readers is filled with slogans such as “equal access to technology”, “making things more productive”, “helping blind people get jobs”, and others
(the story of screen readers goes back decades, believe it
or not).
The term “assistive technology”, at first glance, is a positive definition for folks on this forum and elsewhere: tools to help you succeed in using computers to perform tasks. But on the flip side, it shows that, despite progress such as accessibility standards and novel approaches to providing “technological social justice” (disability emojis, for example), the world is still, for lack of a better word, unconcerned (or perhaps not educated enough, or not fully aware) when it comes to blind people. Screen readers exist precisely because of the lack of consideration for disabled people when designing digital technologies, and as we will see in subsequent posts in the Inside Story of NVDA series, people like Mick Curran and others came up with workarounds upon workarounds, demonstrating the continued need for advocacy.
My statement that screen readers are
workarounds should ring a bell for some of you. Not just
because your life experiences are filled with accessibility
advocacy, but also because it touches on one of my own
mantras about accessibility and usability: mindset matters.
Fixing inaccessible applications so they become screen reader and user friendly is just a micro-level solution. The steps I listed to demonstrate parts of NVDA internals came after years of advocacy by blind people, informing Microsoft that it needed to do better (people who lived through the 1990s should remember what I’m talking about). Accessibility standards and APIs are the next level up in solving computing issues for screen reader users (by writing them, people and organizations are acknowledging the continued issues faced by disabled people due to larger social and cultural issues at hand). The fundamental issue, and the reason that NVDA is not the million-dollar answer to everything in life for screen reader users, is the perpetuation of ignorance on both sides of the coin: ignorance by the public (the mainstream) that accessibility and usability matter in software design, and ignorance by screen reader users and disability advocacy organizations of the fact that we are a minority and must advocate continuously.
Putting all of this into the context of NVDA: just because the screen reader is free and open source does not mean equal access to technology is here at last. When you use NVDA or contribute code to the project, you are doing three things at once: showing dedication to the project, acknowledging the progress made in screen reading, and understanding the effects of social and cultural attitudes toward disability. The last one is the reality of screen reading as it stands in 2022: even if the COVID-19 pandemic made us realize how important screen readers are for us, it also brought challenges such as inaccessible and unusable videoconferencing systems, unreadable online documents, and the notion that technology can solve the world’s problems (it won’t, I think). When looking at NVDA from the big picture of accessibility and usability, it opens up possibilities and constraints: possibilities because the code is out there so people can study and research it, and constraints because the same source code demonstrates the larger social and cultural issues faced by blind people. This is perhaps the biggest lesson I want readers to understand as we meet NVDA internals: screen readers such as NVDA represent the reality, possibilities, and constraints of people needing to use alternatives due to social and cultural attitudes. And throughout the Inside Story series, I will highlight all three of them as much as possible.
Remember this: screen readers are not productivity tools, not the solution to life’s problems, not technological social justice, nor can they advocate for users. As sophisticated information processors, screen readers represent the reality, possibilities, and constraints of disability in the form of technology. NVDA shows both the progress made toward accessibility and usability and, by extension, the continued need for disability advocacy. I want all of you to understand this; otherwise the rest of The Inside Story of NVDA will not make sense. Not only will I take you on a journey through NVDA internals, but I will also help you contemplate a lot (anyone wishing to contribute code to the NVDA project must have the mindset that they are contributing to both the possibilities and constraints of accessibility and disability).
Next: NVDA screen reader components
and/or any feature you would like me to cover (comments and
topic suggestions are welcome).
Cheers,
Joseph