
Re: Announcing Office Desk: a new add-on to improve support for Microsoft 365 apps

 

Hi,

Good question. At the moment I have no plans for Microsoft Teams, but that might be something to think about once feedback starts coming in from folks.

As for overall development feedback, GitHub is the best place for it. Even if I don't get to implement bug fixes myself, I'm sure someone else can (I am taking a hands-off approach with Office Desk so I can focus more on Windows App Essentials, Add-on Updater, the Inside Story of NVDA series, and, more importantly, school).

Cheers,

Joseph


Re: Announcing Office Desk: a new add-on to improve support for Microsoft 365 apps

Sylvie Duchateau
 

Hello Joseph and all,

This is a good initiative.

Could you please tell us whether improvements for Microsoft Teams, which is part of Office 365, are included in this add-on?

How would you like the user community to help you improve the add-on? Should we send you feedback on this list or directly on GitHub?

Best

Sylvie

From: nvda@nvda.groups.io <nvda@nvda.groups.io> On behalf of Joseph Lee via groups.io
Sent: Sunday, 18 September 2022 09:20
To: nvda@nvda.groups.io
Subject: [nvda] Announcing Office Desk: a new add-on to improve support for Microsoft 365 apps

 

Hi all,

I’m delighted to announce a new add-on project designed specifically with Microsoft 365 users in mind: Office Desk, an add-on that provides improved support for Microsoft Office applications. Borrowing concepts from Windows App Essentials, this add-on consists of a global plugin and a collection of app modules for Office apps such as Excel, Word, Access, and others.
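
For those curious about the shape of such a project, here is a minimal app module sketch in Python, the language NVDA add-ons are written in. This is purely illustrative and not Office Desk's actual code:

    # appModules/winword.py: a minimal NVDA app module skeleton.
    # Illustrative only; not Office Desk's actual code.
    import appModuleHandler

    class AppModule(appModuleHandler.AppModule):

        def event_gainFocus(self, obj, nextHandler):
            # Inspect or adjust the focused object here before NVDA
            # announces it, then let normal processing continue.
            nextHandler()

A global plugin has a similar shape but lives under globalPlugins and runs regardless of which application is active.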

The goal of the Office Desk project is to serve as a collaborative add-on development effort where users and developers of Office applications can contribute needed fixes and enhancements. Although I’m starting this project with a couple of fixes for Word and an enhancement that spans Office apps, I envision a team of users and developers working on it, with me serving as the initial coordinator and release manager. Eventually, I expect parts of the add-on to become part of NVDA Core as the code matures. As this is a new experience for me (leading a team of developers instead of being mentored by more experienced ones), I expect some kinks to work through in the beginning, but I believe the collaborative approach to add-on development will provide a better experience.

To start, Office Desk includes the following fixes and enhancements:

  • In the Office backstage view, NVDA will announce the suggestion count when searching for recently opened files, and you can use the up and down arrow keys to review suggestions. This resolves NVDA issue 13772.
  • In Word 365, NVDA will no longer announce formatting commands such as bold and italic on/off multiple times. This resolves NVDA issue 10950.
  • In Word 365, NVDA will announce labels in places such as address fields in the envelopes dialog. This resolves NVDA issue 14156, first reported on this forum a few days ago.

The first beta build is now available for you to test:

https://github.com/josephsl/officeDesk/releases/download/dev/officeDesk-20220918-dev.nvda-addon

 

Important notes:

  • Office Desk requires Windows 10 or later and NVDA 2022.2 or later. Office Desk will present an error dialog if you attempt to install it on an older Windows release.
  • For the best experience, Office 2016 or later or the latest Microsoft 365 version is required (any Office application at version 16.0 will work best). Although Office 2013 might work, I cannot guarantee it, and that version of Office is nearing end of support.
  • To see the fixes in action, ask NVDA to use UI Automation in Word (NVDA Settings/Advanced; you must check the confirmation checkbox before you can modify the setting in question); the wording may differ between NVDA releases.
  • Due to the implementation of the Word 365 fixes, this add-on will conflict with the Word Access add-on developed by PaulBer19. It might be possible to solve the Word 365 issues without introducing a conflict with Word Access, but that may require changing NVDA internals, making the resulting fix unpredictable and possibly causing stability issues when using Word and Outlook.
  • The Office Desk add-on does not include the Outlook Extended add-on, so you can use both add-ons at the same time.

 

If things go well, I hope to release the stable version of the Office Desk add-on in October.

Cheers,

Joseph


Re: The Inside Story of NVDA: what a screen reader is and is not, possibilities and constraints of screen readers #NVDA_Internals

Brian's Mail list account
 

I think the biggest loss, and the one most program and web site developers cannot get their heads around, is the overview issue. Yes, you can list headers, lists, links, buttons and interactive areas, but the mental pictures we get are different to those the sighted will have.
I find it very frustrating to be told you just need to go right from where you are into the side bar, when with a keyboard it's impossible to achieve this in one movement unless you start emulating the mouse and manually shifting the focus, a task quite hard to do without accidentally triggering something else.
Brian

--
bglists@...
Sent via blueyonder.(Virgin media)
Please address personal E-mail to:-
briang1@..., putting 'Brian Gaff'
in the display name field.

----- Original Message -----
From: "Brian Vogel" <britechguy@...>
To: <nvda@nvda.groups.io>
Sent: Monday, September 19, 2022 3:27 AM
Subject: Re: [nvda] The Inside Story of NVDA: what a screen reader is and is not, possibilities and constraints of screen readers #NVDA_Internals




Re: could nvda announce entries in the Windows 11 emoji panel?

Quentin Christensen
 

And just for the record, I don't have ANY add-ons installed currently (and I never had a problem accessing the emoji panel 😀).


On Mon, Sep 19, 2022 at 12:54 PM Supanut Leepaisomboon <supanut2000@...> wrote:
Hmmm, I tried on my laptop just now and the emoji panel seems to work just fine, though on my laptop Windows App Essentials is running and I did not turn on any of the developer/advanced settings.
Maybe I'll have to investigate further on my desktop back home to see what's going on. Though, on my laptop, I just updated to NVDA 2022.2.3.



--
Quentin Christensen
Training and Support Manager



Re: The Inside Story of NVDA: what a screen reader is and is not, possibilities and constraints of screen readers #NVDA_Internals

 

On Sun, Sep 18, 2022 at 08:26 PM, Gene wrote:
You can't have equal access as a blind user because you are not accessing the technology in the same way and sight provides more information and faster when dealing with computer information if it is visual.
-
Gene,

You and I are in close to absolute agreement about this.  But, and it's an important but, it's "as equal as can currently be achieved" when it comes to written content.  

If you want equal to be "equal to having vision," then that will never be achieved.  I've said it many times, and have no hesitation about doing so:  screen readers are workarounds that substitute one sensory modality, audition/hearing, for another, vision.  There is no way to do this without something being "lost in translation," even if the only thing (and it's usually not the only thing) is speed.  The same would be true in reverse.  And there are things, like picture description, that will never, ever, ever come close to being able to transmit the information an image contains, in its totality, that can be and is transmitted by seeing it.  Just as you cannot say that describing something as "a flute playing" or "a piano playing" carries the same information, whether to a deaf or hearing person, as actually hearing it does.  There are things that are sui generis to each of our sensory modalities that defy any conversion.  So, that's another reason there can never be "perfection of equality."

The above being said, the equality of access to written material is really about as close as you're ever going to get to equal.  Slogans/tag lines are not meant to be perfect expressions that capture every nuance of an idea (or the limitations it entails, either).
--

Brian - Windows 10, 64-Bit, Version 21H2, Build 19044  

It is well to open one's mind but only as a preliminary to closing it . . . for the supreme act of judgment and selection.

       ~ Irving Babbitt


News – NV Access - NVDA 2022.2.3 Released #nvaccessnewsfeed

Group Integration <nvda@...>
 

NVDA 2022.2.3 Released

By Sean Budd

NV Access is pleased to announce that version 2022.2.3 of NVDA, the free screen reader for Microsoft Windows, is now available for download. This is a patch release to fix an accidental API breakage introduced in 2022.2.1, which broke compatibility with NVDA Remote.

Please note that as this is a patch release, the “What’s new” text has not been translated. For users running NVDA in languages other than English, the “What’s new” text in the Help menu will show the latest version as 2022.2. The correct current version can always be found in the “About NVDA” dialog, available from the Help menu.

Important Note:

Please note, after updating any software, it is a good idea to restart the computer. Restart by going to the Shutdown dialog, selecting “restart” and pressing ENTER. Updating software can change files which are in use. This can lead to instability and strange behaviour which is resolved by rebooting. This is the first thing to try if you do notice anything odd after updating.

Links

Close-up photograph of NVDA logo in notification area.


Re: Spelling / grammar checkers - I found one!

Pranav Lal
 

Try ProWritingAid. It used to be just as inaccessible, but things may have changed.


Re: NVDA's built-in OCR Capabilities

Pranav Lal
 

Bhavia,

 

See the Cloud Vision add-on. It describes images.

 

Pranav


Re: The Inside Story of NVDA: what a screen reader is and is not, possibilities and constraints of screen readers #NVDA_Internals

Gene
 

I am not making this comment because I think the essay should be changed.  But I think a slogan like "equal access to technology," which sounds good and which you encounter often in advocacy, is misleading and somewhat meaningless.  What is equal access?  You can't have equal access as a blind user because you are not accessing the technology in the same way, and sight provides more information, and provides it faster, when the computer information is visual.  Obviously, I'm not talking about streaming something that is only audio content.  But a sighted person can look at a screen and find something much more quickly some of the time than a blind person can.  If a blind person is already familiar with an interface or knows enough about what he/she is looking for, the person may find content as quickly as or perhaps faster than a sighted person, but there are many times when this is not the case and the blind person finds what is being looked for less efficiently.

I'm sure that, in an unfamiliar dialog, a sighted person can skim what is there and find what they are looking for more quickly, if they have an idea what they are looking for because of what they already know about how a certain type of program or dialog works, than a blind person who tabs and listens to field after field.  In a known and familiar dialog, the blind person, through the use of shortcuts, may do something as fast as or faster than a sighted person.

And that brings up something else screen readers are not.  They are not ways to enable you to use a program without putting in the time and work to learn enough about the interface to use it efficiently.

Gene
On 9/18/2022 4:10 PM, Joseph Lee wrote:




Re: Spelling / grammar checkers - I found one!

 

On Sun, Sep 18, 2022 at 02:51 PM, farhan israk wrote:
I know that Google Docs, Gmail, and other word processors have built-in grammar checkers, but they are not as perfect as third-party grammar checkers such as Grammarly.
-
And all I can say is, "So, and now what?"

You have the tools you have (and that's not dismissive and it's a fact for the sighted as well as the blind).  You will not always have access to your preferred tool for a given thing for a wide variety of reasons.  Grammarly is not God's Gift to Grammar Checking, and other tools do a mighty good job.  Pick one of them, as the Grammarly ship has sailed (and you need something in the near term - which you don't have and won't have in Grammarly).
--

Brian - Windows 10, 64-Bit, Version 21H2, Build 19044  

It is well to open one's mind but only as a preliminary to closing it . . . for the supreme act of judgment and selection.

       ~ Irving Babbitt


Re: The Inside Story of NVDA: what a screen reader is and is not, possibilities and constraints of screen readers #NVDA_Internals

 

Hi,

Ah, thanks for correcting me (edited the original post).

Cheers,

Joseph


Re: The Inside Story of NVDA: what a screen reader is and is not, possibilities and constraints of screen readers #NVDA_Internals

 

On Sun, Sep 18, 2022 at 05:10 PM, Joseph Lee wrote:
But on the flip side, it shows that, despite progress such as accessibility standards and novel approaches to provide “technological social justice” (disability emojis, for example), the world is still, for lack of a better word, hostile toward blind people.
-
Actually, I have to take issue with this, as there are better words.  Hostility implies intent on one side, and that intent is to injure, hurt, etc.  I'd say the better phrase is, "fundamentally unconcerned with the needs of," rather than, "hostile toward."

I don't think "the world at large" is actively, or even passively (for the most part), hostile toward those who are blind. They just don't think about them at all.  Accusing any individual or group of hostility will put up certain obstacles that need not occur if that accusation is avoided.  Given the size of the blind community as part of the whole of humanity, it's easy to see how this is a somewhat natural side effect.  It's a very small minority among humanity as a whole.

While the end result is what could accurately be called a "hostile digital environment," where hostile means "very unfavorable to life or growth," that hostile digital environment is not, absolutely not, the result of active hostility toward those who are blind.  Not being considered due to relative invisibility (which is what's going on, still, with the general public) is not the same as being attacked, assailed, or similar.  And that's the reason that advocacy remains critical, still.

But the shift from the time I started in computing (in the early 1980s), when accessibility was not even really a concept yet, and certainly not taught, to where we are now and the path things are on, is tectonic.  And I credit all of that shift to advocacy from the blind community, for the blind community, combined with the willingness of certain key players in the sighted world to pay attention. Accessibility principles are now taught at the university level, which is where "the future of computing" has always had its roots.  It's "baked in" far more often these days rather than being very clumsily retrofitted.

What the "general public" thinks about this is not a central issue here, and it's completely unrealistic to believe that the general public as a collective whole will ever pay any significant attention to accessibility in computing, mostly because they have nothing to do with implementing it and have no need for it themselves.  I think that far too often people fail to take what is called Hanlon's Razor to heart:  never attribute to malice that which is adequately explained by stupidity.  I would, however, substitute "ignorance" for stupidity.  And that consideration circles straight back to why I can't and don't consider the broader world to be actively hostile toward the blind.  They just don't consider you at all, for the most part.  The probability of that changing to any great extent among the general public, ever, is quite small.  Catching "the right ears" when advocating is key.

As the regular in these parts who's fully sighted, I am definitely "the odd man out" in the NVDA community.  The fact that I have proficiency with NVDA (and JAWS, to a lesser extent these days) and other assistive technology makes me the same in the sighted community.  While I don't claim to speak for all sighted people, I believe my perspectives on computing and accessibility cannot be dismissed, as they have not been formed hastily nor in an information vacuum on all sides.
--

Brian - Windows 10, 64-Bit, Version 21H2, Build 19044  

It is well to open one's mind but only as a preliminary to closing it . . . for the supreme act of judgment and selection.

       ~ Irving Babbitt


The Inside Story of NVDA: what a screen reader is and is not, possibilities and constraints of screen readers #NVDA_Internals

 
Edited

Hi all,

Before we actually get into talking about NVDA components, it is important to think about what a screen reader is and is not, as well as the overall concepts (and the big picture) behind the possibilities and constraints of screen readers. We also need to go over accessibility in relation to screen reading. Only then will the rest of the Inside Story posts make sense, because the story begins and ends with defining the reality, possibilities, and constraints of screen reading technology (for anyone wishing to submit code contributions to the NVDA project, you need to think about the overall social and cultural reality NVDA and its users are facing).

First, let’s talk about what a screen reader is not. A screen reader is not an operating system, nor the user interface for an operating system. It is not a “jack of all trades” productivity tool, nor the only way for blind people to use computers (although screen readers get lots of attention because they are one of the most familiar tools society will see). A screen reader is not your accessibility advocate, nor is it designed to bring disability justice to everyone. Most importantly, a screen reader is not the million-dollar answer to everything in life, blindness, and accessibility. Shocking? I assume so (for most of us).

The truth is, I sometimes feel that a screen reader is one or more of the “nots” I listed. Folks on this forum encounter and live with screen readers 24 hours a day, 7 days a week, 365 (or 366) days a year. And screen readers like NVDA are gaining more and more mainstream attention (do a Google search for the terms “accessibility” and “screen readers” and one of the results is an article from The Verge published not so long ago on the subject of screen reader history; the NVDA forum had an extensive talk about it a while back). We use screen readers in many places: schools, companies, accessibility testing, software development, or even as an example of progress of accessibility.

So what exactly is a screen reader? Among many Google searches, the common theme is that it is a program that helps blind people use computers by reading screen content. More specifically, a screen reader is a program that reads content the user is interacting with (or not). Sometimes the content is accessible and usable (both terms are important), while other content is not, requiring tips and tricks to make it screen reader and user friendly. I will come back to what I just said in a moment.

In a more technical definition, a screen reader is an information processor that gathers, interprets, and presents information displayed on screen, and provides ways to let blind users interact with the computer-based task at hand. Screen readers such as NVDA use facilities provided by the operating system (Microsoft Windows, in this case) and apps to gather information on the screen (and sometimes off-screen). Screen readers have rules and expectations about what the gathered information is and should be, and use sophisticated rules to interpret what they have “seen”, i.e. gathered with help from the operating system, the app in question, and other means. Based on the information gathered and subsequently interpreted, screen readers use components such as text-to-speech (TTS), braille, and other output mechanisms to present screen content. I will address exactly which components are part of NVDA in the next Inside Story.

To illustrate the overall workings of a screen reader at the highest level (or not so high level), let us say that you open Notepad and type the letter “H”. On screen, the letter “H” is shown, and NVDA says “H” if speak typed characters is on (NVDA+number row 2). If a braille display is connected, it will show the letter “H” in braille (in Unified English Braille, it is dots 6 and then 125, or in this case, it could be dots 56, 6, then 125). But how can NVDA accomplish so much magic? Here’s how:

  1. The user types the letter “H”.
  2. Windows realizes that something happened on the keyboard, so it tries to interpret what happened.
  3. Windows sees that a character was entered and checks where the system focus is.
  4. Windows sees that Notepad, a text editor, is in use, so it displays the letter “H” on the screen.
  5. At the same time, a helper called the accessibility API notices this event and sees that a character was entered.
  6. The accessibility API then tells whoever is listening (NVDA, in this case) that an input event occurred.
  7. In turn, Notepad (the app) realizes that an accessibility API is running, so it says to the accessibility API, “please raise a value change event so the screen reader can announce it to the user.”
  8. In turn, the accessibility API raises a value change event, which is then recognized by NVDA.
  9. NVDA knows that a value change event has occurred, so it tries to find out what has changed, and eventually sees that a new character was entered.
  10. NVDA then uses the configured speech synthesizer to inform the user that the letter “H” has been entered. This does not happen if the user says to NVDA, “don’t tell me typed characters.”

The steps listed above should provide just enough information to demonstrate the idea that a screen reader is, in essence, a sophisticated information processor: it gathers, interprets, and presents information.
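
To make the last few steps concrete, here is a toy Python sketch of that pipeline. It is a simplification for illustration only, not NVDA’s actual code, and all names in it are made up:

    # Toy sketch of the gather-interpret-present pipeline (steps 8 to 10).
    config = {"speak_typed_characters": True}

    def speak(text):
        print(f"speech: {text}")    # stand-in for a speech synthesizer

    def braille(text):
        print(f"braille: {text}")   # stand-in for a braille display

    def on_value_change(control_text):
        # Interpret the event: find out what was just entered.
        typed = control_text[-1]
        # Present the result through the configured outputs.
        if config["speak_typed_characters"]:
            speak(typed)
        braille(typed)

    on_value_change("H")    # the user typed "H" in Notepad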

Going back to what I said above about accessible and inaccessible (and usable and unusable) content: what I outlined above may suggest that everything is accessible if things work out between the operating system, apps, and screen readers. This ignores the fact that screen readers are, believe it or not, workarounds for the current social and cultural conditions of computing, disability, accessibility, and usability. Remember one of the “nots” of screen readers: they are not accessibility advocates for you. Why? Think about the term “assistive technology”. What does it mean in practice? It means that the computers, tablets, smartphones, and gadgets we live with are not designed with disability in mind, and screen readers came along to “fill” the gap of inaccessible and unusable computing. The history of screen readers is filled with slogans such as “equal access to technology”, “making things more productive”, “helping blind people get jobs” and others (the story of screen readers goes back decades, believe it or not).

The term “assistive technology”, at first glance, is a positive definition for folks on this forum and elsewhere: tools to help you succeed in using computers to perform tasks. But on the flip side, it shows that, despite progress such as accessibility standards and novel approaches to provide “technological social justice” (disability emojis, for example), the world is still, for lack of a better word, unconcerned with (or perhaps not educated enough about, or not fully aware of) blind people. Screen readers exist precisely because they demonstrate the lack of consideration for the disabled when designing digital technologies, and as we will see in subsequent Inside Story of NVDA posts, people like Mick Curran and others came up with workarounds upon workarounds, demonstrating the continued need for advocacy.

My statement that screen readers are workarounds should ring a bell for some of you, not just because your life experiences are filled with accessibility advocacy, but also because it touches on one of my own mantras about accessibility and usability: mindset matters. Fixing inaccessible applications so they become screen reader friendly is just a micro-level solution. The steps I listed to demonstrate parts of NVDA internals came after years of advocacy by blind people, informing Microsoft that they needed to do better (people who lived through the 1990s should remember what I’m talking about). Accessibility standards and APIs are the next level up in solving computing issues for screen reader users (by writing them, people and organizations are acknowledging the continued issues faced by disabled people thanks to larger social and cultural issues at hand). The fundamental issue, and the reason that NVDA is not the million-dollar answer to everything in life for screen reader users, is the perpetuation of ignorance on both sides of the coin: ignorance by the public (mainstream) that accessibility and usability matter in software design, and ignorance by screen reader users and disability advocacy organizations that we are a minority and must advocate continuously.

Putting all of this into the context of NVDA: just because the screen reader is free and open source does not mean equal access to technology is here at last. When you use NVDA or contribute code to the project, you are doing three things at once: showing dedication to the project, acknowledging the progress made in screen reading, and understanding the effects of social and cultural attitudes toward disability. The last one is the reality of screen reading as it stands in 2022: even if the COVID-19 pandemic made us realize how important screen readers are for us, it also brought challenges such as inaccessible and unusable videoconferencing systems, unreadable online documents, and the notion that technology can solve the world’s problems (it won’t, I think). When looking at NVDA from the big picture of accessibility and usability, it opens up possibilities and constraints: possibilities because the code is out there so people can study and research it, and constraints because the same source code demonstrates the larger social and cultural issues faced by blind people. This is perhaps the biggest lesson I want readers to understand as we meet NVDA internals: screen readers such as NVDA represent the reality, possibilities, and constraints of people needing to use alternatives due to social and cultural attitudes. And throughout the Inside Story series, I will highlight all three of them as much as possible.

Remember this: screen readers are not productivity tools, the solution to life’s problems, or technological social justice, nor can they advocate for users. As sophisticated information processors, screen readers represent the reality, possibilities, and constraints of disability in the form of technology. NVDA shows both the progress made toward accessibility and usability and, by extension, the continued need for disability advocacy. I want all of you to understand this, otherwise the rest of The Inside Story of NVDA will not make sense; not only will I take you on a journey through NVDA internals, but I will also help you contemplate a lot (for anyone wishing to contribute code to the NVDA project, you must have the mindset that you are contributing to both the possibilities and constraints of accessibility and disability).

Next: NVDA screen reader components and/or any feature you would like me to cover (comments and topic suggestions are welcome).

Cheers,

Joseph


Re: Spelling / grammar checkers - I found one!

farhan israk
 

I know that Google Docs, Gmail, and other word processors have built-in grammar checkers, but they are not as perfect as third-party grammar checkers such as Grammarly. Just copy any content from one of your documents and paste it into the Grammarly web editor; you will understand the difference. You can access Grammarly suggestions after pressing F6. Unfortunately, it is not possible to insert them using a screen reader.


On Fri, Sep 16, 2022 at 12:18 AM Brian Vogel <britechguy@...> wrote:
On Thu, Sep 15, 2022 at 01:51 PM, farhan israk wrote:
However, it does not work in google docs and gmail.

Google Docs has its own spelling and grammar checker, so a third-party one is not strictly necessary.  Look under the Tools menu.

Gmail has a spell check in the webmail interface, available via the More Options menu button; after it has run once, it creates a floating button for rechecking that is screen reader accessible.

There is no need to reinvent or duplicate the wheel, and virtually every word processor (including Google Docs) has a spelling and grammar checker.
--

Brian - Windows 10, 64-Bit, Version 21H2, Build 19044  

It is well to open one's mind but only as a preliminary to closing it . . . for the supreme act of judgment and selection.

       ~ Irving Babbitt


Re: Announcing Office Desk: a new add-on to improve support for Microsoft 365 apps

udit pandey
 

Yes, a very good thing.


On Sun, 18 Sept 2022 at 18:18, Chris Smart <ve3rwj@...> wrote:

Thank you so much Joseph. This is most helpful and quite exciting, for those of us who use the MS Office applications regularly.

 



--
hope that you all are safe with your family,
 udit
follow me on instagram: udit@pandey123
mail me on gmail at udit52805@...
or outlook me at uditpandey6474@outlook
we should never speak bad, we should never see bad, and we should never listen to bad


Re: order of precedence?

Brian's Mail list account
 

Yes, I get similar problems using an HDMI cable to the TV, where it tries to play all sound to and from the TV. I did try the overrides etc., but often after an update it all goes wrong again. In the end I used a hardware solution: I bought a normal display adaptor socket to HDMI converter and disconnected the audio. Voilà, sound where I wanted it to go.
Brian

--
bglists@...
Sent via blueyonder.(Virgin media)
Please address personal E-mail to:-
briang1@..., putting 'Brian Gaff'
in the display name field.

----- Original Message -----
From: "Curtis Delzer" <curtis@...>
To: <nvda@nvda.groups.io>
Sent: Saturday, September 17, 2022 11:10 PM
Subject: [nvda] order of precedence?


Hi all! The subject of this message may be confusing but . . .

Suppose you tell a synthesizer in NVDA to use your headphones, and then you go to the Windows control panel and tell Windows to use (in my case, with Virtual Audio Cable) "line 1."

WHY does NVDA then go to the Microsoft Sound Mapper? I told NVDA to use my headphones, and I assumed that it would ignore any other instruction, but . . .

For a while I lost speech, because Microsoft with their "trying to help" attitude assumed they knew best; then their munificence allowed me to reconnect something new, make the MS mapper find it, and voilà, I had speech again, and have it now.

In essence, I told the control panel to go to line 1, and NVDA was instructed to use the MS Sound Mapper, which I did not authorize, so, bingo, no speech. :)

I know how to avoid this, but it is obviously a situation where Microsoft knows best and overrides the setting in NVDA's interface to a synthesizer.

The way to avoid it is to run a small program shipped with Virtual Audio Cable called "audio repeater," and tell the audio repeater to play from where you directed the sound to the place you wish to listen, in my case my headphones, which are a sound card in their own right. :)

So anyone else who plays with sound can take heed of this potential issue, which is vital if you have no safety net to fall into: Microsoft knows best, not the developers of NVDA. I trusted NVDA to speak from where I told it, even saved the settings, but that was not the case.
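
For anyone who wants to check what NVDA itself has saved, the NVDA Python console (NVDA+control+z) can show the stored settings. A quick sketch; the exact key names are an assumption and can differ between NVDA releases:

    # In the NVDA Python console. "outputDevice" is where 2022-era NVDA
    # keeps the synthesizer output device; treat the key names as an
    # assumption if you are on a different release.
    import config
    print(config.conf["speech"]["synth"])
    print(config.conf["speech"]["outputDevice"])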


--
Curtis Delzer
H.S.
K6VFO
Rialto, CA
curtis@...





Re: The Inside Story of NVDA: introduction and overall goals and mindset #NVDA_Internals

Brian's Mail list account
 

One thing I would say here is that with source code, if you want to make any sense of it at all, you need to have all punctuation on as you read it, as the syntax of code is very strict most of the time these days. Back in my days of being young and foolish, BASIC had line numbers and all kinds of fixed parameters, but gradually, as more was needed from code, especially when it was a stepping stone toward compiling machine-language native code, the concepts and types of data expanded to suit the environment.
Brian

--
bglists@...
Sent via blueyonder.(Virgin media)
Please address personal E-mail to:-
briang1@..., putting 'Brian Gaff'
in the display name field.

----- Original Message -----
From: "Joseph Lee" <joseph.lee22590@...>
To: <nvda@nvda.groups.io>
Sent: Saturday, September 17, 2022 10:09 PM
Subject: [nvda] The Inside Story of NVDA: introduction and overall goals and mindset #nvdaint


Hi all,

First, thank you to Brian V for giving me permission to do something I've
been dreaming about for the last few years: giving you a tour of NVDA screen
reader internals. I have long wished I could take some time to tell you how
a screen reader works from the inside, as well as add a much-needed body of
knowledge to screen reader research.

When you do a forum archive search for terms such as "internals" and
"development", you will come across posts from yours truly and others
talking about writing a series of articles on screen reader internals. To
quote a few:

* Proposal: a series of posts/articles on NVDA internals (groups.io) <https://nvda.groups.io/g/nvda/message/91943?p=%2C%2C%2C20%2C0%2C0%2C0%3A%3ACreated%2C%2Cinternals%2C20%2C2%2C0%2C88497918>
* Re: Please update Enhanced Phonetic Reading App (groups.io) <https://nvda.groups.io/g/nvda/message/97563?p=%2C%2C%2C20%2C0%2C0%2C0%3A%3ACreated%2C%2Cinternals%2C20%2C2%2C0%2C92512544>
* Re: control names (groups.io) <https://nvda.groups.io/g/nvda/message/92420?p=%2C%2C%2C20%2C0%2C0%2C0%3A%3ACreated%2C%2Cinternals%2C20%2C2%2C0%2C88777986>

And we also had messages that became something outside of NVDA but
nevertheless touched parts of NVDA such as:

* nvda@nvda.groups.io | Tutorial On Using NVDA Single Letter Navigation to navigate faster than even a sighted person <https://nvda.groups.io/g/nvda/topic/87700584#90855>
* nvda@nvda.groups.io | Article on Screen Reader History (including NVDA) <https://nvda.groups.io/g/nvda/topic/92394151#97806>



I'll return to the history of NVDA in a future post, this time using artifacts that are not really screen reader related but are important for explaining choices made over the years.

The reason for writing The Inside Story of NVDA series is to explain how a screen reader works, specifically how NVDA works behind the scenes. Part of this series stems from the fact that NVDA, despite being an open-source screen reader, is not documented well at the source code level. This is perhaps one of the biggest sources of difficulty for people wishing to contribute code to make the screen reader better and more valuable: better because of the notion of competition and the need to support different scenarios, more valuable because of the perceived notion that more features mean more value and power (I can tell you right now that this is false; you'll notice that I can and will sometimes become philosophical). As Brian V and others have observed (including in a thread on accessible drag and drop <https://nvda.groups.io/g/nvda/topic/93446323#99221>), screen readers are filled with workarounds for workarounds for workarounds for inaccessible computing; I'm not joking. At least that's the reality of screen reader development, and I hope this series sheds some light on that reality, first by talking about internals at a high level, then by taking you on a journey of how such and such a feature actually comes to life (I cannot go into line-by-line commentary on NVDA source code as this is not a development list, but I'll do my best to at least illustrate what's going on in a way that I hope users can appreciate).

The second reason for posting NVDA internals is to serve as a healing process, both for me and for the entire NVDA community. It wasn't until taking graduate school classes that I realized I was so focused on NVDA development that I reached a point of burnout. This manifested this year when I announced my month-long break <https://nvda.groups.io/g/nvda/topic/91379776#96670>, as I needed some time to heal and reorganize my thoughts. Lack of documentation in NVDA source code did contribute to my stress, but what made it worse was my realization that users don't know a lot about how their favorite screen reader works behind the scenes, which is understandable. Making matters urgent was the fact that I am a graduate student standing at a crossroads, knowing that my involvement with the NVDA community is coming to a close, and I felt this was the golden opportunity to pass on what I know about NVDA and screen reading in general to the next group of community members before I move on. Since writing seems to be one of the ways to shed stress and heal, I felt that writing about NVDA internals would help me heal, and also lift some of the veil off the NVDA screen reader for the community so people can better understand what's going on and suggest changes for the better.

The third reason for writing the Inside Story series is to teach you the art and science of technical communication. I'm personally glad that we have people such as Quinn who are the embodiment of technical communication: using multimedia presentations, storytelling, and other innovative ways to discuss technical concepts, including screen reading, in a way that can be understood by many. Technical communication (or technical writing in general) is an artistic and scientific process: artistic because the author needs to look at who the audience is and come up with ways to explain difficult concepts in a friendly way; scientific because one must show rigor in research and understanding (including feedback from the audience), otherwise explanations fall apart. I hope that others can contribute their own internals posts to enrich the community while practicing the art and science of technical communication.

The fourth reason for the Inside Story series is to leave future researchers with something to think about. Studying assistive technology at the source code level and understanding its internals was a dream for many. It wasn't until recently that people began to appreciate the research value of software source code. But because it takes time to understand source code (especially for folks unfamiliar with the primary audience for which the source code is designed: computer users), more so for assistive tech source code, I figured this is the perfect time to bring the researcher in me to life and leave something behind for future researchers to consider. A story or two about NVDA internals may provide some hints to researchers on where to look next.

Finally, I dedicate the Inside Story series to the countless people who have helped me on my journey as a member of the NVDA community. A special thank you to the NV Access folks who have been my mentors and coworkers on the journey toward equal access to technology. I hope the upcoming Inside Story series can serve as a way to showcase my learning over the last ten years.

As The Inside Story of NVDA will talk about NVDA screen reader internals from the user's point of view, I'll do my best to minimize the use of jargon (or, if I must use it, I'll define terms along the way). Even though storytelling is the primary mode of delivery, I expect that readers have some knowledge of computers, with a special bonus for people familiar with accessibility concepts and the technology surrounding them (such as accessibility standards and APIs), familiarity with programming, and willingness to dive a bit deeper into the technical side of things. My primary audience member is an NVDA user who has been using the screen reader for some time (at least one year) and is curious about how features work, both at the high level (how it appears to you) and at a slightly lower level (quite close to the source code) but not up to the point of coding the screen reader itself. Sometimes I will show you parts of the source code that are essential to the inner workings of a feature, with explanations afterwards. And since NVDA is continuously being developed, what I say and what is actually in the source code may differ. Lastly, this is not a development list, so the level of explanation will not be line-by-line commentary, but more a story or two of how things come to life (for an example, see my explanation of how browse mode is loaded <https://nvda.groups.io/g/nvda/message/97305?p=%2C%2C%2C20%2C0%2C0%2C0%3A%3ACreated%2C%2CNVDA+says%2C20%2C2%2C20%2C92274149>).

The first story in the Inside Story series will deal with a high-level overview of the components of a screen reader. This is in response to posts like the one in August, where we had a discussion about math expression pronunciation <https://nvda.groups.io/g/nvda/topic/93127550#98750> and it was observed that the issue ultimately came down to speech synthesizers, not the screen reader itself. I hope the upcoming post will clarify (once and for all) that screen readers are not text-to-speech engines and vice versa. As always, contact me directly if you have things you would like me to cover throughout the Inside Story series.

Thanks.

Cheers,

Joseph


Re: could nvda announce entries in the Windows 11 emoji panel?

 

Hi,

In addition to the usual version info, can you tell me whether this happens with or without the Windows App Essentials add-on installed? If Windows App Essentials is not running, it might be due to the UIA event registration setting (if it is set to "selective", or the checkbox is checked, depending on the NVDA release, NVDA will not handle events from places other than the focused control, which affects the Windows 10 emoji panel: NVDA may appear to do nothing when in fact emojis are being selected).
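
If you want to check that setting yourself, the NVDA Python console (NVDA+control+z) can help. A quick sketch; the key name under the UIA section has changed between releases, so this tries both spellings I am aware of (treat them as assumptions):

    # In the NVDA Python console. The UIA event registration key has
    # gone by different names across releases; print whichever exists.
    import config
    for key in ("eventRegistration", "selectiveEventRegistration"):
        try:
            print(key, "=", config.conf["UIA"][key])
        except KeyError:
            pass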

Cheers,

Joseph


Re: Announcing Office Desk: a new add-on to improve support for Microsoft 365 apps

Chris Smart
 

Thank you so much Joseph. This is most helpful and quite exciting, for those of us who use the MS Office applications regularly.